Comparison of Audio Encoders for Audio-Text Contrastive Learning Representations

Publication date

2026-02-06T13:11:55Z

2025



Abstract

Master's thesis: Master in Sound and Music Computing


Supervisor: Pablo Alonso Jiménez


Co-Supervisor: Dmitry Bogdanov


This project investigates contrastive learning techniques for aligning audio and text representations in the music domain, focusing on scenarios with limited data and computational resources. We provide a comprehensive review of existing methods relevant to music-text contrastive learning. Two audio encoders, HTSAT and MAEST, initialized with pretrained weights, are integrated with a frozen RoBERTa text encoder within the LAION-AI CLAP framework and fine-tuned on the MTG-Jamendo dataset. Model performance is evaluated on three tasks: zero-shot genre classification on the GTZAN dataset, multi-label tag classification on the MagnaTagATune dataset, and text-to-music retrieval on the Song Describer dataset. Results show that HTSAT generalizes better in low-data settings, while MAEST tends to overfit, highlighting the impact of encoder complexity in resource-constrained environments. Attempts to mitigate MAEST's overfitting with weight decay and learning rate decay were unsuccessful. Additionally, the study underscores the critical role of data volume and batch size in contrastive learning effectiveness. The source code for this work is publicly available at https://github.com/SerX610/smc-master-thesis
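The alignment objective described above can be illustrated with a minimal sketch of the symmetric contrastive (CLAP-style InfoNCE) loss. All names, shapes, and the temperature value here are illustrative assumptions, not taken from the thesis code; NumPy stands in for the actual training framework.

```python
import numpy as np

def contrastive_loss(audio_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired audio/text embeddings.

    Hypothetical sketch: matching audio/text pairs share a row index, so the
    targets lie on the diagonal of the similarity matrix.
    """
    # L2-normalize so dot products become cosine similarities.
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = a @ t.T / temperature       # (batch, batch) similarity matrix
    labels = np.arange(len(a))           # matching pairs on the diagonal

    def xent(l):
        # Numerically stable log-softmax cross-entropy against the diagonal.
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # Average the audio->text and text->audio directions, as in CLIP/CLAP.
    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(0)
audio = rng.normal(size=(8, 512))   # hypothetical audio-encoder outputs
text = rng.normal(size=(8, 512))    # hypothetical text-encoder outputs
loss = contrastive_loss(audio, text)
```

Because every other item in the batch serves as a negative example, larger batches yield more negatives per step, which is one reason the abstract flags batch size as critical to contrastive learning effectiveness.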

Document type

Master's thesis

Language

English

Subjects and keywords

Computer music

Rights

Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International

https://creativecommons.org/licenses/by-nc-nd/4.0/
