Prompting in Deep Learning Speech Recognition

Other authors

Universitat Politècnica de Catalunya. Departament de Teoria del Senyal i Comunicacions

Hernando Pericás, Francisco Javier

Date of publication

2024-10-28

Abstract

Adapting large-scale pre-trained automatic speech recognition (ASR) models, such as OpenAI's Whisper, to specific tasks or languages remains a challenging problem due to the substantial computational resources required for traditional fine-tuning methods. This limitation is particularly significant in real-world scenarios where resources are constrained, and efficient adaptation is essential for handling diverse languages and domains. To address this issue, this thesis explores two parameter-efficient fine-tuning (PEFT) techniques: soft prompting and Low-Rank Adaptation (LoRA). Soft prompting leverages trainable prompt embeddings to adapt the model with minimal parameter updates, while LoRA applies low-rank transformations to the model's weight matrices, reducing the number of trainable parameters. Through experiments on the 3CatParla dataset for Catalan speech recognition, we demonstrate that these techniques achieve competitive performance with significantly lower computational demands. LoRA, in particular, shows strong results in terms of efficiency and accuracy, while soft prompting exhibits performance limitations with larger models. This work opens pathways for further research into hybrid methods and evaluation across more diverse datasets, contributing to the field of efficient ASR adaptation for low-resource environments.
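The two PEFT techniques named in the abstract can be sketched in a few lines. This is a minimal illustration, not the thesis implementation: the model dimension, LoRA rank, and prompt length below are assumed values chosen only to show the mechanics, and the frozen weight matrix stands in for any Whisper projection layer.

```python
import numpy as np

# Illustrative sizes (assumptions, not values from the thesis).
d_model, rank, n_prompt_tokens = 768, 8, 16
rng = np.random.default_rng(0)

# Frozen pre-trained weight matrix: never updated during adaptation.
W = rng.standard_normal((d_model, d_model))

# LoRA: learn a low-rank update W + B @ A instead of modifying W itself.
A = rng.standard_normal((rank, d_model)) * 0.01  # trainable
B = np.zeros((d_model, rank))                    # trainable; zero-init so the
                                                 # adapted layer equals W at start

def lora_forward(x):
    """Project x through the adapted weights (W + B @ A) without forming them."""
    return x @ W.T + (x @ A.T) @ B.T

# Soft prompting: prepend trainable embeddings to the frozen input sequence.
soft_prompt = rng.standard_normal((n_prompt_tokens, d_model)) * 0.01  # trainable

def prepend_soft_prompt(embeddings):
    """Concatenate the learned prompt tokens in front of the input embeddings."""
    return np.concatenate([soft_prompt, embeddings], axis=0)

# Parameter counts show why both methods are "parameter-efficient":
full_params = W.size                 # full fine-tuning of this one matrix
lora_params = A.size + B.size        # rank * (d_in + d_out)
prompt_params = soft_prompt.size     # n_prompt_tokens * d_model
```

Because `B` is zero-initialized, the adapted layer reproduces the pre-trained model exactly before training begins, a common LoRA convention that keeps early training stable.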

Document type

Master thesis

Language

English

Published by

Universitat Politècnica de Catalunya

Recommended citation

This citation has been generated automatically.

Rights

Distribution of the work is authorized under a Creative Commons or similar 'Attribution-NonCommercial-NoDerivatives' license

Open Access
