Prompting in Deep Learning Speech Recognition

Other authors

Universitat Politècnica de Catalunya. Departament de Teoria del Senyal i Comunicacions

Hernando Pericás, Francisco Javier

Publication date

2024-10-28

Abstract

Adapting large-scale pre-trained automatic speech recognition (ASR) models, such as OpenAI's Whisper, to specific tasks or languages remains a challenging problem due to the substantial computational resources required for traditional fine-tuning methods. This limitation is particularly significant in real-world scenarios where resources are constrained, and efficient adaptation is essential for handling diverse languages and domains. To address this issue, this thesis explores two parameter-efficient fine-tuning (PEFT) techniques: soft prompting and Low-Rank Adaptation (LoRA). Soft prompting leverages trainable prompt embeddings to adapt the model with minimal parameter updates, while LoRA applies low-rank transformations to the model's weight matrices, reducing the number of trainable parameters. Through experiments on the 3CatParla dataset for Catalan speech recognition, we demonstrate that these techniques achieve competitive performance with significantly lower computational demands. LoRA, in particular, shows strong results in terms of efficiency and accuracy, while soft prompting exhibits performance limitations with larger models. This work opens pathways for further research into hybrid methods and evaluation across more diverse datasets, contributing to the field of efficient ASR adaptation for low-resource environments.
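The abstract describes LoRA as applying low-rank transformations to the model's weight matrices so that only a small number of parameters are trained. The thesis itself is not reproduced here, so the following is only an illustrative sketch of that general idea in PyTorch (the class name, rank `r`, and scaling `alpha` are hypothetical choices, not details taken from the work): the pre-trained weight is frozen, and a trainable update `B @ A` of rank `r` is added on top.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative sketch (not the thesis code): a linear layer whose
    frozen pre-trained weight W is augmented with a trainable low-rank
    update B @ A, scaled by alpha / r."""

    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: int = 16):
        super().__init__()
        # Stand-in for a pre-trained projection; its weights are frozen.
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)
        self.base.bias.requires_grad_(False)
        # Low-rank factors: only these (r*in + out*r parameters) are trained.
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, r))  # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base output plus the scaled low-rank correction.
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale
```

Because `B` starts at zero, the adapted layer initially behaves exactly like the frozen pre-trained layer, and only the small factors `A` and `B` accumulate task-specific updates during fine-tuning.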

Document Type

Master thesis

Language

English

Publisher

Universitat Politècnica de Catalunya


Rights

Distribution of the work is authorized under a Creative Commons or similar 'Attribution-NonCommercial-NoDerivatives' license.

Open Access
