Other authors

Universitat Politècnica de Catalunya. Departament de Ciències de la Computació

Sallés Rius, Anna

Publication date

2025-10-20



Abstract

This Master's Thesis optimizes large language models (LLMs) for multiple-choice question answering (MCQA) to evaluate employee performance from spoken transcripts in personalized training platforms. Current LLMs achieve only 63% accuracy in dynamic assessments due to biases, reasoning failures, and inefficiencies. We develop a systematic framework balancing precision, cost, and execution time through iterative evaluation refinement, corpus preparation, baseline selection, and phased experiments, including single-factor screening (OFAT), multi-factor interactions, and parameter-efficient fine-tuning (PEFT). Key factors assessed include model scale, in-context learning, chain-of-thought (CoT), chain-of-density (CoD), self-correction, and agentic ensembles. Contributions encompass a replicable optimization pipeline and strategies to mitigate biases like positional and literal interpretation errors. Results show improvements from 63% to 80% accuracy and enhanced F1-scores, enabling ethical, scalable AI-driven assessments for enterprise individualized learning.

Document type

Master's thesis

Language

English

Published by

Universitat Politècnica de Catalunya

Recommended citation

This citation has been generated automatically.

Rights

Open Access
