Natural language models for learning assessment from unstructured data

Author

Sallés Rius, Anna

Other authors

Universitat Politècnica de Catalunya. Departament de Ciències de la Computació

Publication date

2025-10-20



Abstract

This Master's thesis optimizes large language models (LLMs) for multiple-choice question answering (MCQA), used to evaluate employee performance from spoken transcripts in personalized training platforms. In the baseline setting, LLMs achieve only 63% accuracy on dynamic assessments due to biases, reasoning failures, and inefficiencies. We develop a systematic framework that balances precision, cost, and execution time through iterative refinement of the evaluation, corpus preparation, baseline selection, and phased experiments, including one-factor-at-a-time (OFAT) screening, multi-factor interaction studies, and parameter-efficient fine-tuning (PEFT). Key factors assessed include model scale, in-context learning, chain-of-thought (CoT), chain-of-density (CoD), self-correction, and agentic ensembles. Contributions include a replicable optimization pipeline and strategies to mitigate biases such as positional and literal-interpretation errors. Results show accuracy improvements from 63% to 80% and higher F1-scores, enabling ethical, scalable AI-driven assessment for individualized enterprise learning.

Document Type

Master thesis

Language

English

Publisher

Universitat Politècnica de Catalunya


Rights

Open Access
