dc.contributor.author
De Rus Arance, Juan Antonio
dc.contributor.author
Montagud, Mario
dc.contributor.author
Cobos, Máximo
dc.date.accessioned
2023-03-01T09:36:35Z
dc.date.accessioned
2024-09-20T08:13:33Z
dc.date.available
2023-03-01T09:36:35Z
dc.date.available
2024-09-20T08:13:33Z
dc.date.issued
2022-08-05
dc.identifier.uri
http://hdl.handle.net/2072/531577
dc.description.abstract
This paper contains the research proposal of Juan Antonio De Rus that was presented at the MMSys 2022 doctoral symposium.
The use of virtual reality (VR) is growing every year. With the normalization of remote work, it is to be expected that the use of immersive virtual environments to support tasks such as online meetings and education will grow even further. VR environments typically include multimodal content formats (synthetic content, video, audio, text) and even multi-sensory stimuli to provide an enriched user experience. In this context, Affective Computing (AC) techniques assisted by Artificial Intelligence (AI) become a powerful means to determine the user's perceived Quality of Experience (QoE). In the field of AC, we investigate a variety of tools to obtain accurate emotional analysis by applying AI techniques to physiological data. In this doctoral study, we have formulated a set of open research questions and objectives through which we plan to generate valuable contributions and knowledge in the fields of AC, spatial audio, and multimodal interactive virtual environments. One of these objectives is the creation of tools to automatically evaluate the QoE, even in real time, which can provide valuable benefits to both service providers and consumers. For data acquisition, we use sensors of different quality grades to study the scalability, reliability, and replicability of our solutions, as clinical-grade sensors are not always within the reach of the average user.
dc.format.extent
5 p.
dc.relation.ispartof
ACM Multimedia Systems 2022 (MMSys'22)
dc.rights
© De Rus, Montagud and Cobos, 2022. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in ACM Multimedia Systems Conference (MMSys '22), http://dx.doi.org/10.1145/3524273.3533930
dc.source
RECERCAT (Dipòsit de la Recerca de Catalunya)
dc.subject.other
Virtual & Immersive Media Technologies
dc.title
AI-assisted Affective Computing and Spatial Audio for Interactive Multimodal Virtual Environments
dc.type
info:eu-repo/semantics/article
dc.type
info:eu-repo/semantics/draft
dc.identifier.doi
https://doi.org/10.1145/3524273.3533930
dc.rights.accessLevel
info:eu-repo/semantics/openAccess