dc.contributor
Universitat Politècnica de Catalunya. Departament de Matemàtiques
dc.contributor
Institut de Robòtica i Informàtica Industrial
dc.contributor
Alberich Carramiñana, Maria
dc.contributor
Dimiccoli, Mariella
dc.contributor.author
Tura Vecino, Biel
dc.identifier
https://hdl.handle.net/2117/346221
dc.description.abstract
Recent research has shown that, in particular domains, unsupervised learning algorithms achieve performance on par with, or even better than, fully supervised algorithms, avoiding the need for human-labelled data. The division of a video into events has been an active research topic for unsupervised algorithms, which exploit relations within the video itself to perform temporal segmentation. In particular, self-supervised learning has proven very useful for learning video representations without any annotations. This thesis proposes a self-supervised method for learning event representations of unconstrained complex activity videos: sequences of images with high temporal resolution and very small visual variance between events, yet with a clear semantic differentiation for humans. The assumption underlying the proposed model is that a video can be represented by a graph that encodes both semantic and temporal similarity between events. Our method follows two steps: first, meaningful initial features are extracted by a spatio-temporal backbone neural network trained on a self-supervised contrastive task. Then, starting from this initial embedding, low-dimensional graph-based event representation features are learned iteratively, jointly with the underlying graph structure. The main contribution of this work is a function, parameterized by a graph neural network, that learns graph-based event feature representations by exploiting semantic and temporal relatedness in a fully end-to-end, self-supervised trainable approach. Experiments were performed on the challenging Breakfast Action Dataset, and we show that the proposed approach leads to an effective low-dimensional feature representation of the input data, suitable for the downstream task of event segmentation.
Moreover, we show that the presented method, followed by a downstream clustering task, achieves metrics on par with the state of the art on video segmentation of complex activity videos.
dc.format
application/pdf
dc.publisher
Universitat Politècnica de Catalunya
dc.rights
http://creativecommons.org/licenses/by-nc-nd/3.0/es/
dc.rights
Restricted access - confidentiality agreement
dc.subject
Àrees temàtiques de la UPC::Informàtica::Intel·ligència artificial
dc.subject
Àrees temàtiques de la UPC::Matemàtiques i estadística
dc.subject
Artificial intelligence
dc.subject
Representation learning
dc.subject
Graph embedding
dc.subject
Video segmentation
dc.subject
Event representations
dc.subject
Intel·ligència artificial
dc.subject
Classificació AMS::68 Computer science::68T Artificial intelligence
dc.title
Learning graph-based event representations for unconstrained video segmentation