Textual visual semantic dataset for text spotting

dc.contributor
Universitat Politècnica de Catalunya. Doctorat en Intel·ligència Artificial
dc.contributor
Institut de Robòtica i Informàtica Industrial
dc.contributor
Universitat Politècnica de Catalunya. Departament de Ciències de la Computació
dc.contributor
Universitat Politècnica de Catalunya. ROBiri - Grup de Robòtica de l'IRI
dc.contributor
Universitat Politècnica de Catalunya. GPLN - Grup de Processament del Llenguatge Natural
dc.contributor.author
Sabir, Ahmed
dc.contributor.author
Moreno-Noguer, Francesc
dc.contributor.author
Padró, Lluís
dc.date.issued
2020
dc.identifier
Sabir, A.; Moreno-Noguer, F.; Padró, L. Textual visual semantic dataset for text spotting. In: IEEE Conference on Computer Vision and Pattern Recognition. "2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops: Virtual, 14-19 June 2020: proceedings". Institute of Electrical and Electronics Engineers (IEEE), 2020, p. 2306-2315. ISBN 978-1-7281-9360-1. DOI 10.1109/CVPRW50498.2020.00279.
dc.identifier
978-1-7281-9360-1
dc.identifier
https://arxiv.org/pdf/2004.10349.pdf
dc.identifier
https://hdl.handle.net/2117/329587
dc.identifier
10.1109/CVPRW50498.2020.00279
dc.description.abstract
Text Spotting in the wild consists of detecting and recognizing text appearing in images (e.g. signboards, traffic signals, or brands on clothing or objects). This is a challenging problem due to the complexity of the contexts where text appears (uneven backgrounds, shading, occlusions, perspective distortions, etc.). Only a few approaches try to exploit the relation between text and its surrounding environment to better recognize text in the scene. In this paper, we propose a visual context dataset for Text Spotting in the wild, where the publicly available COCO-text dataset [40] has been extended with information about the scene (such as objects and places appearing in the image) to enable researchers to include semantic relations between text and scene in their Text Spotting systems, and to offer a common framework for such approaches. For each text in an image, we extract three kinds of context information: objects in the scene, an image location label, and a textual image description (caption). We use state-of-the-art, out-of-the-box tools to extract this additional information. Since this information has textual form, it can be used to incorporate text similarity or semantic relation methods into Text Spotting systems, either as a post-processing step or in an end-to-end training strategy.
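A minimal sketch of the per-text visual context record the abstract describes: each text instance is paired with scene objects, a place label, and a caption produced by off-the-shelf tools. The field names and helper below are illustrative assumptions, not the dataset's actual schema.

```python
# Hypothetical sketch (not the dataset's real schema): bundle a spotted
# text with the three kinds of visual context named in the abstract.
def build_context_record(text, objects, place, caption):
    """Pair a text instance with its extracted visual context."""
    return {
        "text": text,        # the text appearing in the image
        "objects": objects,  # e.g. labels from an object classifier
        "place": place,      # e.g. label from a scene classifier
        "caption": caption,  # e.g. output of an image captioning model
    }

record = build_context_record(
    text="stop",
    objects=["street sign", "traffic light"],
    place="crosswalk",
    caption="a red stop sign on a city street",
)

# Because all context is textual, relating the candidate text to its
# scene reduces to text similarity, e.g. a simple overlap check here.
print(record["text"] in record["caption"].split())
```

In this form the context can feed either a post-processing re-ranker or an end-to-end training pipeline, since every field is plain text.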
dc.description.abstract
This work is supported by the KASP Scholarship Program and by the Spanish government under projects HuMoUR TIN2017-90086-R and María de Maeztu Seal of Excellence MDM-2016-0656.
dc.description.abstract
Peer Reviewed
dc.description.abstract
Postprint (author's final draft)
dc.format
10 p.
dc.format
application/pdf
dc.language
eng
dc.publisher
Institute of Electrical and Electronics Engineers (IEEE)
dc.relation
https://ieeexplore.ieee.org/abstract/document/9150617
dc.relation
info:eu-repo/grantAgreement/MINECO/MDM-2016-0656
dc.rights
Open Access
dc.subject
Àrees temàtiques de la UPC::Informàtica::Intel·ligència artificial::Aprenentatge automàtic
dc.subject
Machine learning
dc.subject
Data mining
dc.subject
Image analysis
dc.subject
Text spotting
dc.subject
Deep learning
dc.subject
Dataset
dc.subject
Aprenentatge automàtic
dc.subject
Mineria de dades
dc.subject
Imatges -- Anàlisi
dc.title
Textual visual semantic dataset for text spotting
dc.type
Conference report

