Learning Multimodal Affinities for Textual Editing in Images
Author: | Omri Ben-Eliezer, Hadar Averbuch-Elor, Or Perel, Oron Anschel, Shai Mazor |
Language: | English |
Year of publication: | 2021 |
Subject: |
FOS: Computer and information sciences; ACM CCS: I.7.1, I.5.3, I.5.4, I.2.6; Computer Science - Computer Vision and Pattern Recognition (cs.CV); Computer Vision and Pattern Recognition; Computer Graphics and Computer-Aided Design; Image editing; Semantics; Information retrieval; Cluster analysis; Affinities; Pairwise comparison; Artificial intelligence & image processing; Software engineering |
Description: | As cameras become part of our daily routine, images of documents are increasingly abundant and prevalent. Unlike natural images that capture physical objects, document images contain a significant amount of text with critical semantics and complicated layouts. In this work, we devise a generic unsupervised technique to learn multimodal affinities between textual entities in a document image, considering their visual style, the content of their underlying text, and their geometric context within the image. We then use these learned affinities to automatically cluster the textual entities in the image into different semantic groups. The core of our approach is a deep optimization scheme, dedicated to the user-provided image, that detects and leverages reliable pairwise connections in the multimodal representation of the textual elements in order to properly learn the affinities. We show that our technique operates on highly varying images spanning a wide range of documents, and we demonstrate its applicability to various editing operations that manipulate the content, appearance, and geometry of the image. ACM Transactions on Graphics 2021; to be presented at SIGGRAPH 2021. |
Database: | OpenAIRE |
External link: |
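The pipeline the abstract describes (pairwise affinities over multimodal features, then grouping entities by reliable connections) can be illustrated with a minimal sketch. This is not the paper's deep optimization scheme: it assumes each textual entity is already represented by a concatenated feature vector (visual style, text embedding, geometry), uses a plain Gaussian kernel for the affinity, and groups entities by union-find over connections above a threshold. The names `affinity` and `cluster_entities` and the threshold `tau` are illustrative, not from the paper.

```python
import math

def affinity(a, b, sigma=1.0):
    # Gaussian kernel over two concatenated multimodal feature vectors
    # (a stand-in for the learned affinity in the paper).
    d2 = sum((x - y) ** 2 for x, y in zip(a, b))
    return math.exp(-d2 / (2 * sigma ** 2))

def cluster_entities(features, tau=0.5):
    """Group entities whose pairwise affinity exceeds tau,
    using union-find over the 'reliable' connections."""
    n = len(features)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if affinity(features[i], features[j]) > tau:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj

    # Relabel roots to consecutive cluster ids.
    remap = {}
    return [remap.setdefault(find(i), len(remap)) for i in range(n)]

# Toy usage: two nearby entities and two distant ones fall
# into two clusters.
labels = cluster_entities([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
print(labels)  # → [0, 0, 1, 1]
```

In the paper the affinities themselves are learned per image; here the kernel is fixed, which is the main simplification of this sketch.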