MENTOR: Multilingual tExt detectioN TOward leaRning by analogy

Authors: Lin, Hsin-Ju; Chung, Tsu-Chun; Hsiao, Ching-Chun; Chen, Pin-Yu; Chiu, Wei-Chen; Huang, Ching-Chun
Year of publication: 2024
Subject:
Source: 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Detroit, MI, USA, 2023, pp. 3248-3255
Document type: Working Paper
DOI: 10.1109/IROS55552.2023.10342419
Description: Text detection is frequently used by vision-based mobile robots when they need to interpret text in their surroundings to perform a given task. For instance, delivery robots in multilingual cities must be capable of multilingual text detection so that they can read traffic signs and road markings. Moreover, the target languages change from region to region, implying the need to efficiently re-train the models to recognize novel languages. However, collecting and labeling training data for novel languages is cumbersome, and the effort required to re-train an existing text detector is considerable. Even worse, such a routine would repeat whenever a novel language appears. This motivates us to propose a new problem setting for tackling the aforementioned challenges more efficiently: "We ask for a generalizable multilingual text detection framework to detect and identify both seen and unseen language regions inside scene images, without requiring supervised training data for unseen languages or model re-training." To this end, we propose "MENTOR", the first work to realize a learning strategy between zero-shot learning and few-shot learning for multilingual scene text detection.
Comment: 8 pages, 4 figures, published to IROS 2023
Database: arXiv