A Dual-Path Model With Adaptive Attention For Vehicle Re-Identification
Author: | Rama Chellappa, Jun-Cheng Chen, Amit Kumar, Sai Saketh Rambhatla, Neehar Peri, Pirazh Khorramshahi |
---|---|
Language: | English |
Year of publication: | 2019 |
Subject: | FOS: Computer and information sciences; Computer Science - Computer Vision and Pattern Recognition (cs.CV); Orientation (computer vision); Machine learning; Discriminative model; Focusing attention; Artificial intelligence |
Source: | ICCV |
Description: | In recent years, attention models have been extensively used for person and vehicle re-identification. Most re-identification methods are designed to focus attention on key-point locations. However, depending on the orientation of the vehicle, the contribution of each key-point varies. In this paper, we present a novel dual-path adaptive attention model for vehicle re-identification (AAVER). The global appearance path captures macroscopic vehicle features, while the orientation-conditioned part appearance path learns to capture localized discriminative features by focusing attention on the most informative key-points. Through extensive experimentation, we show that the proposed AAVER method is able to accurately re-identify vehicles in unconstrained scenarios, yielding state-of-the-art results on the challenging VeRi-776 dataset. As a byproduct, the proposed system is also able to accurately predict vehicle key-points, with an improvement of more than 7% over the state of the art. The code for the key-point estimation model is available at https://github.com/Pirazh/Vehicle_Key_Point_Orientation_Estimation. This work has been accepted for oral presentation at ICCV 2019. |
Database: | OpenAIRE |
External link: |
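
The abstract describes a dual-path design: a global appearance path that produces a whole-vehicle feature, and an orientation-conditioned part path that attends over key-point features before the two are fused into a single embedding. Below is a minimal PyTorch sketch of that idea; it is not the authors' AAVER implementation, and the `DualPathReID` name, the toy backbone, the feature dimension, the number of key-points, and the number of orientation bins are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of a dual-path re-id embedding:
# a global appearance path plus an orientation-conditioned attention pool
# over per-key-point features. All names and dimensions are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualPathReID(nn.Module):
    def __init__(self, feat_dim=256, num_keypoints=20, num_orientations=8):
        super().__init__()
        # Global path: a tiny stand-in CNN pooled to one vector per image
        # (a real system would use a deep backbone such as a ResNet).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # The orientation embedding yields one attention logit per key-point,
        # so which key-points are emphasized depends on the orientation bin.
        self.orient_to_logits = nn.Embedding(num_orientations, num_keypoints)

    def forward(self, image, kp_feats, orientation):
        # image: (B, 3, H, W); kp_feats: (B, K, D) features pooled at
        # key-point locations; orientation: (B,) integer orientation bin.
        g = self.backbone(image).flatten(1)                   # (B, D) global
        attn = F.softmax(self.orient_to_logits(orientation), dim=-1)  # (B, K)
        # Attention-weighted sum over key-point features -> part feature.
        p = torch.bmm(attn.unsqueeze(1), kp_feats).squeeze(1)  # (B, D)
        return torch.cat([g, p], dim=1)                       # (B, 2D)


# Usage: probe and gallery embeddings would be compared by cosine distance.
model = DualPathReID()
img = torch.randn(4, 3, 128, 128)
kp = torch.randn(4, 20, 256)          # assumed per-key-point features
orient = torch.randint(0, 8, (4,))    # assumed orientation bins
emb = model(img, kp, orient)          # (4, 512)
```

Concatenating the two paths, rather than summing them, lets the matcher weight macroscopic appearance and localized part cues independently, which is consistent with the complementary roles the abstract assigns to the two paths.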