Popis: |
Many computational models of visual attention have been proposed. Traditional bottom-up models, such as saliency models, fail to replicate human gaze patterns, while deep gaze-prediction models lack biological plausibility because they rely on supervised learning. Vision Transformers (ViTs), with their self-attention mechanisms, offer a new approach, but they often produce dispersed attention patterns when trained with supervised learning. This study explores whether self-supervised DINO (self-DIstillation with NO labels) training enables ViTs to develop attention mechanisms resembling human visual attention. Using video stimuli to capture the dynamics of human gaze, we found that DINO-trained ViTs closely mimic human attention patterns, whereas ViTs trained with supervised learning deviate from them significantly. An analysis of the self-attention heads revealed three distinct clusters: one focusing on foreground objects, one on entire objects, and one on the background. DINO-trained ViTs thus offer insight into how human overt attention and figure-ground separation develop in visual perception.
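
As a rough illustration of the kind of head-level analysis the abstract describes (not the authors' actual pipeline), the sketch below extracts per-head self-attention maps from the publicly released DINO ViT-S/8 checkpoint and groups the heads into three clusters. The input file name `frame.jpg` and the k-means grouping with k=3 are illustrative assumptions, chosen only to echo the foreground / whole-object / background clusters reported above.

```python
# Minimal sketch, assuming the public facebookresearch/dino torch.hub
# checkpoint; the k=3 clustering is illustrative, not the paper's method.
import torch
from PIL import Image
from torchvision import transforms
from sklearn.cluster import KMeans

model = torch.hub.load('facebookresearch/dino:main', 'dino_vits8')
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
])
# 'frame.jpg' is a placeholder for one video frame used as a stimulus.
img = preprocess(Image.open('frame.jpg').convert('RGB')).unsqueeze(0)

with torch.no_grad():
    # Shape (1, num_heads, tokens, tokens); this helper is provided by
    # the DINO repository's VisionTransformer implementation.
    attn = model.get_last_selfattention(img)

num_heads = attn.shape[1]
# CLS-token attention over the 28x28 patch grid, one map per head.
cls_attn = attn[0, :, 0, 1:].reshape(num_heads, -1).numpy()

# Cluster heads by the similarity of their attention maps.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(cls_attn)
print(dict(zip(range(num_heads), labels)))
```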