Description: |
The concept of image similarity is ambiguous: images can be similar in one context and not in another. This ambiguity motivates the creation of metrics for specific contexts. This work explores the ability of deep perceptual similarity (DPS) metrics to adapt to a given context. DPS metrics use the deep features of neural networks to compare images. These metrics have been successful on datasets that leverage average human perception in limited settings, but the question remains whether they can be adapted to specific similarity contexts. No single metric can suit all similarity contexts, and previous rule-based metrics are labor-intensive to rewrite for new contexts. DPS metrics, on the other hand, use neural networks that can be retrained for each context. However, retraining networks takes resources and may degrade performance on previous tasks. This work examines the adaptability of DPS metrics by training ImageNet-pretrained CNNs to measure similarity according to given contexts. Contexts are created by randomly ranking six image distortions; distortions later in the ranking are considered more disruptive to similarity when applied to an image in that context. This also gives insight into whether the pretrained features capture different similarity contexts. The adapted metrics are evaluated on a perceptual similarity dataset to assess whether adapting to a ranking affects their prior performance. The findings show that DPS metrics can be adapted with high performance. While the adapted metrics have difficulties with the same contexts as the baselines, performance is improved in 99% of cases. Finally, it is shown that the adaptation is not significantly detrimental to prior performance on perceptual similarity. The implementation of this work is available online: https://github.com/LTU-Machine-Learning/Analysis-of-Deep-Perceptual-Loss-Networks
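
For readers unfamiliar with DPS metrics, the following is a minimal sketch of how such a metric can compare two images via the deep features of an ImageNet-pretrained CNN. The AlexNet backbone, the preprocessing, and the LPIPS-style normalized feature distance are illustrative assumptions, not the paper's method; the actual implementation is in the repository linked above.

```python
# Minimal DPS-style metric sketch (assumptions: AlexNet backbone,
# standard ImageNet preprocessing, LPIPS-style feature distance).
import torch
import torchvision.models as models
import torchvision.transforms as T

# ImageNet-pretrained CNN used as a frozen feature extractor.
backbone = models.alexnet(
    weights=models.AlexNet_Weights.IMAGENET1K_V1
).features.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def dps_distance(img_a, img_b):
    """Distance between deep features of two PIL images; lower = more similar."""
    with torch.no_grad():
        fa = backbone(preprocess(img_a).unsqueeze(0))
        fb = backbone(preprocess(img_b).unsqueeze(0))
    # Unit-normalize each spatial position across channels, then take the
    # mean squared difference of the normalized feature maps.
    fa = fa / (fa.norm(dim=1, keepdim=True) + 1e-10)
    fb = fb / (fb.norm(dim=1, keepdim=True) + 1e-10)
    return ((fa - fb) ** 2).mean().item()

# Hypothetical usage:
#   from PIL import Image
#   d = dps_distance(Image.open("a.png").convert("RGB"),
#                    Image.open("b.png").convert("RGB"))
```

In the setting described above, adapting such a metric to a context would amount to retraining the backbone so that distortions ranked as more disruptive in that context move images further apart in feature space.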