Author:
Ye, Shaokai; Filippova, Anastasiia; Lauer, Jessy; Schneider, Steffen; Vidal, Maxime; Qiu, Tian; Mathis, Alexander; Mathis, Mackenzie Weygandt
Subject:

Source:
Nature Communications; 6/21/2024, Vol. 15 Issue 1, p1-19, 19p
Abstract:
Quantification of behavior is critical in diverse applications, from neuroscience and veterinary medicine to animal conservation. A common key step in behavioral analysis is first extracting relevant keypoints on animals, known as pose estimation. However, reliable inference of poses currently requires domain knowledge and manual labeling effort to build supervised models. We present SuperAnimal, a method to develop unified foundation models that can be used on over 45 species, without additional manual labels. These models show excellent performance across six pose estimation benchmarks. We demonstrate how to fine-tune the models (if needed) on differently labeled data and provide tooling for unsupervised video adaptation to boost performance and decrease jitter across frames. If fine-tuned, SuperAnimal models are 10–100× more data efficient than prior transfer-learning-based approaches. We illustrate the utility of our models in behavioral classification and kinematic analysis. Collectively, we present a data-efficient solution for animal pose estimation. Quantifying animal behavior is crucial in various fields such as neuroscience and ecology, yet we lack data-efficient methods to perform behavioral quantification. Here, the authors provide new unified models across 45+ species without manual labeling, thus enhancing analysis in behavioral studies. [ABSTRACT FROM AUTHOR]
Database:
Complementary Index
External link:
