Abstract: |
A recent study by researchers at the Universidad Politécnica de Madrid explores the use of Contrastive Language-Image Pre-Training (CLIP) for human posture classification, specifically yoga poses. The fine-tuned CLIP model achieved an accuracy of over 85% in classifying human postures, surpassing previous work on the same dataset. The researchers also found that training with as few as 20 images per pose can yield around 90% accuracy on a six-class dataset. The study suggests that CLIP can be used effectively for yoga pose classification, and potentially for human posture classification in general, with applications in automated posture evaluation systems. [Extracted from the article]