Abstract: |
Nutrition is vital for athletic performance, especially in ultra-endurance sports, which pose unique nutritional challenges. Despite its importance, gaps exist in athletes' nutrition knowledge, and emerging digital tools could potentially bridge this gap. The ULTRA-Q, a sports nutrition questionnaire adapted for ultra-endurance athletes, was used to assess the nutritional knowledge of ChatGPT-3.5, ChatGPT-4, Google Bard, and Microsoft Copilot. Their performance was compared with that of experienced ultra-endurance athletes, registered sports nutritionists and dietitians, and the general population. ChatGPT-4 demonstrated the highest accuracy (93%), followed by Microsoft Copilot (92%), Bard (84%), and ChatGPT-3.5 (83%). The averaged AI model achieved an overall score of 88%, with the highest score in Body Composition (94%) and the lowest in Nutrients (84%). In overall knowledge, the averaged AI model outperformed the general population by 31 percentage points and ultra-endurance athletes by 20 percentage points. The AI model exhibited superior knowledge in Fluids, outperforming registered dietitians by 49 percentage points, the general population by 42 percentage points, and ultra-endurance athletes by 32 percentage points. In Body Composition, the AI model surpassed the general population by 31 percentage points and ultra-endurance athletes by 24 percentage points. In Supplements, it outperformed registered dietitians by 58 percentage points and the general population by 55 percentage points. Finally, in Nutrients and in Recovery, it outperformed only the general population, by 24 and 29 percentage points, respectively. AI models show high proficiency in sports nutrition knowledge, potentially serving as valuable tools for nutritional education and advice. AI-generated insights could be integrated with expert human judgment for effective athlete performance optimization.