Abstract: |
In recent years, the shift toward online education has allowed for greater distribution of information. Online opportunities for individuals to be exposed to horse judging resources could allow for more involvement in competitive horse judging (CHJ). Research has shown that CHJ allows participants to gain knowledge about the horse industry, as well as transferable skills such as public speaking and critical thinking. This mixed-methods study explores the need for, and development of, online resources for CHJ through surveys and pilot groups. The study was broken into 4 phases (P1–P4). In response to feedback from P1 (a learner analysis survey), 3 online interactive modules were created: What is a Horse Judging Contest? (M1), Getting Started with Oral Reasons: Competitive Horse Judging (M2), and The Basics of Conformation Evaluation (M3). P2 was an expert panel review composed of 3 equine extension specialists. P3 and P4 were pilot groups with 10 participants each from various locations across the US. P3 reviewers were experienced in horse judging (more than 5 years of experience), while P4 reviewers were inexperienced (less than 5 years of experience). Following each phase, edits were made to improve the modules based on reviewer feedback. Each reviewer in P3 and P4 participated in a Zoom interview with open-ended questions following the review of each module and completed a 5-question quantitative Qualtrics post-survey. Survey questions focused on clarity, value, effectiveness, and difficulty understanding each module. A linear mixed model was implemented using PROC GLIMMIX (SAS 9.4) to analyze survey data. Fixed effects included Group, Question, Subquestion, and their interactions, with a random participant effect accounting for participant variation. An α level of 0.05 was used for all statistical tests. Results showed a significant difference (F = 12.8, P < 0.0001) between P3 and P4 in the rating of clarity in M3.
Overall, P3 and P4 rated all modules as highly 'effective', with LS-means estimates ranging from 7.8 to 9.0/10, and highly 'valuable', with LS-means estimates ranging from 8.3 to 9.1/10. Themes emerged from data analysis of the Zoom interviews, including both positive and negative reviewer comments relating to navigation, activities/quizzes, content, format/design, and effectiveness/value. In conclusion, the findings show that through phases of review, new resources can be improved in the areas of clarity and delivery of content. Additionally, the data suggest a need for, and the value of, more online resources for horse judging. These concepts could be applied to other competitive judging programs.