A computational model of motion sickness dynamics during passive self-motion in the dark.
Author: Allred AR (Smead Department of Aerospace Engineering Sciences, University of Colorado Boulder, Boulder, CO, USA; aaron.allred@colorado.edu); Clark TK (Smead Department of Aerospace Engineering Sciences, University of Colorado Boulder, Boulder, CO, USA)
Language: English
Source: Experimental Brain Research [Exp Brain Res] 2023 Sep; Vol. 241 (9), pp. 2311-2332. Date of Electronic Publication: 2023 Aug 17.
DOI: 10.1007/s00221-023-06684-9
Abstract: Predicting the time course of motion sickness symptoms enables the evaluation of provocative stimuli and the development of countermeasures for reducing symptom severity. In pursuit of this goal, we present an observer-driven model of motion sickness for passive motions in the dark. Constructed in two stages, this model predicts motion sickness symptoms by bridging sensory conflict (i.e., differences between actual and expected sensory signals) arising from the observer model of spatial orientation perception (stage 1) to Oman's model of motion sickness symptom dynamics (stage 2; presented in 1982 and 1990) through a proposed "normalized innovation squared" statistic. The model outputs the expected temporal development of human motion sickness symptom magnitudes (mapped to the Misery Scale) at a population level, due to arbitrary, 6-degree-of-freedom, self-motion stimuli. We trained model parameters using individual subject responses collected during fore-aft translations and off-vertical-axis rotation motions. Improving on prior efforts, we used only datasets whose experimental conditions were congruent with the perceptual stage (i.e., passive motions without visual cues) to inform the model. We assessed model performance by predicting an unseen validation dataset, producing a Q² value of 0.86. Demonstrating this model's broad applicability, we formulate predictions for a host of stimuli, including translations, earth-vertical rotations, and altered gravity, and we provide our implementation for other users. Finally, to guide future research efforts, we suggest how to rigorously advance this model (e.g., incorporating visual cues, active motion, responses to motion of different frequency, etc.). (© 2023. The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature.)
Database: MEDLINE
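The abstract's "normalized innovation squared" statistic comes from Kalman-filter estimation theory, where it is the squared Mahalanobis norm of the innovation (the measurement residual) under its predicted covariance. The paper's exact formulation is not given in this record; a minimal sketch of the conventional definition, with all names hypothetical, might look like:

```python
import numpy as np

def normalized_innovation_squared(innovation, innovation_cov):
    """Conventional NIS statistic: nu^T S^{-1} nu, where nu is the
    innovation (residual) vector and S its predicted covariance.
    Names and signature are illustrative, not the authors' API."""
    nu = np.atleast_1d(np.asarray(innovation, dtype=float))
    S = np.atleast_2d(np.asarray(innovation_cov, dtype=float))
    # Solve S x = nu rather than inverting S explicitly (better conditioned).
    return float(nu @ np.linalg.solve(S, nu))
```

Under this definition, large conflicts relative to the expected sensory uncertainty yield large NIS values, which the model then feeds into the symptom-dynamics stage.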