Kinect-driven Patient-specific Head, Skull, and Muscle Network Modelling for Facial Palsy Patients.
Author:
Nguyen TN; Université de technologie de Compiègne, Alliance Sorbonne Universités, CNRS, UMR 7338 Biomechanics and Bioengineering, Centre de recherche Royallieu, CS 60 319 Compiègne, France. Electronic address: tan-nhu.nguyen@utc.fr.
Dakpe S; Department of maxillo-facial surgery, CHU AMIENS-PICARDIE, Amiens, France; CHIMERE Team, University of Picardie Jules Verne, 80000 Amiens, France. Electronic address: dakpe.stephanie@chu-amiens.fr.
Ho Ba Tho MC; Université de technologie de Compiègne, Alliance Sorbonne Universités, CNRS, UMR 7338 Biomechanics and Bioengineering, Centre de recherche Royallieu, CS 60 319 Compiègne, France. Electronic address: hobatho@utc.fr.
Dao TT; Université de technologie de Compiègne, Alliance Sorbonne Universités, CNRS, UMR 7338 Biomechanics and Bioengineering, Centre de recherche Royallieu, CS 60 319 Compiègne, France; Univ. Lille, CNRS, Centrale Lille, UMR 9013 - LaMcube - Laboratoire de Mécanique, Multiphysique, Multiéchelle, F-59000 Lille, France. Electronic address: tien-tuan.dao@centralelille.fr.
Language: English
Source: Computer methods and programs in biomedicine [Comput Methods Programs Biomed] 2021 Mar; Vol. 200, p. 105846. Date of Electronic Publication: 2020 Nov 19.
DOI: 10.1016/j.cmpb.2020.105846
Abstract: Background and Objective: Facial palsy negatively affects both the professional and personal quality of life of affected patients. Classical facial rehabilitation strategies can restore facial mimics to normal, symmetrical movements and appearance. However, objective, quantitative, in-vivo facial texture and muscle activation biofeedback is lacking for personalizing rehabilitation programs and monitoring recovery progress. Consequently, this study proposed a novel patient-specific modelling method for generating a full patient-specific head model from a visual sensor and then computing facial texture and muscle activation in real time for further clinical decision making.

Methods: The modelling workflow includes (1) Kinect-to-head, (2) head-to-skull, and (3) muscle network definition & generation processes. In the Kinect-to-head process, subject-specific data acquired from a new user in a neutral mimic were used to generate his/her geometrical head model with facial texture. In particular, a template head model was deformed to optimally fit the high-definition facial points acquired by the Kinect sensor. Moreover, the facial texture was merged from his/her facial images taken from left, right, and center points of view. In the head-to-skull process, a generic skull model was deformed so that its shape statistically fitted his/her geometrical head model. In the muscle network definition & generation process, a muscle network was defined from the head and skull models for computing muscle strains during facial movements. Muscle insertion points and muscle attachment points were defined as vertex positions on the head model and the skull model, respectively, based on standard facial anatomy. Three healthy subjects and two facial palsy patients were selected for validating the proposed method. In neutral positions, magnetic resonance imaging (MRI)-based head and skull models were compared with Kinect-based head and skull models. In mimic positions, infrared depth-based head models in smiling and [u]-pronouncing mimics were compared with the corresponding animated Kinect-driven head models. The Hausdorff distance metric was used for these comparisons. Moreover, computed muscle lengths and strains in the tested facial mimics were validated against values reported in the literature.

Results: With the current hardware configuration, the patient-specific head model with skull and muscle network could be generated within 17.16 ± 0.37 s and animated in real time at a frame rate of 40 fps. In neutral positions, the best mean error was 1.91 mm for the head models and 3.21 mm for the skull models. On facial regions, the best mean errors were 1.53 mm and 2.82 mm for the head and skull models, respectively. On muscle insertion/attachment point regions, the best mean errors were 1.09 mm and 2.16 mm for the head and skull models, respectively. In mimic positions, the head-model errors on facial regions were 2.02 mm for the smiling mimic and 2.00 mm for the [u]-pronouncing mimic. All the above error values were computed in a one-time validation procedure. Facial muscles exhibited shortening during smiling and elongation during pronunciation of the sound [u]. Extracted muscle features (i.e., muscle length and strain) are in agreement with experimental and literature data.

Conclusions: This study proposed a novel modelling method for rapidly generating and animating a patient-specific biomechanical head model with facial texture and muscle activation biofeedback. The Kinect-driven muscle strains could be applied to further real-time muscle-oriented facial palsy grading and other facial analysis applications. (Copyright © 2020. Published by Elsevier B.V.)
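As a point of orientation for the Kinect-to-head process described in the abstract (a template head model deformed to fit the Kinect's high-definition facial points), the sketch below shows only the standard rigid (Kabsch) initialisation that such template fitting typically starts from. It is an illustrative assumption, not the authors' non-rigid deformation; the landmark arrays named in the usage comment are hypothetical stand-ins.

```python
import numpy as np

def rigid_align(template_pts, target_pts):
    """Least-squares rigid (rotation + translation) alignment of corresponding
    landmark sets (Kabsch algorithm); a typical initialisation step before a
    non-rigid template deformation. Inputs are (N, 3) arrays with matching rows."""
    t_mean = template_pts.mean(axis=0)
    g_mean = target_pts.mean(axis=0)
    H = (template_pts - t_mean).T @ (target_pts - g_mean)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return (template_pts - t_mean) @ R.T + g_mean

# Hypothetical usage: 'template_landmarks' from the generic head model,
# 'kinect_landmarks' from the sensor's high-definition face points (both in mm).
# aligned = rigid_align(template_landmarks, kinect_landmarks)
```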
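The muscle network step computes muscle strains from insertion points on the head mesh and attachment points on the skull mesh. Below is a minimal sketch of how segment lengths and engineering strains relative to the neutral mimic could be derived under that representation; the muscle names and vertex indices are hypothetical placeholders, not the paper's definitions.

```python
import numpy as np

# Hypothetical muscle definitions: (skull attachment vertex id, head insertion vertex id).
# The indices below are placeholders, not the vertex ids used in the paper.
MUSCLES = {
    "zygomaticus_major_L": (1201, 3456),
    "orbicularis_oris_U":  (1890, 4012),
}

def muscle_length(skull_vertices, head_vertices, attachment_id, insertion_id):
    """Euclidean length of a straight-line muscle segment (in mesh units, e.g. mm)."""
    a = np.asarray(skull_vertices[attachment_id], dtype=float)
    b = np.asarray(head_vertices[insertion_id], dtype=float)
    return float(np.linalg.norm(b - a))

def muscle_strain(current_length, neutral_length):
    """Engineering strain relative to the neutral-mimic length: negative values
    indicate shortening (e.g. smiling), positive values indicate elongation
    (e.g. pronouncing [u])."""
    return (current_length - neutral_length) / neutral_length

# Usage sketch: skull_vertices, head_neutral, head_mimic would be (N, 3) vertex
# arrays from the generated patient-specific models.
# for name, (att, ins) in MUSCLES.items():
#     L0 = muscle_length(skull_vertices, head_neutral, att, ins)
#     L  = muscle_length(skull_vertices, head_mimic, att, ins)
#     print(name, muscle_strain(L, L0))
```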
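For the validation, head and skull surfaces were compared with the Hausdorff distance metric. The sketch below computes a symmetric Hausdorff (maximum) distance and a symmetric mean surface error between two vertex clouds using nearest-neighbour queries; this is one common way to implement such a comparison, not necessarily the exact procedure used in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def directed_distances(points_a, points_b):
    """For every point of A, the distance to its nearest neighbour in B."""
    tree = cKDTree(points_b)
    distances, _ = tree.query(points_a)
    return distances

def hausdorff_and_mean_error(points_a, points_b):
    """Symmetric Hausdorff (max) distance and symmetric mean surface error
    between two vertex clouds, e.g. an MRI-based and a Kinect-based head mesh."""
    d_ab = directed_distances(points_a, points_b)
    d_ba = directed_distances(points_b, points_a)
    hausdorff = max(d_ab.max(), d_ba.max())
    mean_error = (d_ab.mean() + d_ba.mean()) / 2.0
    return hausdorff, mean_error

# Usage sketch with random stand-in data (real inputs would be mesh vertices in mm):
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.normal(size=(5000, 3))
    b = a + rng.normal(scale=0.01, size=(5000, 3))
    print(hausdorff_and_mean_error(a, b))
```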
Database: MEDLINE