Description: |
This dissertation presents an approach to whole-body motion optimization for humanoid vehicle-handling tasks. To achieve this goal, the author built a trajectory-optimization framework based on a reinforcement-learning agent. The framework plans a guideline input trajectory and optimizes it with respect to various kinematic and dynamic constraints. A path-planner module designs an initial, suboptimal motion; reinforcement learning then optimizes the trajectories with respect to time-varying constraints at the body and joint levels. The body-level cost functions evaluate the robot's static balancing ability, collisions, and the validity of the end-effector movement, where quasi-static balance and collisions are computed from kinematic models of the robot and the vehicle. At the joint level, costs such as joint-angle and joint-velocity limits are evaluated, and energy-related costs, such as adherence to torque limits, are also checked. Enforcing these physical limits of each joint ensures both spatial and temporal smoothness of the generated trajectories. While the overall structure of the framework is kept fixed, the cost functions and the learning algorithm are selected adaptively according to the requirements of the given task. After the optimization process, the presented approach is tested experimentally in simulations using a virtual robot model. A verification-and-validation process then confirms the efficacy of the optimized trajectories on the robot's real physical platform. Different types of robots and vehicles are used in both the simulation tests and the hardware verification to demonstrate the framework's potential for extension.
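As an illustration of how body-level and joint-level cost terms of this kind might be combined into a single scalar penalty for a learning agent, the following Python sketch evaluates hypothetical balance, collision-clearance, joint-limit, velocity-limit, and torque-limit penalties over a waypoint trajectory. All function names, limit values, and the support-polygon simplification are illustrative assumptions, not the dissertation's actual implementation.

```python
import numpy as np

# Hypothetical joint limits for an illustrative 3-DoF kinematic chain
# (rad, rad/s, Nm); the real framework would take these from the robot model.
Q_MIN, Q_MAX = np.array([-1.5, -2.0, -2.5]), np.array([1.5, 2.0, 2.5])
QD_MAX = np.array([2.0, 2.0, 3.0])
TAU_MAX = np.array([40.0, 30.0, 20.0])


def joint_level_cost(q, qd, tau):
    """Penalize violations of joint-angle, velocity, and torque limits."""
    angle_pen = np.sum(np.maximum(0.0, q - Q_MAX) + np.maximum(0.0, Q_MIN - q))
    vel_pen = np.sum(np.maximum(0.0, np.abs(qd) - QD_MAX))
    torque_pen = np.sum(np.maximum(0.0, np.abs(tau) - TAU_MAX))
    return angle_pen + vel_pen + torque_pen


def body_level_cost(com_xy, support_center_xy, clearance):
    """Penalize loss of quasi-static balance and proximity to collision.

    com_xy: projected center of mass; support_center_xy: center of the
    support region; clearance: closest distance to the vehicle geometry.
    All three are placeholders for quantities a real kinematic model provides.
    """
    balance_pen = np.linalg.norm(com_xy - support_center_xy)
    collision_pen = max(0.0, 0.05 - clearance)  # assumed 5 cm safety margin
    return balance_pen + 10.0 * collision_pen


def trajectory_cost(trajectory):
    """Sum body- and joint-level costs over all waypoints of a trajectory."""
    total = 0.0
    for wp in trajectory:
        total += body_level_cost(wp["com_xy"], wp["support_xy"], wp["clearance"])
        total += joint_level_cost(wp["q"], wp["qd"], wp["tau"])
    return total


# Toy usage with made-up waypoint values: the second waypoint violates a
# joint-angle limit, a torque limit, and the collision margin, so it dominates.
waypoints = [
    {"com_xy": np.array([0.02, 0.0]), "support_xy": np.zeros(2), "clearance": 0.10,
     "q": np.zeros(3), "qd": np.zeros(3), "tau": np.array([5.0, 5.0, 5.0])},
    {"com_xy": np.array([0.08, 0.01]), "support_xy": np.zeros(2), "clearance": 0.03,
     "q": np.array([1.6, 0.0, 0.0]), "qd": np.zeros(3), "tau": np.array([45.0, 5.0, 5.0])},
]
print(trajectory_cost(waypoints))
```

In such a setup the learning agent would receive the negative of this composite cost as its reward, so lowering the trajectory cost directly corresponds to satisfying the body-level and joint-level constraints described above.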