Vision Control Unit in Fully Self Driving Vehicles using Xilinx MPSoC and Opensource Stack
Authors: | Ravikumar V. Chakaravarthy, Hyun W. Kwon, Hua Jiang |
Year of publication: | 2021 |
Subject: |
Computer science; Embedded system; MPSoC; Video capture; Pipeline (computing); Rendering (computer graphics); Object detection; Middleware (distributed applications); Scalability; Software |
Source: | ASP-DAC |
Description: | Fully self-driving (FSD) vehicles have become increasingly popular over the last few years, and companies are investing significantly in their research and development. In recent years, FSD technology innovators such as Tesla and Google have been working on proprietary autonomous driving stacks and have successfully brought their vehicles to the road. On the other end, organizations like the Autoware Foundation and Baidu are fueling the growth of self-driving mobility using open source stacks. These organizations firmly believe in enabling autonomous driving technology for everyone, and support developing software stacks through the open source community that are SoC-vendor agnostic. In this proposed solution we describe a vision control unit for a fully self-driving vehicle developed on the Xilinx MPSoC platform using open source software components. The vision control unit of an FSD vehicle is responsible for camera video capture, image processing and rendering, AI algorithm processing, and data and metadata transfer to the next stage of the FSD pipeline. In this proposed solution we have used many open source stacks and frameworks for video and AI processing. The processing of the video pipeline and algorithms takes full advantage of pipelining and parallelism across all the heterogeneous cores of the Xilinx MPSoC. In addition, we have developed an extensible, scalable, adaptable and configurable AI backend framework, XTA, for acceleration purposes, derived from the popular open source AI backend framework TVM-VTA. XTA uses all the MPSoC cores for its computation in a parallel and pipelined fashion. XTA also adapts to the compute and memory parameters of the system and can scale to achieve optimal performance for any given AI problem.
The FSD system design is based on a distributed system architecture and uses open source components such as Autoware for autonomous driving algorithms, ROS and the Data Distribution Service (DDS) as messaging middleware between the functional nodes, and a real-time kernel to coordinate the actions. The details of image capture, rendering and AI processing in the vision perception pipeline will be presented along with performance measurements of the vision pipeline. In this proposed solution we will demonstrate some of the key use cases of the vision perception unit, such as surround vision and object detection. In addition, we will show the capability of Xilinx MPSoC technology to handle multiple real-time camera channels and the integration with Lidar/Radar point cloud data feeding the decision-making unit of the overall system. The system is also designed with the capability to update the vision control unit through over-the-air (OTA) updates. It is also envisioned that the core AI engine will require regular updates with the latest training values; hence a built-in platform-level mechanism supporting such capability is essential for real-world deployment. |
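The abstract describes the vision pipeline as running its stages (capture, image processing, AI inference) in a pipelined, parallel fashion across the heterogeneous MPSoC cores. The shape of such a pipeline can be sketched in plain Python using one worker thread per stage connected by bounded queues, so that stages overlap in time the way hardware stages would. All names here (the stage functions, the frame dictionary layout) are illustrative assumptions, not APIs from the paper:

```python
# Hypothetical sketch of a staged vision pipeline:
# capture -> preprocess -> inference, one worker per stage,
# bounded FIFO queues between stages so the stages run concurrently.
import threading
import queue

SENTINEL = None  # marks end-of-stream

def stage(fn, inbox, outbox):
    """Apply fn to each item from inbox and forward the result."""
    while True:
        item = inbox.get()
        if item is SENTINEL:
            outbox.put(SENTINEL)
            return
        outbox.put(fn(item))

def capture(n_frames, outbox):
    """Stand-in for the camera capture stage: emit dummy frames."""
    for i in range(n_frames):
        outbox.put({"frame_id": i, "pixels": [i] * 4})
    outbox.put(SENTINEL)

def preprocess(frame):
    """Stand-in for resize/normalize on the image-processing stage."""
    frame["pixels"] = [p * 2 for p in frame["pixels"]]
    return frame

def infer(frame):
    """Stand-in for the AI backend: attach dummy detection metadata."""
    frame["detections"] = [{"label": "car", "score": 0.9}]
    return frame

def run_pipeline(n_frames=8):
    # Small queue bounds provide backpressure between stages.
    q1, q2, q3 = queue.Queue(2), queue.Queue(2), queue.Queue(2)
    workers = [
        threading.Thread(target=capture, args=(n_frames, q1)),
        threading.Thread(target=stage, args=(preprocess, q1, q2)),
        threading.Thread(target=stage, args=(infer, q2, q3)),
    ]
    for w in workers:
        w.start()
    results = []
    while True:
        item = q3.get()
        if item is SENTINEL:
            break
        results.append(item)
    for w in workers:
        w.join()
    return results
```

Because each stage has a single worker and the queues are FIFO, frame order is preserved end to end; on the real platform the stages would map to the PL, the R5 cores, and the A53 cores rather than to Python threads.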
Database: | OpenAIRE |
External link: |