LVC20-202 RoboCar Vision Perception Unit: Using Ultra96 Board and Autoware Stack

Autonomous vehicles are becoming a part of everyday life as companies, universities and foundations invest heavily in their research and development. One such initiative that has gained wide acceptance in the automotive community is the Autoware Foundation. The project supports self-driving mobility and has been adopted by over 100 companies and used in more than 40 vehicles. 96Boards, the Autoware Foundation and its members (Xilinx, AutoCore) have teamed up to design an autonomous driving solution using a customized Ultra96 board and the Autoware stack. Using a distributed system design, we will demonstrate some of the key autonomous driving features, which also have the potential to be deployed as an ADAS module.

The talk will describe in detail the design and implementation of the vision perception unit of RoboCar, covering its hardware and software features and its performance capabilities. The vision perception unit performs the main perception tasks in autonomous driving, including object detection, traffic light detection and self-parking. The algorithms and models are open source and have been implemented using Xilinx FPGAs on the Ultra96 boards. The functional nodes in the autonomous vehicle are distributed in nature, talking to each other over a Data Distribution Service (DDS) layer as the messaging middleware, with a real-time kernel coordinating their actions. We also demonstrate the capability of the Ultra96 MPSoC to handle multiple channels of real-time LVDS camera input and its integration with Lidar/Radar point cloud fusion to feed the decision-making unit of the overall system.
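As a rough illustration of the distributed, message-passing design described above, the sketch below shows a toy ROS 2 perception node publishing detection results over the DDS middleware layer. The node name, topic and message type are placeholders chosen for illustration and are not the actual RoboCar interfaces.

```python
# Minimal sketch, assuming a ROS 2 environment with rclpy installed.
# Topic names and message types are illustrative stand-ins only.
import rclpy
from rclpy.node import Node
from std_msgs.msg import String


class PerceptionNode(Node):
    """Toy stand-in for a vision perception node publishing detections over DDS."""

    def __init__(self):
        super().__init__('vision_perception')
        # The DDS middleware delivers this topic to any subscribing node,
        # e.g. the decision-making unit elsewhere in the vehicle.
        self.pub = self.create_publisher(String, 'detected_objects', 10)
        self.timer = self.create_timer(0.1, self.publish_detection)

    def publish_detection(self):
        msg = String()
        msg.data = 'traffic_light: red'  # placeholder detection result
        self.pub.publish(msg)


def main():
    rclpy.init()
    node = PerceptionNode()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```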

The presentation will also cover Autoware localization algorithms such as NDT and AI object detection models such as YOLOv3. The details of image capture and algorithm processing in the vision perception pipeline will be presented, along with performance measurements for each phase of the pipeline. We will also demonstrate the ability to update the processing unit through over-the-air (OTA) updates. It is envisioned that the core AI engine will require regular updates with the latest trained weights; hence a built-in, platform-level mechanism supporting such a capability is essential for real-world deployment.
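To make the per-phase performance measurements concrete, here is a minimal, hypothetical sketch of timing each stage of a vision pipeline. The stage functions are stubs standing in for the real camera capture, preprocessing and FPGA-accelerated YOLOv3 detection; the function names and values are illustrative only.

```python
# Minimal sketch of per-stage latency measurement for a vision pipeline.
# All stage functions below are hypothetical placeholders, not the actual
# RoboCar implementation on the Ultra96 MPSoC.
import time


def timed(stage_name, fn, *args):
    """Run one pipeline stage and report its latency in milliseconds."""
    start = time.perf_counter()
    result = fn(*args)
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    print(f"{stage_name:<12s} {elapsed_ms:8.2f} ms")
    return result


def capture_frame():
    return b"\x00" * (1920 * 1080 * 2)   # placeholder raw camera frame


def preprocess(frame):
    return frame[: 416 * 416 * 3]        # placeholder resize/normalize step


def detect_objects(tensor):
    return [("traffic_light", 0.92)]     # placeholder detector output


if __name__ == "__main__":
    frame = timed("capture", capture_frame)
    tensor = timed("preprocess", preprocess, frame)
    detections = timed("inference", detect_objects, tensor)
    print("detections:", detections)
```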

Speakers

Ravikumar Chakaravarthy

Xilinx Inc
Ravikumar Chakaravarthy is an Executive at Xilinx Inc. He leads open source software development at Xilinx, including the Linux kernel, U-Boot, OpenAMP, Xen, FreeRTOS, V4L, GStreamer, QEMU, Yocto, TVM/VTA and Autoware. He is currently leading AI/ML engines and acceleration...


Wednesday September 23, 2020 • 8:45am - 9:10am (UTC)
Track 1 - IoT/Edge/Embedded
