Automated Driving - a Recipe for Mixing Engineering, AI, Math and Human Behaviour



written by Alex Sima (Head of Presales), in the September 2022 issue of Today Software Magazine.


A Word of Introduction

Rarely are we presented with the chance to talk about a subject as exciting as Automated Driving and its cross-domain and cross-industry fields. But one thing is clear: to summarize this topic's characteristics and complexity, a simple analogy works best: "If there is anything grounded but similar to rocket science, that is Autonomous Driving and the engineering behind it".

It's here and now: from a sci-fi topic, to working implementations in military applications beginning in the early 1900s, to DARPA's autonomous driving challenges in the 2000s, and finally to the Tesla Full Self-Driving package actively marketed in 2022, we can see that this topic is not a marketing gimmick.

Given how transformational this industry is, the fact that most of the Top 500 tech companies are actively investing in it, and the particular realities we will explore below, it becomes clear how much this technology already surrounds us.

Facets of automated driving

The most exciting parts when analysing Autonomous Driving Systems fall into the following areas:

  1. The hardware engineering behind it (SoC chips, sensors, MCUs) and the software stacks around Sensor Fusion, including algorithms (SLAM, Path Planning) and the frameworks, APIs, and SDKs.

  2. Sensing modules, including stereo-cameras, LIDAR (Light Detection and Ranging), refined IMU (inertial measurement unit) and assisted GPS.

  3. The emerging engineering disciplines (AI training platforms, onboard and central-hosted ML, "Smart Data" Lakes) combined with Extended Reality implementations (AR, XR).

  4. Topographic maps, HD Maps, Live Traffic Data Hubs, and Geofencing Maps.

  5. The ADS architecture: drive-by-wire, inter-vehicular communication for Cooperative ACC (CACC), re-programmable modules with OTA update systems, Modularity Homogenization, and finally, the mathematical safety model.

But as promised, we will remember to briefly mention the concerns of Legal and Safety Regulations and the hurdle of standardising Driving Automation Systems: Advanced Driver Assistance Systems (ADAS) and Automated Driving Systems (ADS).

Before we proceed in our journey, it is essential to appreciate that behind implementations of ADS lie more than a dozen science areas and a ton of documentation and research, which makes it impossible to cover all the facets mentioned above comprehensively in a single article (it could quickly become that "1000-page tome" and still only scratch the surface).

This suggests our best chance to build awareness of the topic is to focus on a few particular items related to ADAS and ADS.

What is the Name of the Business?

A point worth stressing: the business of Autonomous Driving spans well beyond the familiar "consumer" applications mainly related to self-driving vehicles. A comprehensive set of working applications for ADS sits at the core of businesses like the Commercial Airline and Maritime industries, Industrial Production Lines, and Manufacturing Execution Systems.

How much cross-industry effort is therefore involved?

Just by reading the list of ADS-impacted industries above, we can understand the depth of the cross-domain engineering and science involved.

Autonomous Systems (driving and beyond) are a topic of end-to-end implementations, and this is key in this business: the units that perform autonomous operations carry complex onboard chips, sensors, and software, communicate with a centrally hosted system, rely heavily on low-latency wireless communication, and harness multiple pieces of infrastructure (road artefacts, cartography items).

With a bit of courage, we can already admit we are looking at an emergent hyper-end-to-end data-driven journey.
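
To make this end-to-end picture concrete, here is a minimal Python sketch of the kind of telemetry record such a unit might stream to its centrally hosted system. Every field name and value is a hypothetical illustration, not any vendor's actual schema or protocol.

```python
from dataclasses import dataclass, asdict
import json
import time

@dataclass
class VehicleTelemetry:
    """Hypothetical telemetry record an autonomous unit might stream
    to a central system; all field names are illustrative."""
    vehicle_id: str
    timestamp: float          # Unix epoch seconds
    lat: float                # fused GPS/IMU position estimate
    lon: float
    speed_mps: float          # metres per second
    heading_deg: float        # 0-360, clockwise from north
    sensor_health: dict       # e.g. {"lidar": "ok", "radar": "ok"}

def to_wire(msg: VehicleTelemetry) -> bytes:
    """Serialise for a low-latency wireless link (JSON for clarity;
    a real stack would use a compact binary encoding)."""
    return json.dumps(asdict(msg)).encode("utf-8")

msg = VehicleTelemetry(
    vehicle_id="unit-042",
    timestamp=time.time(),
    lat=45.7489, lon=21.2087,
    speed_mps=13.9, heading_deg=92.5,
    sensor_health={"lidar": "ok", "radar": "ok", "camera": "ok"},
)
print(to_wire(msg))
```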

Focusing on consumer applications

The well-known entry point into the topic of ADS is understanding what Automation versus Autonomy means. It is, therefore, useful to clarify this in the shortest way possible.

Vehicle autonomy is categorised into six levels, as defined by the SAE J3016 standard (most recently updated as J3016_202104). The SAE levels can be roughly understood as Level 0 - no automation; Level 1 - hands-on/shared control; Level 2 - hands-off; Level 3 - eyes off; Level 4 - mind off; and Level 5 - steering wheel optional.
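
These levels map naturally onto a simple enumeration. The sketch below is merely a convenience encoding of the informal glosses above, not the standard's normative wording:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving-automation levels, using the informal
    glosses above rather than the standard's legal wording."""
    NO_AUTOMATION = 0             # driver does everything
    HANDS_ON = 1                  # shared control of one task
    HANDS_OFF = 2                 # combined control, driver monitors
    EYES_OFF = 3                  # system drives in defined conditions
    MIND_OFF = 4                  # no driver attention in its domain
    STEERING_WHEEL_OPTIONAL = 5   # full automation everywhere

def driver_must_supervise(level: SAELevel) -> bool:
    """At Levels 0-2 the human remains the fallback at all times."""
    return level <= SAELevel.HANDS_OFF

print(driver_must_supervise(SAELevel.HANDS_OFF))   # True
print(driver_must_supervise(SAELevel.EYES_OFF))    # False
```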

How Does It Work?


Perception – what the vehicle can see

For every movement, the vehicle relies on LiDAR, radar, cameras, and position estimators that constantly scan 360 degrees around it.

The importance of a LiDAR system comes from its accuracy (up to a range of 100 m) and its 360-degree rotational coverage. With more than two million readings per second, a LiDAR system supplies high-resolution details about the environment around the car.

Radar, ultrasonic sensors, and stereo cameras are essential for building the perception of a car's immediate surroundings. Fusing this sensor data with GPS data allows a car's location to be pinned down to within 10 cm.
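
Production localisation stacks fuse many sensors through elaborate multi-dimensional estimators, but the core idea can be shown with a one-dimensional Kalman filter that blends dead-reckoned odometry with noisy GPS fixes. All noise values below are made-up illustrations:

```python
import random

# Minimal 1-D Kalman filter showing the *idea* behind fusing noisy
# GPS fixes with wheel-odometry prediction; real localisation uses
# multi-dimensional, multi-sensor estimators.
x, p = 0.0, 1.0        # position estimate (m) and its variance
Q, R = 0.01, 0.25      # odometry (process) and GPS (measurement) noise

true_pos = 0.0
for step in range(50):
    true_pos += 1.0                          # car moves 1 m per step
    # Predict: dead-reckon 1 m forward from odometry
    x += 1.0
    p += Q
    # Update: blend in a noisy GPS measurement
    z = true_pos + random.gauss(0.0, R ** 0.5)
    k = p / (p + R)                          # Kalman gain
    x += k * (z - x)
    p *= (1.0 - k)

print(f"true={true_pos:.2f} m, fused estimate={x:.2f} m, var={p:.4f}")
```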

With SLAM (Simultaneous Localisation and Mapping), vehicles map their surroundings in real time, orienting themselves based on sensor input. A few challenges for SLAM tech (put into perspective by the back-of-envelope sketch after this list) are:

  1. Data collection: One hour of drive time approximates one terabyte of data.

  2. Data processing: Even with high computing power, interpreting one terabyte of gathered data takes about two days to produce usable navigation data.

  3. Latency time: Latency must be lower than 10 ms for real-time execution, which demands high-performance computing onboard the vehicle.
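
The quick calculation below puts these figures in perspective; the numbers come from the list above, while the arithmetic itself is merely illustrative:

```python
# Back-of-envelope arithmetic for the SLAM figures above.
TB = 10 ** 12

data_per_hour = 1 * TB                    # ~1 TB per hour of driving
rate_mb_s = data_per_hour / 3600 / 10**6
print(f"sustained sensor stream: ~{rate_mb_s:.0f} MB/s")

processing_days = 2                       # days to turn 1 TB into nav data
lag_factor = processing_days * 24         # hours of compute per hour driven
print(f"offline processing lags real time by ~{lag_factor}x")

latency_budget_ms = 10                    # hard real-time bound onboard
v = 130 / 3.6                             # highway speed, m/s
print(f"distance covered in {latency_budget_ms} ms at 130 km/h: "
      f"{v * latency_budget_ms / 1000:.2f} m")
```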

Awareness – how the vehicle orients itself

As point-cloud data is constantly fed to Machine Learning (ML) algorithms, vehicles begin to make sense of tons of data, discerning patterns on which the adaptive systems can base decisions.

Routing (map, current location, route)

Online mapping occurs onboard the vehicle. Typical examples are SLAM systems for simultaneous localization and mapping.

Recently, semantic SLAM, which focuses on the geometry and semantic meaning of surface markings on the road, has been explored as a lightweight solution to mapping. In addition, monocular semantic online mapping (monoSOM) is a trending topic in which a neural network fuses temporal sequences of monocular images from multiple cameras into a semantic bird's-eye-view map.
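
monoSOM itself is a learned, neural fusion of multi-camera image sequences; the toy sketch below shows only the classical geometric intuition behind any bird's-eye-view map: back-projecting ground-plane pixels from a forward camera into a top-down grid, assuming a flat road and a known camera height. All camera parameters are hypothetical:

```python
import numpy as np

# Toy inverse-perspective mapping: project ground-plane pixels from a
# forward-facing camera into a bird's-eye-view (BEV) grid.
H, W = 480, 640          # image size (pixels)
f = 500.0                # focal length (pixels)
cam_h = 1.5              # camera height above the road (m)
cx, cy = W / 2, H / 2    # principal point

bev = np.zeros((100, 100), dtype=np.uint8)   # 50 m x 50 m grid, 0.5 m cells

# Pretend the image holds semantic labels (1 = lane marking)
sem = np.zeros((H, W), dtype=np.uint8)
sem[300:470, 310:330] = 1                    # a lane-marking stripe ahead

vs, us = np.nonzero(sem)                     # labelled pixel coordinates
below_horizon = vs > cy                      # only pixels below the horizon
vs, us = vs[below_horizon], us[below_horizon]

# Flat-ground back-projection: depth Z from the row, lateral X from the column
Z = f * cam_h / (vs - cy)                    # metres ahead of the car
X = (us - cx) * Z / f                        # metres left/right of centre

gz = (Z / 0.5).astype(int)                   # grid row (forward)
gx = (X / 0.5 + 50).astype(int)              # grid column (lateral, centred)
ok = (gz >= 0) & (gz < 100) & (gx >= 0) & (gx < 100)
bev[gz[ok], gx[ok]] = 1

print("BEV cells marked as lane:", int(bev.sum()))
```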

Acting – what decisions the vehicle makes

The sensor data collected by thousands of cars allows the establishment of dynamic HD maps with high accuracy and real-time information.

Google's Waymo project is a well-known example; typical "acts" of such an autonomous system include:

  1. Hands-free steering centres the car in its lane without the driver's hands on the wheel. The driver is still required to pay attention.

  2. Adaptive cruise control (ACC) down to a stop automatically maintains a selectable distance between the driver's car and the car in front (a simple distance-keeping controller is sketched after this list).

  3. Lane-centering steering intervenes when the driver crosses lane markings, automatically nudging the vehicle back toward the centre of the lane.

And these are just level 2 assistance aspects. Fully autonomous integrations are already deployed in several other commercial applications.
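
As referenced above, the distance-keeping core of ACC can be sketched as a proportional controller on the gap and speed errors. Real ACC systems layer radar tracking, trajectory planning, and safety monitors on top of this; every gain and limit below is an illustrative assumption:

```python
# Minimal sketch of ACC distance-keeping: a proportional controller
# on gap and speed errors, with hypothetical gains and comfort limits.
def acc_accel(gap_m: float, ego_speed: float, lead_speed: float,
              set_speed: float = 20.0, time_gap_s: float = 2.0,
              kp: float = 0.3, kv: float = 0.5) -> float:
    """Return a commanded acceleration in m/s^2."""
    cruise = 0.5 * (set_speed - ego_speed)        # track the set speed
    desired_gap = 5.0 + time_gap_s * ego_speed    # standstill gap + time gap
    follow = kp * (gap_m - desired_gap) + kv * (lead_speed - ego_speed)
    # Whichever demands less acceleration wins, clamped to comfort limits.
    return max(-3.0, min(1.5, min(cruise, follow)))

# Simulate approaching a stopped lead vehicle, braking down to a stop.
gap, ego, lead, dt = 120.0, 20.0, 0.0, 0.1
for _ in range(600):                              # 60 s of simulation
    a = acc_accel(gap, ego, lead)
    ego = max(0.0, ego + a * dt)
    gap += (lead - ego) * dt

print(f"final speed: {ego:.2f} m/s, final gap: {gap:.1f} m")
```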

The Subject of the Paper (in a nutshell)


Chips (SoC and Microcontrollers)

ADAS/autonomous driving chips have seen a wave of upgrades, and many chip makers have launched or plan to unveil high-computing-power chips. In January 2022, Mobileye introduced the EyeQ® Ultra™, the company's most advanced, highest-performing system-on-chip (SoC) purpose-built for autonomous driving. As unveiled during CES 2022, EyeQ Ultra maximizes effectiveness and efficiency at only 176 TOPS, built on 5-nanometer process technology.

SoC chips are mainly heterogeneous designs and must balance different computing units such as the GPU, CPU, acceleration cores, NPU, DPU, and ISP against chip bandwidth, peripherals, memory, energy-efficiency ratio, and cost. At the same time, the development toolchain of an SoC is very important: only by forming a developer ecosystem can a company build long-term, sustainable competitiveness.

In chip design, the configuration of heterogeneous IP is crucial, and autonomous driving SoC chip vendors are constantly strengthening the research and development of core IP to maintain their decisive competitive edges.

For example, NVIDIA upgraded its existing GPU-based product line to a three-chip (GPU+CPU+DPU) strategy:

  1. DPU: NVIDIA completed its acquisition of Mellanox Technologies, Ltd., an Israeli chip company, for a transaction value of $7 billion and launched the BlueField®-3 data processing unit (DPU). A DPU is a programmable component with the versatility and programmability of a central processing unit (CPU), dedicated to efficiently handling network data packets, storage requests, and analytics requests;

  2. NVIDIA launched the Grace™ CPU, an Arm-based processor that will deliver 10x the performance of today's fastest servers on the most complex AI and high-performance computing workloads. NVIDIA's next-generation SoC, Atlan, is based on the Grace CPU and the Ampere Next GPU.

Other Commercial Chip Applications:

  1. Tesla has launched the Dojo supercomputing training platform, using Tesla's self-developed 7nm AI training chip, the D1, and relying on a vast customer base to collect autonomous driving data for training its deep learning systems. Tesla Autopilot uses 2D images + annotations for training and algorithm iteration; the Dojo platform allows Autopilot to train on 3D images + time stamps (the "4D" Autopilot system). The 4D Autopilot system will be predictive, marking the 3D movement trajectories of road objects over time to enhance the reliability of autonomous driving functions.

  2. NVIDIA has announced NVIDIA Omniverse Replicator, an engine for generating synthetic data with ground truth for training AI networks. NVIDIA also has the most powerful training processor, the NVIDIA A100.

  3. The map data of Mobileye's REM now covers the globe. Intel acquired Moovit to enhance REM's strength and data differentiation, extend traditional HD map data from the roadside to the user side, start from the perception redundancy of assisted autonomous driving, and improve the efficiency of path planning. Intel launched its self-developed flagship AI.

In Conclusion

The implementation of Autonomous Driving Systems involves a revolution in:

  1. Manufacturing (parts and end-user units);

  2. Software (platforms - both onboard and hosted, algorithms, SDKs), Architecture, emergent technologies (AI, XR);

  3. Infrastructure Transformation;

  4. Changing Oil Demand;

  5. Safety and Legal Dividend.

According to Consumer Reports, "Almost every new car sold in the U.S. today falls into a grey area from Level 0 to Level 2-3."

