Enhanced safety features are widely seen as the next major differentiator for vehicle manufacturers looking to grow market share. At the same time, artificial intelligence is increasingly viewed as the enabler of advanced features and capabilities that were not previously considered possible or viable. For instance, using sensor fusion in a vehicle to make tactical driving decisions is generally considered viable only with artificial intelligence.
Next-generation advanced driver-assistance systems (ADAS) must process vast amounts of data through sensor fusion applications with decision-making capabilities, both to make sense of the environment in which vehicles operate and to execute safe tactical maneuvers.
This creates a growing need for neural network processing and multi-layer data compute stacks that can receive and consume vast amounts of data from sources such as sensors, cameras, GPS, LiDAR 3D point clouds, ultrasonics, and vehicle-to-everything (V2X) or vehicle-to-vehicle (V2V) communication.
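To make the fusion idea above concrete, the sketch below combines two noisy range estimates of the same object (say, from LiDAR and radar) using inverse-variance weighting, one of the simplest sensor fusion techniques. The sensor names, measurements, and noise variances are illustrative assumptions, not values from any specific ADAS stack; production systems use far richer methods (e.g., Kalman filtering over full object states).

```python
# Minimal sensor fusion sketch: inverse-variance weighting of two
# independent estimates of the same quantity. The lower-variance
# (more trusted) sensor receives the larger weight, and the fused
# variance is smaller than either input variance.

def fuse_estimates(mean_a, var_a, mean_b, var_b):
    """Fuse two independent noisy estimates of the same quantity."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused_mean = (w_a * mean_a + w_b * mean_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused_mean, fused_var

# Hypothetical example: LiDAR reports 25.0 m (variance 0.04 m^2),
# radar reports 25.6 m (variance 0.36 m^2).
dist, var = fuse_estimates(25.0, 0.04, 25.6, 0.36)
```

Because the LiDAR reading is assumed far less noisy here, the fused distance lands close to 25.0 m, and the combined variance drops below that of either sensor alone, which is exactly the benefit fusion is meant to deliver.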
Moreover, the layers in these systems must be not only computationally efficient but also compatible with existing automotive safety standards, such as ISO 26262 for functional safety.