Object Verification for ADAS & HAD
CANape Option Driver Assistance

Driver assistance systems and systems for highly automated driving (HAD) acquire information about the vehicle’s environment via a wide variety of sensors such as video, radar, LIDAR, etc. Warnings to the driver or (semi-)autonomous interventions in the driving situation are made based on the results of object detection, such as the distance to the vehicle driving ahead. Measurement results are available during the drive, but verifying them is not easy. Option Driver Assistance serves precisely this purpose: it displays sensor data as graphic elements such as rectangles and lines, or as point clouds. In CANape, the option can be used either during the measurement or for later evaluation of the measured data.

Advantages

Option Driver Assistance displays objects acquired by the sensors of a driver assistance system in the video image of a reference camera that is recorded synchronously with the measurement. It supplements the log of the driving situation and serves to verify the sensor data. Based on object data computed by the ECU, geometric symbols or bitmaps are superimposed at specified points on the video image. By comparing recognized objects with the real environment, you can verify the sensor’s object recognition algorithms quickly and reliably.

In the GPS window, you can display the associated position data and use it for evaluation purposes. Available map materials include OpenStreetMap and Shobunsha Super MappleG. In addition, graphic objects can be displayed in the GPS window.

Application Areas

The flexible configuration capabilities of Option Driver Assistance cover a wide range of application areas in the development of driver assistance systems. It can be used to:

  • Check object recognition algorithms for ACC (Adaptive Cruise Control), “stop and go” systems, and parking assistance systems with the help of object overlaying
  • Develop lane keeping systems or adaptive lighting for curves and display driving lanes as curves
  • Provide useful testing support of traffic sign recognition systems with linking of bitmaps

Functions

The GFX Editor offers the convenience of associating detected sensor data (vehicles, road markings, traffic signs, etc.) with graphic elements (polygons for driving lane detection, rectangles for vehicle identification), which are displayed as overlays in the Video and GPS windows. In addition, a user-scalable view is available in the Multimedia window. This window, known as the “Grafx” window, shows the objects from a user-configurable bird’s eye perspective.

In addition, image processing algorithms can be linked into CANape in the form of DLLs. Video inputs and outputs are routed through CANape, and the results of the algorithm are visualized there. This lets you optimize algorithm parameters online, just as with an ECU.
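The data flow described above (video frame in, detections out, results overlaid for visualization) can be sketched as follows. This is a minimal illustration only; CANape's actual DLL plug-in interface is not shown, and the detector, the `Detection` type, and both function names are invented for the example.

```python
# Sketch of a frame-processing hook as described above: a frame goes in,
# the algorithm returns detections, and the results are overlaid for display.
# All names here are illustrative, not part of any CANape API.

from dataclasses import dataclass

@dataclass
class Detection:
    x: int       # top-left pixel column
    y: int       # top-left pixel row
    w: int       # width in pixels
    h: int       # height in pixels
    label: str

def process_frame(frame):
    """Toy 'algorithm': bound all bright pixels (> 200) in one detection."""
    bright = [(r, c) for r, row in enumerate(frame)
              for c, px in enumerate(row) if px > 200]
    if not bright:
        return []
    rows = [r for r, _ in bright]
    cols = [c for _, c in bright]
    return [Detection(min(cols), min(rows),
                      max(cols) - min(cols) + 1,
                      max(rows) - min(rows) + 1, "object")]

def overlay(frame, detections, value=255):
    """Draw a rectangle outline for each detection onto a copy of the frame."""
    out = [row[:] for row in frame]
    for d in detections:
        for c in range(d.x, d.x + d.w):
            out[d.y][c] = value
            out[d.y + d.h - 1][c] = value
        for r in range(d.y, d.y + d.h):
            out[r][d.x] = value
            out[r][d.x + d.w - 1] = value
    return out
```

In a real setup, tuning the threshold (here hard-coded at 200) online while watching the overlaid video is exactly the parameter-optimization workflow the text describes.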

Creating the Object Verification

Screenshot: the GFX Editor of Option Driver Assistance, used to conveniently perform object-signal mapping and grouping for object display

The properties of objects to be displayed, i.e. the relationship between real objects and their display on the screen, are stored in the Object Signal Mapping file. This file contains the flexible mapping of all parameters, i.e. measurement variables and preset constant variables, to display objects (X, Y, Z coordinates, size, color, text and numeric fields, etc.).
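Conceptually, such a mapping associates each display property of an object with either a measurement signal or a preset constant. The sketch below illustrates this idea with invented signal names and a hypothetical `resolve` helper; the actual file format of CANape's object-signal mapping is not reproduced here.

```python
# Illustrative object-signal mapping: display properties map either to a
# measurement signal (string) or to a preset constant. Signal names are invented.

mapping = {
    "LeadVehicle": {                 # display object: a rectangle overlay
        "shape":  "rectangle",
        "x":      "Obj01.DistX",     # measurement signal -> X coordinate
        "y":      "Obj01.DistY",     # measurement signal -> Y coordinate
        "width":  1.8,               # preset constant (metres)
        "color":  "Obj01.Class",     # signal-driven color selection
        "label":  "Obj01.Id",        # numeric field shown next to the object
    },
}

def resolve(mapping_entry, signal_values):
    """Replace signal names with current measured values; constants and
    unmapped strings pass through unchanged."""
    return {prop: signal_values.get(src, src) if isinstance(src, str) else src
            for prop, src in mapping_entry.items()}
```

At each measurement cycle, resolving the mapping against the current signal values yields the concrete properties (position, size, color) used to draw the overlay.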

Numerous standardized, predefined symbolic objects such as occupancy grids, sensor fields, parking assist objects, crosses, squares, triangles, and lines are available for representing objects. Saved bitmaps may also be used to represent objects. For more intuitive evaluation of the display it is possible to combine individual objects into groups. The GFX Editor supports the user in creating and managing the object-signal mapping file.

Display & Evaluation

Object data, which are either acquired as measured signals or exist as signals in measurement files, are shown as graphic elements and are superimposed on other information:

  • Perspective views and time-synchronous display of the evaluated object information in the video image
  • Continuously adjustable object display (from side view to bird’s eye view) with variable grid size (X, Y, Z elongation)
  • For an optimum display during the measurement or measured data evaluation, objects can be selected simply by numeric input (e.g. object numbers 1-5, 6, 8-10) or by preconfigured groups
  • Objects, texts, and parameter values can be drawn as supplemental information at fixed or variable pixel positions
  • Relative speed and lateral deviation can be displayed as horizontal and vertical excursion lines
  • Text and numeric information on the object can be shown in the display
  • Any desired zoom level in the Grafx window lets you display precisely the section you need for your application
  • Distances and angles can be calculated continuously during the measurement and shown in the Grafx window for easy checking
  • Subsequent adjustment of all object parameters (size, color, text and numeric fields, etc.) for measurement data evaluation
  • Measured data of the LIDAR sensors (e.g. Velodyne, Ibeo and Quanergy) are visualized in the Scene window which displays the received point cloud objects in 3D. A range of views and rotation and zoom mechanisms are available to permit optimum analysis.
Screenshot: CANape visualization of a LIDAR point cloud for ADAS development. LIDAR sensor data (e.g. from Velodyne, Ibeo, and Quanergy) is acquired reliably and visualized meaningfully as a point cloud.
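The numeric selection syntax mentioned in the list above (e.g. "1-5, 6, 8-10") can be expanded into a concrete set of object IDs with a small helper. This is an illustration of the syntax, not CANape's own parser.

```python
# Expand a selection string such as "1-5, 6, 8-10" into a sorted list of
# object IDs. Illustrative helper only; not part of CANape.

def parse_selection(spec):
    ids = set()
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:
            lo, hi = (int(p) for p in part.split("-"))
            ids.update(range(lo, hi + 1))  # inclusive range, e.g. 1-5
        else:
            ids.add(int(part))             # single object number
    return sorted(ids)
```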

Occupancy Grid

For the development of autonomous vehicles, environment models of the vehicle's surroundings are required in the ECU. A frequently used model is the "Occupancy Grid": the environment is divided into small sections, and each section is assigned a probability that it is occupied.

For this purpose, sensor data from around the vehicle are merged and evaluated by special algorithms. The result is the probability that an obstacle is present at a clearly defined position relative to the vehicle. This probability of presence is represented by a normalized numeric value. The data are saved in a two-dimensional characteristic map that reflects the environment. For an autonomous vehicle, this information is essential for deciding whether further travel in a given direction is possible.
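The merging step can be sketched with the standard log-odds form of a Bayesian occupancy update, which is one common way such fusion algorithms combine a cell's prior probability with a new sensor measurement. The exact algorithm used in a given ECU is not specified by the text; this is only a minimal example of the principle.

```python
# Minimal sketch of probabilistic fusion for one grid cell, using the
# common log-odds formulation. Not the ECU's actual algorithm.

import math

def logodds(p):
    return math.log(p / (1 - p))

def prob(l):
    return 1 / (1 + math.exp(-l))

def fuse(p_prior, p_measure):
    """Bayesian update of a cell's occupancy probability: in log-odds
    space, the prior and the new measurement simply add."""
    return prob(logodds(p_prior) + logodds(p_measure))
```

Two consistent "occupied" readings push the probability above either single reading, while a contradicting reading pulls it back toward 0.5, which is why the grid converges to a stable picture of the surroundings.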

CANape measures and processes the Occupancy Grid using a 500 x 500 grid with one byte per grid point. Using color functions and the Occupancy Grid overlay object, you visualize and validate the captured environment of the vehicle as determined by the analysis algorithm of the ECU. The Occupancy Grid can be displayed in the Video window (three-dimensionally), in bird's eye view, or in the GPS window.
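The storage layout described above (500 x 500 grid, one byte per point) and the idea of a color function can be sketched as follows. The grayscale mapping is an invented example of a color function, not the one CANape ships.

```python
# One byte per grid point, as described above: occupancy probability
# 0.0-1.0 is stored as 0-255. The color function below is illustrative.

GRID = 500                         # 500 x 500 grid
grid = bytearray(GRID * GRID)      # all cells start at 0

def set_occupancy(x, y, probability):
    """Store an occupancy probability (0.0-1.0) as a byte (0-255)."""
    grid[y * GRID + x] = round(probability * 255)

def to_color(byte_value):
    """Example color function: map occupancy to a gray level
    (white = free, black = occupied)."""
    g = 255 - byte_value
    return (g, g, g)
```

At one byte per point the whole grid occupies 250,000 bytes, which is why it can be measured and redrawn continuously during a drive.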

Training

CANape Fundamentals Workshop

Vector offers many different opportunities for you to build and broaden your knowledge of CANape. We recommend the CANape Fundamentals Workshop as an entry-level course. It is best to take this basic course before attending the advanced training courses that are also offered; however, you may register for any course independently.

Related ADAS Products

CANape Option vCDM

Easy Collaboration on Parameter Sets Within a Team

Learn more
CANape Option vMDM

Provision and Analysis of Measurement Data

Learn more
CANape Option Simulink XCP Server

Visualize and Parameterize Simulink Models Easily and Efficiently

Learn more
CANape Option Bypassing

Bypassing Computation with Deterministic Time Behavior

Learn more
CANape Option Thermodynamic State Charts

Display of Thermodynamic Data and Informative State Charts for Online and Offline Analysis

Learn more