Object Verification for ADAS & HAD with CANape Option Driver Assistance
Driver assistance systems and systems for highly automated driving (HAD) acquire information about the vehicle's environment via a wide variety of sensors such as video, radar, LIDAR, etc. Warnings to the driver or (semi-)autonomous interventions in the driving situation are made based on the results of object detection, such as the distance to the vehicle driving ahead. Measurement results are available during the drive, but verifying them is not straightforward. Option Driver Assistance serves precisely this purpose: it displays sensor data as graphic elements such as rectangles and lines, or as point clouds. In CANape, the option may be used either during the measurement or for later evaluation of the measured data.
Advantages
Option Driver Assistance displays objects acquired by the sensors of a driver assistance system in the video image of a reference camera that is recorded synchronously with the measurement. It provides supplemental logging of the driving situation and serves to verify the sensor data. Based on object data computed by the ECU, geometric symbols or bitmaps are superimposed at specified points in the video image. By comparing recognized objects to the real environment, you can verify the sensor's object recognition algorithms quickly and reliably.
In the GPS window, you can display the associated position data and use it for evaluation purposes. Available map materials include OpenStreetMap and Shobunsha Super MappleG. In addition, graphic objects can be displayed in the GPS window.

Application Areas
The flexible configuration capabilities of Option Driver Assistance cover a wide range of application areas in the development of driver assistance systems. Use it to:
- Check object recognition algorithms for ACC (Adaptive Cruise Control), “stop and go” systems, and parking assistance systems with the help of object overlaying
- Develop lane keeping systems or adaptive lighting for curves and display driving lanes as curves
- Provide useful testing support of traffic sign recognition systems with linking of bitmaps
Functions
The GFX Editor offers the convenience of associating detected sensor data (vehicles, road markings, traffic signs, etc.) with graphic elements (polygons for driving lane detection, rectangles for vehicle identification), which are displayed as overlays in the Video and GPS windows. In addition, a user-scalable view is available in the Multimedia window. This window, known as the “Grafx” window, shows the objects from a user-configurable bird’s eye perspective.
In addition, image processing algorithms can be linked into CANape in the form of DLLs. Video inputs and outputs are routed through CANape, and the results of the algorithm are visualized there. This lets you optimize algorithm parameters online, just as with an ECU.
Creating the Object Verification

The properties of objects to be displayed, i.e. the relationship between real objects and their display on the screen, are stored in the Object Signal Mapping file. This file contains the flexible mapping of all parameters, i.e. measurement variables and preset constant variables, to display objects (X, Y, Z coordinates, size, color, text and numeric fields, etc.).
Numerous standardized, predefined symbolic objects such as occupancy grids, sensor fields, parking assist objects, crosses, squares, triangles, and lines are available for representing objects. Saved bitmaps may also be used to represent objects. For more intuitive evaluation of the display it is possible to combine individual objects into groups. The GFX Editor supports the user in creating and managing the object-signal mapping file.
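The Object Signal Mapping file format itself is proprietary; purely as an illustration of the concept, the mapping can be thought of as associating named display objects with measured signals and constant display attributes. All signal names and keys below are hypothetical, not the actual CANape file format:

```python
# Illustrative sketch only: names and structure are hypothetical,
# not the real CANape Object Signal Mapping format.
object_mapping = {
    "LeadVehicle": {
        "shape": "rectangle",        # graphic element to overlay
        "x": "Obj01.DistX",          # longitudinal position signal
        "y": "Obj01.DistY",          # lateral position signal
        "width": "Obj01.Width",      # measured object width
        "color": (255, 0, 0),        # constant: red overlay
        "label": "Obj01.RelSpeed",   # numeric field shown next to the object
    },
    "LaneLeft": {
        "shape": "polygon",          # polygon for driving lane detection
        "points": "Lane.LeftCoeffs", # lane curve described by signal data
        "color": (0, 255, 0),        # constant: green overlay
    },
}

def display_attributes(name):
    """Return the display configuration for a named object."""
    return object_mapping[name]
```

The point of the structure is the separation the text describes: per object, some attributes are bound to measurement variables (they change every cycle), while others are preset constants.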
Display & Evaluation
Object data, which are either acquired as measured signals or exist as signals in measurement files, are shown as graphic elements and are superimposed on other information:
- Perspective views and time-synchronous display of the evaluated object information in the video image
- Continuously adjustable object display (from side view to bird’s eye view) with variable grid size (X, Y, Z elongation)
- To achieve an optimum display during the measurement or measured data evaluation, objects can simply be selected by numeric input (e.g. object numbers 1-5, 6, 8-10) or by preconfigured groups
- Objects, texts, and parameter values can be drawn as supplemental information at fixed or variable pixel positions
- Relative speed and lateral deviation can be displayed as horizontal and vertical excursion lines
- Text and numeric information on the object can be shown in the display
- Any desired zoom level in the Grafx window lets you display precisely the section you need for your application
- For easy checking, distances and angles can be continuously calculated during the measurement and shown in the Grafx window
- Subsequent adjustment of all object parameters (size, color, text and numeric fields, etc.) for measurement data evaluation
- Measured data of the LIDAR sensors (e.g. Velodyne, Ibeo and Quanergy) are visualized in the Scene window which displays the received point cloud objects in 3D. A range of views and rotation and zoom mechanisms are available to permit optimum analysis.
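The continuously calculated distances and angles mentioned above follow directly from an object's position relative to the ego vehicle. A minimal sketch of that computation (coordinate convention assumed: x longitudinal, y lateral, both in meters):

```python
import math

def distance_and_angle(x, y):
    """Compute range and azimuth to an object from its position
    relative to the ego vehicle (x: longitudinal, y: lateral, meters)."""
    distance = math.hypot(x, y)             # Euclidean range in meters
    angle = math.degrees(math.atan2(y, x))  # azimuth in degrees, 0 = straight ahead
    return distance, angle

# Example: object 30 m ahead and 4 m to the side
d, a = distance_and_angle(30.0, 4.0)
```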

Occupancy Grid
For the development of autonomous vehicles, the ECU requires a model of the vehicle's environment. A frequently used model is the "Occupancy Grid": the environment is divided into small cells, and each cell is assigned a probability that it is occupied.
For this purpose, sensor data from around the vehicle are merged and evaluated by special algorithms. The result is the probability that an obstacle is present at a clearly defined position relative to the vehicle. This occupancy probability is represented by a standardized numeric value, and the data are saved in a two-dimensional characteristic map that reflects the environment. This information is essential for an autonomous vehicle to decide whether it can continue traveling in a given direction.
CANape measures and processes the Occupancy Grid as a 500 x 500 grid with one byte per grid point. Using color functions and the Occupancy Grid overlay object, you can visualize and validate the vehicle's captured environment as determined by the ECU's analysis algorithm. The Occupancy Grid can be displayed three-dimensionally in the Video window, in a bird's eye view, or in the GPS window.
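Conceptually, such a grid is a 500 x 500 array with one byte per cell, where each byte encodes an occupancy probability. The sketch below illustrates this data layout; it is not CANape's internal representation, and the cell size and probability encoding are assumptions for illustration:

```python
import numpy as np

GRID_SIZE = 500      # 500 x 500 grid points, as handled by CANape
CELL_SIZE_M = 0.2    # assumed cell edge length in meters (illustrative)

# One byte per grid point: 0 = certainly free, 255 = certainly occupied,
# 128 ~ unknown (probability 0.5).
grid = np.full((GRID_SIZE, GRID_SIZE), 128, dtype=np.uint8)

def mark_obstacle(grid, x_m, y_m, probability):
    """Write an occupancy probability (0.0..1.0) at a position given
    in meters relative to the grid origin."""
    row = int(y_m / CELL_SIZE_M)
    col = int(x_m / CELL_SIZE_M)
    grid[row, col] = int(round(probability * 255))

mark_obstacle(grid, 10.0, 6.0, 0.9)  # obstacle 10 m ahead, 6 m to the side
```

With one byte per point, the full grid occupies 250,000 bytes, which is why a compact integer encoding of the probability is used rather than floating point.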
Training

CANape Fundamentals Workshop
Vector offers many opportunities to build and broaden your knowledge of CANape. We recommend the CANape Fundamentals Workshop as an entry-level course. It is best to take this basic course before attending the advanced training courses that are also offered; however, you may register for any of the courses independently.