Bringing Test-Driven Development to Automotive Applications
Digitization, electrification, connectivity, autonomous driving – automotive OEMs and their suppliers are experiencing one of the biggest paradigm shifts ever. The value chain is changing rapidly, but are the existing innovation and development processes fast and flexible enough to keep up?
Currently, the market still accepts the traditional development cycles of the major automotive brands. However, more and more users are becoming accustomed to the much higher speed of innovation in consumer electronics. To survive, automotive companies will have to succeed in markets that are far more dynamic than those of the past.
As software is becoming increasingly important for product differentiation, software development is also becoming a core competence in the automotive world. Agile development methods such as test-driven development (TDD), which are also spilling over into the automotive industry as software advances, represent both a challenge and a great opportunity for managing change.
Cost Reduction and Better Code Quality
Test-driven development primarily differs from other approaches to testing in that it involves creating tests before the program code itself is written. This forces the developer to think about test cases based on the requirements and interfaces, not based on the design of the code.
Deploying TDD can bring multiple benefits to a project, including reduced development costs, higher-quality software and a shorter time to market. In a detailed analysis carried out by the University of Helsinki, the effects of TDD were measured across several software development categories (Figure 1), and its effectiveness compared to traditional methods was demonstrated.
Figure 1 – Helsinki report: Test-driven development can reduce the number of defects and result in more maintainable code.
As the automotive market transitions from being hardware based towards the software-defined car, the ability of OEMs and their suppliers to scale up the development of their software while maintaining quality and (safety) compliance will be critical for keeping pace with industry demands.
McKinsey states that by 2030, 30% of the development costs of a vehicle will be attributable to software.
Figure 2 – McKinsey expects software to account for nearly 30 % of total vehicle content by 2030.
As a result, software development methodologies that allow development to scale while keeping costs low and quality high will be critical. This is exactly the challenge that TDD addresses.
Decompose Complex Systems into Manageable Components
Modern systems used in vehicles are often referred to as cyber-physical systems (CPS) because they not only perform a physical control function, but are also connected to the Internet. This enables over-the-air (OTA) software updates, connected vehicle diagnostics for predictive maintenance, and R&D improvements to components based on field usage data. The control system can thus be regarded as an encapsulation of the software aspects of the system, with specific functionality broken down into larger subsystem components and smaller, decomposed software components (SWCs).
Figure 3 – Cyber-physical System
This structured approach to the design and development of software is comparable to traditional software engineering practices such as model-based design (MBD). With the size and weight considerations of the automotive space and with safety and security needs growing, it is common to see the entire CPS and its subsystems being further segregated into smaller SWCs or aggregated into one single monolithic system, depending on the specific requirements of an OEM.
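As a minimal sketch of such a decomposition (the component names and thresholds here are hypothetical, purely for illustration), a wiper subsystem might be split into two small SWCs with narrow interfaces, so that each piece can be designed and unit-tested in isolation before the subsystem behavior is composed from them:

```c
#include <stdint.h>

/* SWC 1: rain sensing - maps a raw sensor reading (0..255)
 * to a rain intensity level 0..3. */
uint8_t rain_level(uint8_t raw) {
    if (raw < 64u)  return 0u;
    if (raw < 128u) return 1u;
    if (raw < 192u) return 2u;
    return 3u;
}

/* SWC 2: wiper control - maps a rain intensity level to a
 * wiper speed step. */
uint8_t wiper_speed(uint8_t level) {
    static const uint8_t speed_by_level[4] = { 0u, 1u, 2u, 3u };
    return speed_by_level[level & 3u];
}

/* Subsystem-level behavior emerges from composing the two SWCs;
 * each can be replaced, stubbed or retested independently. */
uint8_t wiper_speed_from_sensor(uint8_t raw) {
    return wiper_speed(rain_level(raw));
}
```

Whether these two functions end up as separate SWCs or aggregated into one monolithic component is exactly the architectural decision described above; the narrow interfaces keep both options open.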
Because architecture models evolve constantly from one CPS to the next, a tool for designing, iterating and publishing architecture models is critical to ensuring that designs are created correctly, quickly and efficiently.
In the automotive world, the standard for designing such software architectures is AUTOSAR. Vector provides its own tools for designing the software architecture as well: DaVinci Developer and PREEvision. While DaVinci Developer is focused on the design of AUTOSAR software components, PREEvision enables the design of software architectures as part of an overall system design including communication buses such as CAN and Ethernet. For code-based development of the software components, generated header files and an implementation template can be used as a starting point.
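To illustrate what such a generated starting point can look like (the component and port names below are hypothetical, but follow the AUTOSAR RTE naming convention; the stub RTE is hand-written here purely for host-based illustration), a generated header declares the component's port access functions and runnable, and the developer fills in only the template body:

```c
#include <stdint.h>

/* --- Excerpt of a generated-style SWC header (hypothetical names,
 *     AUTOSAR RTE naming convention) --- */
typedef uint8_t Std_ReturnType;
#define E_OK ((Std_ReturnType)0u)

Std_ReturnType Rte_Read_SpeedSensor_Speed(uint16_t *speed);
Std_ReturnType Rte_Write_Display_Speed(uint16_t speed);
void SpeedIndicator_MainFunction(void);

/* --- Minimal stub RTE so the component can run on the host --- */
static uint16_t sensor_speed = 0u;
static uint16_t displayed_speed = 0u;

void TestRte_SetSensorSpeed(uint16_t speed) { sensor_speed = speed; }
uint16_t TestRte_GetDisplayedSpeed(void) { return displayed_speed; }

Std_ReturnType Rte_Read_SpeedSensor_Speed(uint16_t *speed) {
    *speed = sensor_speed;
    return E_OK;
}

Std_ReturnType Rte_Write_Display_Speed(uint16_t speed) {
    displayed_speed = speed;
    return E_OK;
}

/* --- Body the developer fills into the implementation template:
 *     read the input port, forward the value to the output port --- */
void SpeedIndicator_MainFunction(void) {
    uint16_t speed = 0u;
    if (Rte_Read_SpeedSensor_Speed(&speed) == E_OK) {
        (void)Rte_Write_Display_Speed(speed);
    }
}
```

Because the port interfaces are fixed by the architecture model, tests against them can be written before the runnable body exists.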
The Test Pyramid
Efficient Distribution of Tests
Figure 4 – Test pyramid
This top-level system architecture and low-level SWC architecture lend themselves nicely to the testing pyramid strategy. The "testing pyramid" groups software tests into layers of different granularity. In this case, the AUTOSAR software component description maps to the service & API layer, while the generated header files and implementation templates map to the unit test layer.
The artifacts produced by these tools thus map nicely onto the prerequisites for following a TDD process. The TDD methodology centers on creating tests before programmers write each piece of code for a software project. As in the agile framework, test-driven development requires that a project be divided into small iterations, each of which produces a deliverable unit.
When utilizing the test-driven approach, developers working on a particular feature or component begin by creating an automated test that verifies the requirements for the code they are about to write. This test is based on the predetermined specifications and requirements for that feature or component. Initially, the program will fail the test, since the feature has not yet been created. The developers then work to create code that will pass the test. Once successful, they can then work to "clean up" the code, ensuring along the way that the code continues to pass the test.
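As a minimal sketch of this test-first ordering (the `clamp_speed` function and its requirement are hypothetical, invented purely for illustration), the test is derived from the requirement and written against the interface before any implementation exists; only then is the simplest code added that makes it pass:

```c
/* Interface derived from the requirement, not from existing code:
 * "the requested speed shall never exceed the configured maximum". */
int clamp_speed(int requested_kmh, int max_kmh);

/* Step 1: the test is written first. It initially fails (the body
 * below does not yet exist), which is the "red" phase of TDD. */
int test_clamp_speed(void) {
    if (clamp_speed(80, 120) != 80)  return 1; /* below limit: pass through */
    if (clamp_speed(150, 120) != 120) return 1; /* above limit: clamp */
    return 0; /* all checks passed */
}

/* Step 2: the minimal implementation that makes the test pass
 * ("green"); refactoring may follow while the test keeps passing. */
int clamp_speed(int requested_kmh, int max_kmh) {
    return (requested_kmh > max_kmh) ? max_kmh : requested_kmh;
}
```

The refactoring step then improves the code's structure while `test_clamp_speed` continues to guard the required behavior.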
TDD with VectorCAST
When you are building a test environment with VectorCAST, you simply point to the directory containing the header files for the API that you want to test. These are the header files generated by DaVinci Developer. VectorCAST will create the test environment automatically, including smart stubs, mock objects and driver code. This results in a complete executable test harness that can be run on the host platform or in an embedded development environment. Most importantly, as you iterate through your development cycles, VectorCAST will automatically include the code for each implemented function into the test environment as it is developed.
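The effect of such stubbing can be sketched in plain C (the function names here are hypothetical and hand-written; they are not VectorCAST APIs, merely an illustration of what the generated smart stubs achieve): a dependency of the unit under test is replaced by a controllable stub, so fault paths can be exercised without real hardware:

```c
#include <stdint.h>
#include <stdbool.h>

/* Dependency of the unit under test; in a real project this would
 * live in another module or read a hardware sensor. */
int read_coolant_temp_raw(void);

/* Stub control: the test decides what the "sensor" reports. */
static int stub_raw_value = 0;
void stub_set_raw_value(int v) { stub_raw_value = v; }
int read_coolant_temp_raw(void) { return stub_raw_value; }

/* Unit under test: flags overheating above a 110 degC threshold. */
bool coolant_overheated(void) {
    return read_coolant_temp_raw() > 110;
}
```

Driving the stub from the test makes it trivial to cover both the normal and the overheated branch, which is exactly the isolation a generated test harness provides automatically.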
For a deeper understanding of this, the following white paper is recommended reading:
vTESTstudio and CANoe together form a multifaceted, integrated work environment for developing tests for embedded systems. When configuring the environment to test the service layer or API of a CPS, you can simply point the tools at the AUTOSAR SWC description from PREEvision or DaVinci Developer. This provides them with all the data required to stimulate the public interfaces of the SWCs of a CPS. vTESTstudio then lets the tester model the behavior of the external ports (e.g. input and expected output messages on the interfaces), and the tests can be run in CANoe in an automated manner.
As new functionality is added with vTESTstudio and CANoe, the tests can be rerun to confirm that the SWCs are progressing towards meeting all their requirements and that no existing functionality has been lost – in a similar way to the tests that are rerun on the SWC each time new code is added. Where the behavior of an SWC is reliant on other functionality that is not present, CANoe can be used to simulate this behavior and allow for the parallelization and distribution of the development of the SWCs across departments or suppliers – all using an agreed-upon interface that was automatically generated by PREEvision based on a formally agreed-upon design.
Bringing It All Together in a Continuous Integration and Testing Workflow With VectorCAST/QA
Continuous integration and testing is a key component of TDD. Each developer integrates changes (1) to the source code repository when they are ready for testing, leading to multiple integrations per day. Each integration is then verified by an automated build (3) and test step (4) to detect integration errors (5) as quickly as possible. This approach leads to significantly reduced integration issues, allowing the project team to develop more cohesive software in a shorter amount of time.
Figure 5 – Continuous integration: New program parts can be tested and merged immediately. Errors can thus be localized very early and corrected at reasonable cost and within a reasonable amount of time.
VectorCAST/QA is the ideal tool for supporting continuous integration and testing, as it allows development teams to assemble previously developed VectorCAST and vTESTstudio/CANoe test environments into regression test suites, providing a single point of control for all unit, integration and functional test activities. At-a-glance logs, summary reports and color-coded pass/fail criteria highlight the status of each test within the regression suite. Trend reporting is also available to show testing progress over time.
With the growing trend towards the software-defined vehicle, processes like TDD will be critical for keeping the development costs of these systems low. By creating an automated workflow from concept through modeling of the entire system and on to its decomposition into software components, we can define APIs at the service and unit layers in an automated, reusable process.
This capability then allows us to commence test case specification and construction based on requirements using tools like VectorCAST and vTESTstudio/CANoe without the code under test being present: only its published APIs need be available – AUTOSAR system descriptions for the service layer and header files for the unit layer. As these tools can also analyze and execute test cases with automated measurement of the correctness of results, they fit nicely into continuous integration and testing workflows, enabling a fully automated process from design specification through to verification of the implementation on every software update.