Software quality and testing completeness are linked: you cannot have quality without complete testing. For teams that want to improve quality, the hardest question is often, “Where do we start?” VectorCAST/Analytics helps answer that question by making it easy to gather and publish the key “where are we today?” quality metrics.
Real-time access to quality and testing completeness metrics
Built-in connectors for all VectorCAST-produced data
User-defined connectors for third-party data
Fully customizable dashboard based on an organization’s goals
VectorCAST/Analytics provides an easy-to-understand, web-based dashboard view of software code quality and test completeness metrics, enabling users to identify trends in a single codebase or compare metrics across multiple codebases.
Highlights of VectorCAST/Analytics
Real-Time Code Quality Metrics
Provides quantifiable data on tests run vs. tests needed, release readiness, risk areas, and hot spot identification.
Technical Debt Identification
Identifies the key components of technical debt, such as code complexity, comment density, and testing completeness.
Test Case Quality
Reports on the quality of test cases with metrics such as the number of requirements tested, tests with expected values, and tests with expected values but no associated requirements.
Customizable Dashboards
Allows end-user customization of calculated metrics, as well as data presentation, using a variety of built-in graphs and tables.
Extendable Data Connectors
Includes built-in data connectors for all VectorCAST tools and is easily extended to support any third-party data sources.
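As a rough illustration of what a user-defined connector does, the sketch below parses a hypothetical third-party CSV export into per-function base metrics. The CSV column names and the dictionary shape are assumptions for the example, not the actual VectorCAST connector interface.

```python
import csv
import io

def parse_third_party_metrics(csv_text):
    """Parse a hypothetical third-party CSV export into per-function
    base metrics (cyclomatic complexity and statement coverage)."""
    metrics = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        metrics[row["function"]] = {
            "complexity": int(row["complexity"]),
            "coverage": float(row["coverage"]),  # fraction of statements covered
        }
    return metrics

# Example export from a hypothetical third-party coverage tool
sample = """function,complexity,coverage
parse_msg,14,0.35
send_ack,3,0.90
"""

base_metrics = parse_third_party_metrics(sample)
print(base_metrics["parse_msg"])  # prints {'complexity': 14, 'coverage': 0.35}
```

Once third-party data is normalized into base metrics like these, it can feed the same dashboards and calculated metrics as VectorCAST-produced data.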
How it Works
VectorCAST/Analytics provides user-configurable data connectors that capture key metrics such as static analysis errors, code complexity, code coverage, and testing completeness from VectorCAST or third-party tools. These base metrics can be combined into calculated metrics that identify hot spots in the code, such as functions with high complexity and low coverage. Displaying this information in a heat map view, where code coverage controls the box color and code complexity controls the box size, lets users see at a glance where to invest testing and refactoring resources for the best return on investment: big red boxes indicate highly complex functions that are poorly tested.
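The hot-spot idea described above can be sketched as a small calculated metric. The scoring formula, color thresholds, and sample data below are illustrative assumptions, not the product’s actual computation:

```python
def hot_spot_score(complexity, coverage):
    # Assumed example metric: higher complexity and lower
    # coverage both push the score up.
    return complexity * (1.0 - coverage)

def heat_map_box(name, complexity, coverage, max_complexity):
    # Coverage controls box color (thresholds are illustrative),
    # complexity controls relative box size.
    if coverage < 0.5:
        color = "red"
    elif coverage < 0.8:
        color = "yellow"
    else:
        color = "green"
    size = complexity / max_complexity
    return {"name": name, "color": color, "size": round(size, 2)}

# Hypothetical per-function base metrics: (complexity, coverage)
functions = {"parse_msg": (14, 0.35), "send_ack": (3, 0.90), "route": (9, 0.60)}

max_cx = max(cx for cx, _ in functions.values())
boxes = [heat_map_box(n, cx, cov, max_cx) for n, (cx, cov) in functions.items()]
worst = max(functions, key=lambda n: hot_spot_score(*functions[n]))
print(worst)  # prints parse_msg -- the "big red box" in this sample
```

In this sample, `parse_msg` gets the largest, reddest box: it is the most complex function and the least tested, so it is where testing and refactoring effort pays off first.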