VectorCAST/Analytics™
Web-based Metrics for Quality and Testing

Software quality and testing completeness are linked: you cannot have quality without complete testing. For teams that want to improve quality, the hardest question to answer is often “where do we start?” VectorCAST/Analytics helps answer that question by making it easy to gather and publish the key “where are we today?” quality metrics.

Features

VectorCAST/Analytics provides an easy-to-understand, web-based dashboard view of software code quality and test completeness metrics, enabling users to identify trends in a single codebase or compare metrics across multiple codebases.

  • Real-Time Code Quality Metrics: Presents up-to-date code quality and test completeness data on the dashboard, so trends reflect the most recently captured results.

  • Technical Debt Identification: Identifies data on the key components of technical debt such as code complexity, comment density, and testing completeness.
  • Test Case Quality: Reports on the quality of test cases with metrics such as the number of requirements tested, tests with expected values, and tests with expected values but no requirements.
  • Customizable: Allows end-user customization of calculated metrics, as well as data presentation using a variety of built-in graphs and tables.
  • Extendable Data Connectors: Includes built-in data connectors for all VectorCAST tools and is easily extended to support any third-party data sources (see the sketch after this list).
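
To illustrate the last point, here is a minimal Python sketch of what a third-party connector could look like. The `Metric` shape, the `lint_report_connector` function, and the CSV layout are all hypothetical assumptions for demonstration; the actual VectorCAST/Analytics connector interface is documented with the product. The sketch only conveys the general idea of mapping a tool's report into name/value metrics.

```python
# Hypothetical connector sketch -- not the actual VectorCAST/Analytics
# connector API. It illustrates the general shape of a custom connector:
# read a third-party tool's report and emit name/value metrics.
import csv
from dataclasses import dataclass

@dataclass
class Metric:
    source: str    # tool the value came from (e.g., "lint")
    unit: str      # code unit the value describes (file or function)
    name: str      # metric name (e.g., "static_analysis_errors")
    value: float

def lint_report_connector(report_path: str) -> list[Metric]:
    """Parse a hypothetical static-analysis CSV report with the
    columns: file,errors,warnings -- one row per source file."""
    metrics = []
    with open(report_path, newline="") as f:
        for row in csv.DictReader(f):
            metrics.append(Metric("lint", row["file"],
                                  "static_analysis_errors",
                                  float(row["errors"])))
            metrics.append(Metric("lint", row["file"],
                                  "static_analysis_warnings",
                                  float(row["warnings"])))
    return metrics
```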

How it Works

VectorCAST/Analytics provides user-configurable data connectors that capture key metrics such as static analysis errors, code complexity, code coverage, and testing completeness from VectorCAST or third-party tools. These base metrics can be combined into calculated metrics that identify hot spots in the code, such as functions with high complexity and low coverage. Displaying this information in a heat map view, where code coverage controls the box color and code complexity controls the box size, lets users see at a glance where to invest testing and refactoring resources for the best return on investment: big red boxes indicate highly complex functions that are poorly tested, as the sketch below illustrates.
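
The following Python sketch shows how per-function complexity and coverage could be combined into heat-map cells with a hot-spot score. The scoring formula, the color thresholds, and the sample function data are illustrative assumptions, not VectorCAST's actual calculation.

```python
# Illustrative hot-spot calculation -- assumed formula, not VectorCAST's.
# Each function's base metrics map to a heat-map cell where box size
# tracks complexity and box color tracks coverage, as described above.

def heat_map_cell(name: str, complexity: int, coverage_pct: float) -> dict:
    """Map one function's base metrics to heat-map display attributes."""
    if coverage_pct >= 80.0:
        color = "green"            # well tested
    elif coverage_pct >= 50.0:
        color = "yellow"           # partially tested
    else:
        color = "red"              # poorly tested
    return {
        "label": name,
        "size": complexity,        # box area grows with complexity
        "color": color,            # box color reflects coverage
        # Calculated metric: big red boxes (high complexity, low
        # coverage) score highest and deserve attention first.
        "hot_spot_score": complexity * (100.0 - coverage_pct),
    }

# Hypothetical (name, complexity, coverage %) data for three functions.
functions = [("parse_msg", 24, 31.0), ("init", 3, 95.0), ("route", 17, 58.0)]
cells = sorted((heat_map_cell(*f) for f in functions),
               key=lambda c: c["hot_spot_score"], reverse=True)
for c in cells:
    print(f'{c["label"]:<10} size={c["size"]:>3} '
          f'color={c["color"]:<6} score={c["hot_spot_score"]:.0f}')
```

Ranking cells by this kind of score reproduces the visual intuition of the heat map: `parse_msg`, a big red box, sorts to the top as the best place to spend testing and refactoring effort.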