Analytics features

This document explains Analytics features and how you can best use them to assess your project.

Overview

The TestOps Analytics module is the central hub for tracking, analyzing, and communicating testing performance and software quality across projects. It unifies dashboards, reports, and insights that turn raw execution data into actionable intelligence for QA managers, testers, and stakeholders.

testops reports and analytics home page

Instead of manually compiling metrics from multiple tools, TestOps consolidates data from connected ALM and CI/CD systems into standardized models. This enables consistent visibility across time-based and iteration-based perspectives, helping teams monitor testing health, release readiness, and long-term quality trends.

Core Features

Dashboards and Reports

TestOps provides dashboards for summaries and reports for deeper analysis.

  • Dashboards (e.g., Analytics & Trends, Release Readiness, Live Monitor) give a quick overview of test health and execution progress.
testops reports and analytics three dashboards
  • Reports (e.g., Test Runs Analysis, Defect Trends, Requirement Coverage) let you drill into details to identify issues or improvement opportunities.
testops reports and analytics twelve reports selection view

Together, dashboards and reports offer both high-level awareness and detailed investigation capability.

Scoping

All analytics in TestOps are project-based. Each project aggregates entities like test cases, executions, requirements, and defects drawn from integrated tools such as Jira and Azure DevOps.

Data from these entities is presented in one of the following perspectives, which you can choose in each dashboard or report:

  • Time-Based: to observe historical trends or productivity changes.
testops reports and analytics scope time
  • Iteration-Based (Release or Sprint): to evaluate iteration progress and quality.
testops reports and analytics scope iteration
  • Current: to assess all available data in the project.
note

Some reports and dashboards don't offer every scope, depending on their analysis goals.
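To make the time-based and iteration-based scopes concrete, here is a minimal Python sketch that aggregates the same execution records two ways: by calendar date and by sprint. The record fields (`date`, `sprint`, `status`) are hypothetical and are not the actual TestOps data model.

```python
from collections import defaultdict

# Hypothetical execution records; field names are illustrative only.
executions = [
    {"date": "2024-06-01", "sprint": "Sprint 12", "status": "passed"},
    {"date": "2024-06-01", "sprint": "Sprint 12", "status": "failed"},
    {"date": "2024-06-02", "sprint": "Sprint 13", "status": "passed"},
]

def group_by(records, key):
    """Aggregate pass counts under a chosen scope key (time- or iteration-based)."""
    counts = defaultdict(lambda: {"passed": 0, "total": 0})
    for r in records:
        bucket = counts[r[key]]
        bucket["total"] += 1
        if r["status"] == "passed":
            bucket["passed"] += 1
    return dict(counts)

time_based = group_by(executions, "date")        # trend across calendar dates
iteration_based = group_by(executions, "sprint") # progress per sprint
```

The same underlying records drive both views; only the grouping key changes, which is essentially what switching scope does.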

Filtering

Reports and dashboards include filters that let you narrow the data and compare filtered datasets (for example, automated vs. manual tests, or performance across test authors) to discover trends and anomalies. Each report and dashboard has its own set of filters, which you can select via the filter dropdown:

testops reports and analytics filters

There are two types of filters:

  • Default filters: Each report/dashboard comes with default filters by entity (for example, Test Case Status, Defect Priority, or Run Type), depending on the report's purpose.
  • Customizable field filters: You can configure customizable fields to add more attributes to testing entities, then filter by those attributes inside reports. They let you slice data further to match your project's needs (for example, regression testing) and uncover unexpected insights.

This gives you precise control over what data is shown, while keeping all widgets and reports synchronized to the same filter context.
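As an illustration of filter-based comparison, the following Python sketch applies a field filter to a set of run records and compares pass rates between the automated and manual subsets. The `type` and `status` field names are invented for the example, not taken from TestOps.

```python
# Hypothetical run records; field names are illustrative only.
runs = [
    {"type": "automated", "status": "passed"},
    {"type": "automated", "status": "failed"},
    {"type": "manual", "status": "passed"},
    {"type": "manual", "status": "passed"},
]

def pass_rate(records, **filters):
    """Apply exact-match field filters, then compute the pass rate of what remains."""
    subset = [r for r in records if all(r.get(k) == v for k, v in filters.items())]
    if not subset:
        return 0.0
    return sum(r["status"] == "passed" for r in subset) / len(subset)

automated = pass_rate(runs, type="automated")  # 1 of 2 passed
manual = pass_rate(runs, type="manual")        # 2 of 2 passed
```

Comparing the two numbers side by side is the kind of trend-or-anomaly check that filtered datasets enable.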

Custom Views

In any report, you can create custom views that store filter configurations for different perspectives, so you can switch between them without reconfiguring the filters each time. This is especially useful when you regularly configure one report for different purposes, such as focusing on failed automated tests, tests from specific suites, or activity within a particular sprint.

With custom views, every report becomes a personalized workspace that adapts to the way you analyze quality.

What you can do:

  • Save filter configurations: After applying filters in any report, save them as a new view with a descriptive name.
  • Switch between views instantly: Use the view selector at the top of the report to load saved filter combinations with a single click.
  • Set a default view: Mark your most-used view as the default so it automatically loads whenever you open that report.
  • Manage your views: Edit, clone, or delete saved views as your analysis needs change.
testops reports and analytics create new view
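Conceptually, a custom view is just a named filter configuration plus a default flag. The Python sketch below models the save / set-default / switch operations listed above; the class and field names are hypothetical, not TestOps internals.

```python
from dataclasses import dataclass

@dataclass
class SavedView:
    """A named filter configuration; illustrative model, not the TestOps schema."""
    name: str
    filters: dict
    is_default: bool = False

class ViewStore:
    """Holds saved views and tracks which one loads by default."""
    def __init__(self):
        self.views = {}

    def save(self, view):
        self.views[view.name] = view

    def set_default(self, name):
        # Only one view can be the default at a time.
        for v in self.views.values():
            v.is_default = (v.name == name)

    def default(self):
        return next((v for v in self.views.values() if v.is_default), None)

store = ViewStore()
store.save(SavedView("Failed automated", {"run_type": "automated", "status": "failed"}))
store.save(SavedView("Sprint focus", {"sprint": "Sprint 12"}))
store.set_default("Failed automated")
```

Switching views then amounts to loading a stored `filters` dict instead of rebuilding it by hand.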

Widgets and Visual Insights

Dashboards are composed of widgets: modular visual elements that summarize metrics from underlying reports. You can expand a widget to view more details and see the report it's linked to:

testops reports and analytics expanding dashboard widget testops reports and analytics navigate to report from dashboard widget

While widgets are static on dashboards, they become interactive in the report view: clicking a segment filters and refreshes the related data view below it. This allows a seamless transition from visual overview to analytical evidence.

testops reports and analytics interactive widget drill down

For the Analytics & Trends dashboard, you can customize the layout by adding more widgets pulled from reports, tailoring the dashboard to your team's needs.

testops reports and analytics customize dashboard button testops reports and analytics customize analytics and trends dashboard

Export and Sharing

Each dashboard or report can be exported to PDF, CSV, or Excel for record-keeping, or shared directly via a link. This simplifies collaboration and enables teams to review the same data during stand-ups, sprint reviews, or release planning.
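At its core, CSV export means serializing report rows into delimited text. Here is a minimal Python sketch using the standard `csv` module; the report columns are made up for the example and do not reflect actual TestOps export layouts.

```python
import csv
import io

# Hypothetical report rows; column names are illustrative only.
report_rows = [
    {"test_case": "TC-101", "status": "passed", "duration_s": 4.2},
    {"test_case": "TC-102", "status": "failed", "duration_s": 7.8},
]

def to_csv(rows):
    """Serialize report rows to CSV text for record-keeping or sharing."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

csv_text = to_csv(report_rows)
```

The resulting text can be written to a file or attached to a message, which is the sharing workflow the export feature automates.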

testops reports and analytics sharing button testops reports and analytics sharing board view
tip

Click Write with AI to prompt the agent to write a description for you before sharing.

AI features

The Analytics AI features turn raw test data into triage insights that speed up failure investigation and internal communication.

AI Briefing

The AI Briefing feature condenses report data into concise, insight-ready text for stakeholders. It captures key achievements, risks, and trends to support executive briefings and retrospectives with minimal effort.

testops reports and analytics ai briefing button testops reports and analytics ai briefing example

AI Analysis

The AI Analysis feature analyzes the full test-execution context (logs, traces, screenshots, and so on) and then generates a failure analysis with a summary of the failure and a suggested remedy.

AI Failure Grouping

The Analyze feature analyzes the execution context (logs, stack traces, and so on) to identify root-cause signals, then assigns each failure to a common category, helping you analyze automation error patterns.
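A rough intuition for failure grouping: match root-cause signals in the error message against known category patterns, then bucket failures by category. The Python sketch below uses simple regular expressions as stand-ins for the AI's richer analysis; the categories and patterns are illustrative only.

```python
import re

# Illustrative category patterns; real AI grouping uses richer execution context.
CATEGORIES = [
    ("Element not found", re.compile(r"NoSuchElement|element not found", re.I)),
    ("Timeout", re.compile(r"timed? ?out", re.I)),
    ("Assertion failure", re.compile(r"assert", re.I)),
]

def categorize(message):
    """Return the first category whose pattern matches the failure message."""
    for name, pattern in CATEGORIES:
        if pattern.search(message):
            return name
    return "Uncategorized"

failures = [
    "TimeoutError: page load timed out after 30s",
    "AssertionError: expected 200 got 500",
    "NoSuchElementException: #submit-btn",
]
groups = {}
for msg in failures:
    groups.setdefault(categorize(msg), []).append(msg)
```

Grouping failures this way surfaces recurring automation error patterns instead of a flat list of individual failures.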
