The table in the Report Library view lists the following information for all test reports.
| Column | Sub-column | Description | Displayed by default |
|---|---|---|---|
| Report Name | | The script name as supplied in the PerfectoReportiumClient testStart() method. The column includes a checkbox that you can use to select test reports for the Cross Report view. | Yes |
| Status | | The final status reported by the specific test run. The test status is one of: Passed, Failed, Blocked, Unknown. | Yes |
| Cross Report | | Indicates whether the test report is part of a cross-report. | Yes |
| History | | Indicates when this test run was executed relative to other runs of the same test. For details, see Test history information below. | Yes |
| Failure/Blocked Reason | | Indicates the reason the test failed, as detected either automatically by Smart Reporting heuristics or by the testers (see Implement the reporting framework). For more information, see below. | Yes |
| Platform | | Indicates whether tests were executed on Mobile or Web devices. May include an indication of the number of devices executed on. | Yes |
| | | Icon that identifies whether the testing device was a Mobile or Desktop Web device. | |
| | | List of devices used in the test, separated by commas. | |
| | | For mobile devices, the ID number of the testing device. | |
| Browser | | Browser version for Web devices. | Yes |
| OS | | List of operating system versions, coordinated with the devices list. | Yes |
| Resolution | | List of device resolutions, coordinated with the devices list. | Yes |
| Job | | Details of the CI job reported for this test. | No |
| | Name | Job name as reported in the execution context. | No |
| | #Number | Job number as reported in the execution context. | No |
| | Branch | Job branch as reported in the execution context. | No |
| Time | | Provides details of when the test was run and its duration. | Yes |
| | Start | Start time of the test run. | Yes |
| | Duration | Duration of the test run. | Yes |
| Tags | | Indicates the number of tags associated with the test run. | Yes |
| Lab | | Indicates whether the test was run in a Perfecto CQ Lab. | No |
| Automation Framework | | Indicates the automation framework supported by the device used by the automation script. | No |
Test history information
The history graph in the History column shows a series of nodes. Each node represents a single run of the test.
The History mechanism defines test similarity by test name and execution capabilities. If there is a difference in either the name or the capabilities (for example, the
osVersion capability is included in one test run but not in another), the tests are considered 'not similar'. As a result, they are not connected in history.
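The similarity rule described above, same test name and the same set of execution capabilities, can be sketched in Python. This is an illustrative model only, not Perfecto's actual implementation, and the field names (`name`, `capabilities`) are assumptions:

```python
def are_similar(run_a, run_b):
    """Two runs are connected in history only if they share the same
    test name and exactly the same execution capabilities."""
    return (run_a["name"] == run_b["name"]
            and run_a["capabilities"] == run_b["capabilities"])

# Example: the second run adds an osVersion capability, so the two
# runs are considered 'not similar' and are not connected in history.
run1 = {"name": "LoginTest", "capabilities": {"platformName": "Android"}}
run2 = {"name": "LoginTest",
        "capabilities": {"platformName": "Android", "osVersion": "12"}}
print(are_similar(run1, run2))  # False
```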
The test run whose details are described in this report is displayed as a double ring in the history. This makes it easier to identify when this test run was executed relative to other test runs. When reading the history graph, keep in mind the following:
- The latest test run is always represented by the right-most node.
- No more than five nodes appear in the history graph. If the specific run occurred prior to the five latest runs, the graph shows a break, represented by three dots.
- The color of the node represents the test result status for that particular run, where green means 'passed', yellow means 'blocked', and red means 'failed'.
Move the pointer over a node to display a tooltip with details for that run.
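The graph rules above (newest run right-most, at most five nodes, a break for older runs, color by status, double ring for the current run) can be illustrated with a small text-based sketch. The status-to-color mapping and rendering are assumptions for illustration, not the actual widget code:

```python
COLOR = {"passed": "green", "blocked": "yellow", "failed": "red"}

def render_history(runs, current_index):
    """Render a text version of the history graph: at most the five
    latest runs, newest right-most, '...' marking a break for older
    runs, and 'O' (double ring) marking the current run."""
    window = runs[-5:]
    start = len(runs) - len(window)
    nodes = []
    for i, status in enumerate(window, start=start):
        mark = "O" if i == current_index else "o"
        nodes.append(f"{mark}:{COLOR[status]}")
    prefix = ["..."] if len(runs) > 5 else []
    return " ".join(prefix + nodes)

print(render_history(
    ["passed", "failed", "passed", "blocked", "passed", "failed"],
    current_index=5))  # ... o:red o:green o:yellow o:green O:red
```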
Arrows around an icon identify test runs with scheduled retries that have been collapsed into a single test report.
The Scheduled Retries feature is turned off by default. To turn it on in your cloud instance, contact Perfecto Support.
For a test to be considered a retry, it must share the same parameters and CI job name and number or be part of the same execution. Perfecto does not list a test that is considered a retry in the table and does not take it into account when calculating statistics. Only the last test in a retry series makes it into the statistics. For more information, see STR.
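The collapsing rule above, where runs sharing the same parameters and CI job name and number count as one retry series and only the last run contributes to statistics, might be modeled as follows. This is an illustrative sketch with assumed field names, not Perfecto's implementation:

```python
def collapse_retries(runs):
    """Group runs that share parameters and CI job name/number; keep
    only the last run of each retry series, since earlier retries are
    hidden from the table and excluded from statistics."""
    latest = {}
    for run in runs:  # runs assumed to be in execution order
        key = (run["params"], run["job_name"], run["job_number"])
        latest[key] = run  # a later run overwrites its earlier retries
    return list(latest.values())

runs = [
    {"params": "p1", "job_name": "nightly", "job_number": 7, "status": "failed"},
    {"params": "p1", "job_name": "nightly", "job_number": 7, "status": "passed"},
    {"params": "p2", "job_name": "nightly", "job_number": 7, "status": "passed"},
]
# The first run is a retry of the second; only the last of each
# series survives for statistics.
print([r["status"] for r in collapse_retries(runs)])  # ['passed', 'passed']
```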
Smart Reporting is designed not only to provide the test data from your test runs, but also to allow you, as the tester or test manager, to better understand the results of your tests. If your test ends with a success status, you know that all is well. However, when a test fails, Smart Reporting may analyze the test data and provide a failure reason that indicates what caused the test to fail.
The Smart Reporting system supports generating this failure reason classification in either of two ways:
- Manually by the test script, based on the stage in the script execution when the test is determined to have failed.
- Automatically. Smart Reporting analyzes the entire test information and generates the reason based on a heuristic classification.
In either case, the failure reason is color-coded (where green means 'passed', yellow means 'blocked', and red means 'failed') and displayed in the Failure/Blocked Reason column of the test report table, to allow a quick overview of the different failed tests. If a test fails without reporting a failure reason, the Status column shows a red icon, but the Failure/Blocked Reason column remains blank.
The Smart Reporting system automatically identifies instances where the test failed before it was able to start. When such failures occur, the report is marked with a blocked failure reason, which appears in the Failure/Blocked Reason column in yellow, and the Status column shows a Blocked status. These blocked failure reasons, including the reason text and color, are completely controlled by the Smart Reporting system. Some examples of blocked failure reasons include:
- Device in use
- Device not connected
- Device not found
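A heuristic classifier of the kind described above might look like the following sketch. The blocked reason strings come from the examples in this section; the matching logic and function signature are assumptions for illustration, not Smart Reporting's actual heuristics:

```python
BLOCKED_REASONS = ("Device in use", "Device not connected", "Device not found")

def classify_failure(error_message, reported_reason=None):
    """Return (status, reason, color). A failure reason reported by the
    test script takes precedence; otherwise simple heuristics look for
    blocked-style failures that occurred before the test could start."""
    if reported_reason:
        return ("Failed", reported_reason, "red")
    for reason in BLOCKED_REASONS:
        if reason.lower() in error_message.lower():
            return ("Blocked", reason, "yellow")
    # A failure with no detectable reason: red status, blank reason.
    return ("Failed", "", "red")

print(classify_failure("ERROR: device not found on cloud"))
# ('Blocked', 'Device not found', 'yellow')
```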
You can move the pointer over the tag icon in the Tags column for a specific test to display a tooltip that lists all tags associated with the test.
This tooltip is an active tooltip. Clicking a tag filters the table to show only tests associated with that tag.
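Clicking a tag in the tooltip effectively applies a filter like the one sketched below (an illustrative Python model with assumed field names, not the actual UI code):

```python
def filter_by_tag(reports, tag):
    """Show only the test reports associated with the clicked tag."""
    return [r for r in reports if tag in r["tags"]]

reports = [
    {"name": "LoginTest", "tags": ["smoke", "android"]},
    {"name": "CheckoutTest", "tags": ["regression"]},
]
print([r["name"] for r in filter_by_tag(reports, "smoke")])  # ['LoginTest']
```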