
Overview

This view presents a detailed overview of the tests run within a selected period of time. The tests can be filtered by different parameters or by the time period in which they were run.

The detailed overview includes:

  • Statistical information for all listed tests.
  • Historical breakdown of when tests failed.
  • A table of all selected tests, including information on the parameters of each test run.

In addition, you can group the tests based on different parameters and display a statistical overview of the test results of the different groups.

Breakdown of Report Library Window

The basic grid view includes the following information areas and options:

Heading Bar

The bar across the top of the display (just below the top-level DigitalZoom tabs) displays:

  • Saved view selector - if you have any pre-configured views, select one to display the current data based on its configured selections.
  • Active filters - lists all of the currently selected filters and groupings for the list.
  • Search by test name field (see next section) - limits the list to only the results whose test name matches the search term (if any exist).

Search by Test Name

When there is a wide variety of tests listed in the Report Library grid, you can use the Search by Name field to isolate the set of tests you are interested in:

  • Start typing the name (or any sub-string of the name) and DigitalZoom will present a list of suggested available test names.
  • Either select the test name from the list of suggestions or finish typing the full name, then activate the search.
  • The list refreshes to display only the tests whose name matches the search term.
    • If no tests match the search term, the Report Library will notify you that there were no matches.

Statistic Charts

Just below the heading bar, there is a section that includes two statistical views of the listed tests:

  • Statistical Overview - general statistics of the currently listed tests, split into the number of passed, failed, blocked, and other tests.
  • Run History - overview of the dates when tests were run, color-coded to show the number of passed, failed, blocked, and other tests for each particular date.

This section can be hidden or displayed by toggling the Hide charts/Show charts button at the top right of the section.

Quick Filter by Status

Use the Statistical Overview counters to narrow the list of reports to those with a selected final status:

  • Click on one of the statistic counters (Failed, Passed, Blocked, Unknown) to filter the list to show only those reports with the selected status.
  • Click on the Tests counter to list all test reports.

For example, clicking a counter showing 26 Blocked filters the list to display only those reports whose status is Blocked.

Tests Details Table

The table, in the center of the Report Library View, lists all of the test reports with the following information in the columns:

  • Report Name - the script name as supplied in the PerfectoReportiumClient testStart() method (see the sketch after this list).
    • The column includes a checkbox, used to select test reports for the Cross Report view.
  • Status - the final status reported by the specific test run. The test status is one of: Passed, Failed, Blocked, Unknown.
  • Test-set indicator - indicator that the test report is included in a test-set.
  • History - indicates when this test run was executed relative to other runs of the same test. (See below for more detail.)
  • Failure/Blocked reason - indicates the reason the test failed, detected either automatically by DigitalZoom heuristics or supplied by the testers (see Implement the reporting framework). For more information, see below.
  • Platform - indicates if tests were executed on Mobile or Web devices. May include an indication of the number of devices the test executed on.
    • Form factor - identifies (in icon format) if the testing device was a Mobile or Desktop Web device.
    • Device - list of the devices used in the test, separated by commas.
    • Device ID - for mobile devices, provides the ID number of the testing device. (not displayed by default)
    • Browser - browser version for Web devices.
    • OS - list of operating system versions, coordinated with the devices list.
    • Resolution - list of device resolutions, coordinated with the devices list. (not displayed by default)
  • CI Job Information - details of the CI Job reported for this test. (not displayed by default)
    • Name - Job name as reported in the execution context. (not displayed by default)
    • Number - Job number as reported in the execution context. (not displayed by default)
    • Branch - Job branch as reported in the execution context. (not displayed by default)
  • Time - provides details of when the test was run and its duration.
    • Start - start time of the test run.
    • Duration - duration of the test run.
  • Tags - Indicates number of tags associated with the test run.
  • Lab - indicates if the test was run in a Perfecto CQ Lab. (not displayed by default)
  • Automation Framework - indicates the automation framework supported by the device used by the automation script. (not displayed by default)
  • Additional columns may be displayed containing the values of the custom fields associated with the tests.
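
Most of these column values originate in the reporting calls made by the test script rather than in the Report Library itself. The following is a minimal sketch based on the Java Reportium client; the package names and builder methods follow Perfecto's published samples but may vary by SDK version, and the test name and "team" custom field are hypothetical:

  import com.perfecto.reportium.client.ReportiumClient;
  import com.perfecto.reportium.model.CustomField;
  import com.perfecto.reportium.test.TestContext;
  import com.perfecto.reportium.test.result.TestResultFactory;

  public class ReportLibraryColumnsSketch {
      // The ReportiumClient is assumed to be created elsewhere from a PerfectoExecutionContext.
      public void runTest(ReportiumClient reportiumClient) {
          // Report Name column: the name passed to testStart().
          // Tags column: the execution tags attached via the TestContext.
          // Custom field columns: any CustomField entries attached here.
          TestContext testContext = new TestContext.Builder()
                  .withTestExecutionTags("Regression", "Sanity")
                  .withCustomFields(new CustomField("team", "mobile-qa")) // hypothetical custom field
                  .build();
          reportiumClient.testStart("SearchGoogle", testContext);

          // ... test steps ...

          // Status column: Passed, in this case.
          reportiumClient.testStop(TestResultFactory.createSuccess());
      }
  }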

Configuring the set of columns

Click on the Configure Grid button (on the right, just above the table) to display a menu of the columns that can be configured to appear in the Report Library grid.

Check the boxes next to the column names that should appear in the table. Note that after the columns listed above, there are entries for the different Custom Fields defined by the different tests.

The Report Name and Status columns are not configurable, and are always selected.

Test History Information

The history graph shows a series of "nodes" where each node represents a single run of the test. The test run whose details are described on this row of the grid is displayed as a double-ring in the history. This makes it easier to identify when this test run was executed relative to other test runs:

  • The latest test run is always represented by the right-most node.
  • No more than five nodes appear in the history graph. Therefore, if the specific run is "older" than the five latest runs, a break in the graph (represented by three "dots") is displayed.
  • The color of the node represents the test result status for that particular run.
  • By hovering over a node, a tooltip appears that provides details for that run.

Failure/Blocked Reason

DigitalZoom is designed not only to provide the test data from your test runs, but also to be a tool that allows you - the tester or the test-manager - to better understand the results of your tests. If your test ends with a success status, then you know that all is well. However, when a test fails, DigitalZoom may analyze the test data and provide a failure reason that indicates what caused the test to fail.

The DigitalZoom Reporting system supports generating this failure reason classification in either of two ways:

  • Manually, by the test script, based on the stage of the script execution at which the test was determined to have failed.
  • Automatically, by DigitalZoom, which analyzes the entire test information and generates the reason based on a heuristic classification.

In either case, the failure reason is color-coded and displayed in the Failure Reason column of the test reports grid, allowing a quick overview of the different failed tests.
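
For the manual case, a script can attach a predefined reason when stopping a failed test. This is a minimal sketch using the Java Reportium client, assuming the createFailure overload that accepts a failure reason name; "ElementNotFound" is a hypothetical reason name that must match a reason defined in your reporting administration settings:

  import com.perfecto.reportium.client.ReportiumClient;
  import com.perfecto.reportium.test.result.TestResultFactory;

  public class FailureReasonSketch {
      public void runTest(ReportiumClient reportiumClient) {
          try {
              // ... test steps ...
              reportiumClient.testStop(TestResultFactory.createSuccess());
          } catch (RuntimeException e) {
              // Attach the failure reason name so the Failure Reason column
              // is populated for this report ("ElementNotFound" is hypothetical).
              reportiumClient.testStop(TestResultFactory.createFailure(e.getMessage(), e, "ElementNotFound"));
              throw e;
          }
      }
  }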

Blocked Reasons

The DigitalZoom system automatically identifies instances where the test failed before it was able to start. When these failures occur, the report is marked with a blocked failure reason (which appears in the table as the Failure Reason, colored yellow) and the Status column shows a Blocked status. These blocked failure reasons (both the failure reason text and the color) are controlled completely by the DigitalZoom system. Some examples of blocked failure reasons include:

  • Device in use
  • Device not connected
  • Device not found

Tags Column

By hovering over the Tags value for a specific test, a tooltip listing all the tags associated with the test is displayed.

This tooltip is an active tooltip - clicking on one of the tags filters the list of tests to all tests that have that tag associated with them.

Note: Selecting a tag from the tooltip replaces any existing tag filter with the single tag that was selected.

Filtered Grid

When a filter is applied - for example, based on the "Regression" tag - the list displays only the matching tests, and the active filters area in the heading bar indicates the filtering.

Use the tags defined by the different context classes to better focus the analysis of the test results by filtering the tests to the subset of interest (a sketch of defining context tags follows). See the DigitalZoom Best Practices document for how to optimize your test tags for efficient test analysis.
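
As a rough sketch of where such tags come from (Java Reportium client; the builder names follow Perfecto's published samples and may vary by SDK version, and the project and job values are hypothetical), context tags are declared once when the reporting client is created and are then attached to every test it reports:

  import com.perfecto.reportium.client.ReportiumClient;
  import com.perfecto.reportium.client.ReportiumClientFactory;
  import com.perfecto.reportium.model.Job;
  import com.perfecto.reportium.model.PerfectoExecutionContext;
  import com.perfecto.reportium.model.Project;
  import org.openqa.selenium.WebDriver;

  public class ContextTagsSketch {
      public ReportiumClient createClient(WebDriver driver) {
          // Every test reported through this client carries the "Regression"
          // context tag, so the Report Library can filter on it. The Job values
          // feed the CI Job Information columns described above.
          PerfectoExecutionContext context = new PerfectoExecutionContext.PerfectoExecutionContextBuilder()
                  .withProject(new Project("My Project", "1.0"))
                  .withJob(new Job("nightly-build", 42))
                  .withContextTags("Regression")
                  .withWebDriver(driver)
                  .build();
          return new ReportiumClientFactory().createPerfectoReportiumClient(context);
      }
  }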

Sidebar

Use the sidebar to:

  • Filter the list of displayed test runs, to limit the list to those that comply with the selected parameters or time-frame.
  • Create groups of test runs to display in the groups overview.

For more information on the sidebar, see the sidebar documentation.