The Perfecto DigitalZoom Reporting system is a methodology for optimizing the visibility of test results, giving you a clearer picture of your test runs. To gain the maximum benefit from the Reporting system, this page presents an overview of the basic concepts incorporated in the system.

Tags

One of the central tools used by the system is the ability to attach a set of tags either to an entire test execution or to a specific subset of the tests. The tags may later be used to create different cross-sections of the test results and to focus on a specific subset of test reports.

Best Practice

It is important to use tags that are meaningful and valuable to your teams. Tags should be classified into different buckets:

  1. Execution level tags - Tags that are configured across all tests in a particular execution.
  2. Single Test tags - Tags that identify specific tests that may exercise a particular subset of the application.
  3. Logical Step names - Identify the different logical execution steps of the test.

To learn more about how to optimize your tags for easier report analysis, see the DigitalZoom Best Practices document.

Using Context Classes for Tags

The Reporting SDK includes two Context classes used to declare parameters associated with different sets of tests.

PerfectoExecutionContext Class

The PerfectoExecutionContext class is used to associate different settings with the general test execution. The following settings can be declared with this class:

  • Project identifier - use the "withProject()" method to set a label of the execution to associate with a particular testing project.
  • Job identifier - use the "withJob()" method to associate the job identifier (especially when running tests within a CI tool) of a particular run with an execution.
  • Tags - use the "withContextTags()" method to declare a set of tags that will be associated with all of the tests run in this execution.

All of these settings are associated with all tests that are executed within the entire Execution unit. This association could be used to filter all of the tests relative to a particular execution when examining the actual reports (see below for more information).

In addition, the class supports the following two methods:

  • withWebDriver() - defines the automation driver from which the Reporting system will collect the execution data and artifacts.
  • build() - generates the settings data so that it can be associated with the Reporting execution.
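
For illustration, the following sketch shows how these methods can be combined to build the execution context and create a ReportiumClient. The Project and Job wrapper objects and the ReportiumClientFactory are shown here as they typically appear in Reportium Java SDK samples; verify the exact class names and signatures against the SDK version you are using.

Building a PerfectoExecutionContext (sketch)
// driver is the automation driver (for example, a RemoteWebDriver) already created for the test
PerfectoExecutionContext executionContext = new PerfectoExecutionContext.PerfectoExecutionContextBuilder()
    .withProject(new Project("MyProject", "1.0"))      // project identifier
    .withJob(new Job("Nightly-Regression", 45))        // CI job name and number (illustrative values)
    .withContextTags("Sanity", "Android")              // tags applied to every test in this execution
    .withWebDriver(driver)                             // driver to collect execution data and artifacts from
    .build();
ReportiumClient reportiumClient = new ReportiumClientFactory().createPerfectoReportiumClient(executionContext);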

TestContext Class

The TestContext class supports defining a separate set of tags that can be associated with a particular Test execution. The specific tags are provided to the instance constructor as a comma-separated list.

Using the TestContext Constructor
reportiumClient.testStart("myTest", new TestContext("Sanity", "NewNote", "Android"));

Job Identification Tags

When using Continuous Integration tools to automatically generate and run your tests, for example when performing regression testing, the CI tool will associate a Job Name and Job Number with each run of the process. Import these identifiers into your tests and tag the tests to associate the Single Test Reports with these CI Jobs, as sketched below. Learn how to import the job identifiers for different CI Tools here. Then view the Test summary reports in the CI Dashboard.
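
For example, Jenkins exposes the job name and build number of the current run as the JOB_NAME and BUILD_NUMBER environment variables. A minimal sketch of passing them into the execution context, using the same Job wrapper shown above, could look like:

Importing Jenkins job identifiers (sketch)
// Read the identifiers that Jenkins sets for the current build
String jobName = System.getenv("JOB_NAME");         // e.g. "Nightly-Regression"
String jobNumber = System.getenv("BUILD_NUMBER");   // e.g. "45"
PerfectoExecutionContext executionContext = new PerfectoExecutionContext.PerfectoExecutionContextBuilder()
    .withJob(new Job(jobName, Integer.parseInt(jobNumber)))
    .withWebDriver(driver)
    .build();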

Test Steps

Within each test, it is possible to group the different executed commands into logical steps. Each step can be identified by a step name, and the step name will be displayed in the test report together with any data and artifacts (for example, a screenshot or video) associated with the commands executed by the step. A step is opened with the ReportiumClient stepStart() method, and a result for the step is attached with the stepEnd() method.
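
A minimal sketch of this pattern, assuming a Selenium driver and placeholder element locators, might look like:

Grouping commands into logical steps (sketch)
reportiumClient.stepStart("Login to the application");
driver.findElement(By.id("username")).sendKeys("myUser");      // placeholder locators for illustration
driver.findElement(By.id("password")).sendKeys("myPassword");
driver.findElement(By.id("loginButton")).click();
reportiumClient.stepEnd();   // closes the step; a result message can be attached, depending on the SDK version

reportiumClient.stepStart("Create a new note");
// ... commands that exercise the feature ...
reportiumClient.stepEnd();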

Viewing the Reports

When logging into the Reporting Server, your test results are presented in four tabs, each presenting a different view of the result data:

  • CI Dashboard - Displays the history of the test results, grouped based on the CI job name identifier.
  • Heatmap - Gives a graphic overview of different cross-sections of the test results. Test results can be grouped by two levels of characterizations - devices or specific tags. The different groups are color-coded based on the distribution of "passed" or "failed" tests.
  • Report Library - Presents a grid listing of all test reports for the Perfecto Lab. The listing may be filtered to focus on the set of tests with a particular tag. The tags may be the execution-level context tags or the specific test tags.
  • Admin - Supports performing different administration-level tasks. This tab is active only for users with administration privileges.

CI Dashboard View

When using a Continuous Integration (CI) tool, for example Jenkins, each test-set is assigned a Job Name and, usually, a Job Number. The CI Dashboard displays an overview of the history of the tests, listed by their Job Name. The display includes the following information:

  • The Job Name and the details of the selected test-run.
  • A statistical overview of the selected test-run results -
    • Number of tests included
    • Number of tests that reported "passed"
    • Number of tests that reported "failed"
    • Number of tests that did not report a final status or reported an indeterminate result.
  • A set of bars that (graphically) represent the history of the Job's test-runs.
    • Each bar represents the distribution of the test results for the particular run.
  • A line graph that represents the history of the Job's test-runs' duration.
    • Each node on the line represents the duration for the particular run.

History Information

The Result History bars and Duration History nodes are coordinated - each bar represents the same job as the corresponding node. Selecting either one of the bars or one of the nodes will select that run and will update:

  • The tooltip details, which include:
    • Job Number
    • Date the test-run was executed
    • Duration of the test run
  • The statistical overview, which will show the actual test results for the selected test-run.

Heatmap View

The Heatmap presents an overview of the test results. The results are displayed as color-coded cross-sections (see the Color code key) of tests. Each block in the display represents a group of tests.

The tests are grouped by a primary grouping and (possibly) split into a secondary grouping. Select the primary and secondary grouping properties in the Grouping Selector. The selected group properties are displayed at the top of the dashboard (Selected Grouping area in the figure). The size of each group area in the main display is relative to the number of tests it represents.

The following is the same dashboard where the cross-section is redefined to be Test (primary) and Device (secondary).

Narrow the number of test results included in the display by filtering the results, in the Filter Selection area, by:

  • Browser
  • Devices
  • Device Type
  • Operating System
  • Tags
  • Results

When hovering over any of the groups - the system presents the statistical summary of the tests in that group:

Report Library View

The basic grid view includes the following information areas and options:

  • Active filters - lists all of the currently selected filters and groupings for the list.
  • Statistic Overview - general statistics of the currently listed tests, split into the number of passed, failed, and other tests.
  • Run History - overview of dates when tests were run
  • Tests List - the list of all the individual tests; to dive into the Single Test Report (STR), click on the specific test. Each test is characterized by:
    • Status - the final status reported by the specific test run.
    • History - indicates when this test run was executed relative to other runs of the same test. (See below for more detail.)
    • Platform - Indicates if tests were executed on Mobile or Web devices. May include an indication of the number of devices executed on.
      • Device - List of the devices used in the test. Devices are separated by commas.
      • Browser - browser version for Web devices.
      • OS - List of operating system versions, coordinated with devices list.
      • Resolution - List of device resolutions, coordinated with devices list.
    • Time - Provides details of when test was run and duration.
      • Start - Start time of the test run
      • Duration - duration time of the test.
    • Tags - Indicates number of tags associated with the test run.
  • Sidebar - interface to group or filter the list of tests. You can filter based on -
    • Browser used in test
    • Device capabilities of the device used
    • Tags set for the test

Test History Information

The history graph shows a series of "nodes" where each node represents a single run of the test. The test run whose details are described on this row of the grid is displayed as a double-ring in the history. This makes it easier to identify when this test run was executed relative to other test runs:

  • The latest test run is always represented by the right-most node.
  • No more than five nodes appear in the history graph. Therefore, if the specific run is "older" than the five latest runs a break in the graph (represented by three "dots" - see, for example, the SearchGoogle test in the figure above) will be displayed.
  • The color of the node represents the test result status for that particular run.
  • By hovering over a node, a tooltip appears that provides details for that run.

Filtered Grid

The following shows a list filtered on the "Regression" tag; note the indications of the filtering.

Use the tags defined by the different context classes to better focus the analysis of the test results by filtering the tests to the subset of interest. See the DigitalZoom Best Practices document to learn how to optimize your test tags for efficient test analysis.

Single Test Report View

Clicking on a test in the Tests List opens the specific Single Test Report (STR):

The left panel of the STR View shows the list of the logical steps (as named in the stepStart() method).

Reports for Native automation executions that activate nested scripts will include the steps of both the main script and the steps of the nested script. The commands of the nested scripts are identified by a special symbol ("</>").

Clicking on any of the logical steps will present a view of the artifacts (video, screenshots, expected vs actual values) associated with the particular command/step.

Tests on multiple devices

When an automation test script allocates multiple devices to run the test, the reporting system gathers artifacts (screenshots, video) from all of the devices involved. At the completion of the test execution, the reporting system generates a single report for the test run that includes the artifacts from all devices.

Multiple Devices in Report Library View

Test-runs that activated multiple devices will be listed in the Report Library View with the following indications that multiple devices were involved:

  • Platform type column - indication of number of devices used.
  • Device column - list of all devices used.
  • OS column - list of all OS versions used, corresponding to the devices listed.
  • Resolution column - list of the device resolution for each device used.

Multiple Devices in Single Test Report View

When drilling down to the STR of a test that activated multiple devices -

  • The device button on the test status line will indicate the number of devices involved in the test run. Hovering over the button will open a tooltip that indicates the names and OS version of the devices involved.
  • Clicking on the button will display the Report Details window, with tabs for each of the devices involved in the test run.
  • The video shown in the visual report area will focus on only one of the devices, arbitrarily chosen from the available device video.
  • Screenshots are available from all of the devices.      

Navigating between views

To move from one view to another view, simply select one of the tabs in the header area.
