Last updated: Jun 27, 2018 13:27
The Perfecto DigitalZoom Reporting system is a methodology that optimizes the visibility of test results, giving a clearer picture of your test runs. To gain the maximum benefit from the Reporting system, this overview presents the basic concepts incorporated in it.
One of the central tools used by the system is the ability to attach a set of tags either to a test or to a specific subset of the test. The tags may later be used to create different cross-sections of the test results and to focus on a specific subset of test reports.
It is important to use tags that are meaningful and valuable to your teams. Tags should be classified into different buckets:
- Execution level tags - Tags that are configured across all tests in a particular execution.
- Single Test tags - Tags that identify specific tests that may exercise a particular subset of the application.
- Logical Step names - Identify the different logical execution steps of the test.
To learn more about how to optimize your tags for easier report analysis, see the DigitalZoom Best Practices document.
Using Context Classes for Tags
The Reporting SDK includes two Context classes used to declare parameters associated with different sets of tests.
The PerfectoExecutionContext class is used to associate different settings with the general test execution.
Instances of this class are created using a builder method - PerfectoExecutionContextBuilder. The following settings can be declared with this class, using the builder methods:
- Project identifier - use the "withProject()" method to set a label of the execution to associate with a particular testing project.
- Job identifier - use the "withJob()" method to associate the job identifier (especially when running tests within a CI tool) of a particular run with an execution.
- Tags - use the "withContextTags()" method to declare a set of tags that will be associated with all of the tests run in this execution.
- Custom Fields - use the "withCustomFields()" method to define a collection of <"name", "value"> pairs of information associated with all of the tests run in this execution.
All of these settings are associated with all tests executed within the entire Execution unit. This association could be used to filter all of the tests relative to a particular execution when examining the actual reports (see below for more information).
In addition, the class supports the following two methods:
- withWebDriver() - defines the automation driver that the Reporting system will use to collect the execution data and artifacts from.
- build() - activates the builder to configure the Reporting execution.
The TestContext class supports defining a separate set of data associated with a particular Test execution. Instances of the TestContext class are configured with the appropriate settings using the Builder() method:
- A list of specific tags may be provided to the instance builder, as a comma-separated list, using the withExecutionTags() method.
- Custom field <name, value> pairs may be provided, using the withCustomFields() method, as a collection of CustomField objects.
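The chained builder pattern that both context classes follow can be sketched with a small conceptual model. The sketch below is Python for illustration only (the actual Reporting SDK is Java), the class is a stand-in rather than the real PerfectoExecutionContextBuilder, and all project, job, tag, and field values are hypothetical:

```python
class ExecutionContextBuilder:
    """Illustrative stand-in for the SDK's execution-context builder (not the real class)."""

    def __init__(self):
        self._settings = {"tags": [], "custom_fields": {}}

    def with_project(self, name):
        self._settings["project"] = name
        return self  # returning self is what enables method chaining

    def with_job(self, name, number):
        self._settings["job"] = (name, number)
        return self

    def with_context_tags(self, *tags):
        # These tags will apply to every test in the execution.
        self._settings["tags"].extend(tags)
        return self

    def with_custom_fields(self, **fields):
        # <name, value> pairs shared by all tests in the execution.
        self._settings["custom_fields"].update(fields)
        return self

    def build(self):
        # In the real SDK this produces a PerfectoExecutionContext instance.
        return dict(self._settings)


# Hypothetical values, chained in the same style as the documented methods.
context = (ExecutionContextBuilder()
           .with_project("BankingApp")
           .with_job("nightly-regression", 42)
           .with_context_tags("sanity", "iOS")
           .with_custom_fields(tester="Judy")
           .build())
```

Each `with...()` call returns the builder itself, which is why the documented methods can be chained one after another before the final `build()` call.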
Often there may be a need to add information to a specific test that requires separate values for a tag. For example, to identify a particular tester you may want to define a "tester" tag that may be "John" for some tests and "Judy" for other tests. Custom Fields allow you to add tags that are <name, value> pairs. The Custom Fields can be used to identify and filter your different test runs.
The Custom Fields may be set at the Execution Context instance, at the Test Context instance, or (if you are using a CI tool) as part of the JVM command-line parameters. The order of precedence for conflicting Custom Field values is:
- JVM values have highest precedence
- TestContext values have second precedence
- ExecutionContext values have lowest precedence.
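This precedence order can be modeled as an ordered merge, applying the lowest-precedence source first so that higher-precedence sources overwrite it. The function and field values below are illustrative, not part of the SDK:

```python
def resolve_custom_fields(execution_fields, test_fields, jvm_fields):
    """Merge custom fields from the three sources; later dicts override
    earlier ones, so JVM values win over TestContext values, which win
    over ExecutionContext values."""
    merged = {}
    for source in (execution_fields, test_fields, jvm_fields):
        merged.update(source)
    return merged


# Hypothetical conflict: "tester" is set at two levels, "team" at two levels.
fields = resolve_custom_fields(
    {"tester": "John", "team": "mobile"},   # ExecutionContext (lowest)
    {"tester": "Judy"},                     # TestContext
    {"team": "web"},                        # JVM command line (highest)
)
print(fields)  # {'tester': 'Judy', 'team': 'web'}
```

Here the TestContext value "Judy" overrides the ExecutionContext value "John", and the JVM value "web" overrides the ExecutionContext value "mobile".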
When using Continuous Integration tools to automatically generate and run your tests, for example when performing regression testing, the CI tool will associate a Job Name and Job Number with each run of the process. In addition, you may have different versions of the tests defined in different branches of the repository. Import these identifiers into your tests and tag the tests to associate the Single Test Reports with these CI Jobs. Learn how to import the job identifiers for different CI Tools here. Then view the Test summary reports in the CI Dashboard.
Within each test, it is possible to group the different executed commands into logical steps. Each step can be identified by a step name, and the step name will be displayed in the test report together with any data and artifact (for example, a screenshot or video) associated with the commands executed by the step. Identifying the steps is part of the ReportiumClient stepStart() method and attaching a result for the step is part of the stepEnd() method.
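The pairing of stepStart() and stepEnd() can be pictured with a minimal model that groups recorded commands under named steps. This is an illustrative Python toy (the real ReportiumClient is a Java class, and the step and command names are hypothetical):

```python
class StepRecorder:
    """Toy model of grouping executed commands into named logical steps."""

    def __init__(self):
        self.steps = []      # list of (step_name, commands) tuples
        self._current = None

    def step_start(self, name):
        # Opens a new logical step; subsequent commands belong to it.
        self._current = (name, [])
        self.steps.append(self._current)

    def record_command(self, command):
        self._current[1].append(command)

    def step_end(self):
        # Closes the current step (the real SDK attaches a result here).
        self._current = None


recorder = StepRecorder()
recorder.step_start("Login")
recorder.record_command("enter username")
recorder.record_command("enter password")
recorder.step_end()
recorder.step_start("Checkout")
recorder.record_command("tap buy")
recorder.step_end()
print([name for name, _ in recorder.steps])  # ['Login', 'Checkout']
```

In the report, each named group appears as one logical step, with the artifacts of its commands attached under that step name.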
Viewing the Reports
When logging into the Reporting Server, your test results are presented with four tabs that present different views of the result data:
- CI Dashboard - Displays the history of the test results, grouped based on the CI job name identifier.
- Heatmap - gives a graphic overview of different cross-sections of the test results. Test results can be grouped by two levels of characterizations - devices or specific tags. The different groups are color-coded based on the distribution of "passed" or "failed" tests.
- Report Library - presents a grid listing of all test reports for the Perfecto Lab. The listing may be filtered to focus on the set of tests with a particular tag. The tags may be the execution-level context tags or the specific test tags.
- Admin - Supports performing different administration-level tasks. This tab is active only for users with administration privileges.
CI Dashboard View
When using a Continuous Integration (CI) tool, for example Jenkins, each test-set is assigned a Job Name and, usually, a Job Number - the CI Dashboard displays an overview of the history of the tests listed by their Job Name. The display includes the following information:
- The Job Name and the details of the selected test-run.
- A statistical overview of the selected test-run results -
- Number of tests included
- Number of tests that reported "passed"
- Number of tests that reported "failed"
- Number of tests that did not report a final status or reported an indeterminate result.
- A set of bars that (graphically) represent the history of the Job's test-runs.
- Each bar represents the distribution of the test results for the particular run.
- A line graph that represents the history of the Job's test-runs' durations.
- Each node on the line represents the duration for the particular run.
The Result History bars and Duration History nodes are coordinated - each bar represents the same run as the corresponding node. Selecting either one of the bars or one of the nodes will select that run and will update:
- The details listed in the tooltip, which include:
- Job Number
- Date that the test-run was executed
- Duration of the test-run
- The statistical overview, which will show the actual test results for the test-run.
Branches By Job Tab
Each CI Job (see above) can also be divided into different branches that may, for example, define a different flavor of test for the features under test. If the tester declared a test run to be associated with a particular branch (by using the withBranch method or the reportium-job-branch command-line parameter), then the job and its branches will be displayed in the Branches by Job tab.
The tab displays a separate line for each job that includes:
- Number of branches in the job
- Summary of the status of the most recent branch run.
Clicking on the expansion button (left side of the row) or the job name opens a history overview of all the job's branches (similar to the job history overview described above), with one line for each branch of the job.
Heatmap View
The Heatmap presents an overview of the test results. The results are displayed as color-coded cross-sections (see the Color code key) of tests. Each block in the display represents a group of tests.
The tests are grouped by a primary grouping and (possibly) split into a secondary grouping. Select the primary and secondary grouping properties in the Grouping Selector. The selected group properties are displayed at the top of the dashboard (Selected Grouping area in the figure). The size of each group area in the main display area is displayed relative to the number of tests represented.
The following is the same dashboard where the cross-section is redefined to be Test (primary) and Device (secondary):
Narrow the number of test results included in the display by filtering the results by:
- Device Type
- Operating System
Both filters are available in the Filter Selection area.
When hovering over any of the groups - the system presents the statistical summary of the tests in that group:
Report Library View
The basic grid view includes the following information areas and options:
- Active filters - lists all of the currently selected filters and groupings for the list.
- Search by test name field (see below) - field supports listing only the results whose test name matches the request (if any exist).
- Statistic Overview - general statistics of the currently listed tests, split into the number of passed, failed, and other tests.
- Run History - overview of dates when tests were run
- Tests List - the list of all the individual tests; to drill into the Single Test Report (STR), click on the specific test. Each test is characterized by:
- Status - the final status reported by the specific test run.
- History - indicates when this test run was executed relative to other runs of the same test. (See below for more detail.)
- Platform - Indicates if tests were executed on Mobile or Web devices. May include an indication of the number of devices executed on.
- Device - List of the devices used in the test, separated by commas.
- Browser - browser version for Web devices.
- OS - List of operating system versions, coordinated with devices list.
- Resolution - List of device resolutions, coordinated with devices list.
- Time - Provides details of when test was run and duration.
- Start - Start time of the test run
- Duration - duration time of the test.
- Tags - Indicates number of tags associated with the test run.
- Sidebar - interface to group or filter the list of tests. You can filter based on -
- Browser used in test
- Device capabilities of the device used
- Tags set for the test
Test History Information
The history graph shows a series of "nodes" where each node represents a single run of the test. The test run whose details are described on this row of the grid is displayed as a double-ring in the history. This makes it easier to identify when this test run was executed relative to other test runs:
- The latest test run is always represented by the right-most node.
- No more than five nodes appear in the history graph. Therefore, if the specific run is "older" than the five latest runs a break in the graph (represented by three "dots" - see, for example, the SearchGoogle test in the figure above) will be displayed.
- The color of the node represents the test result status for that particular run.
- By hovering over a node, a tooltip appears that provides details for that run.
By hovering over the Tags value for a specific test, a tooltip listing all the tags associated with the test is displayed.
This tooltip is an active tooltip - clicking on one of the tags will filter the list of tests to all tests that have that tag associated with them.
The following shows a filtered list - based on the "Regression" tag: note the indications of the filtering.
Use the tags defined by the different context classes to better focus the analysis of the test results by filtering the tests to the subset of interest. See the DigitalZoom Best Practices document on how to optimize your test tags for efficient test analysis.
Search by Test Name
When there is a wide variety of tests listed in the Report Library grid, you can use the Search by Name field to isolate the set of tests you are interested in:
- Start typing the name (or any substring of the name) and DigitalZoom will present a list of suggested available test names.
- Either select the test name from the list of suggestions or complete typing the requested name, and activate the search.
- The list refreshes to display only the tests whose name matches the search term.
- If no tests match the search term, the Report Library will notify you that there were no matches.
Single Test Report View
Clicking on a test in the Tests List opens the specific Single Test Report (STR):
Logical Steps list
The left panel of the STR View shows the list of the logical steps (as named in the stepStart() method).
Reports for Native automation executions that activate nested scripts will include the steps of both the main script and the nested script. The commands of the nested scripts are identified by a special symbol ("</>").
Clicking on any of the logical steps will present a view of the artifacts (video, screenshots, expected vs actual values) associated with the particular command/step.
Click on the commands of the test to display detailed information regarding the command execution, including -
- Timer information - Displays the Perfecto Timer and UX Timer values when the command was executed.
- Parameter information - Identifies -
- Device used for the command
- If the command accessed a UI Element - identifies the element
- If the command inserted text - provides the text sent to the UI element.
Note: If the text was sent as a Secured String - then the text value will appear as: "***"
- Other information - may include parameters for visual analysis, assertion information, UI Element attribute values.
Artifact Visual area
The right panel of the STR View presents visual artifacts, for example screenshots or videos from the test run.
When viewing the video, the timeline includes indicators that highlight the times at which the logical steps occur. Hovering over any of these points displays a tooltip that identifies the corresponding logical step.
When the test script displayed in the STR generated an error message or failed, the error message will be displayed at the top of the Artifact Visual area.
At first, only the header line of the error message is displayed on a red background:
To see the complete error message, together with a stack dump (if relevant), click the "Pull Down" button at the bottom of the error message to reveal the full message:
STR Header Area
The Single Test Report header includes the top two rows of the STR. The header shows the following:
- The top line includes:
- Back to Report Grid button: This button reverts the display to Report Library View, regardless of any navigation to other test report views.
- Name of the current test.
- Second line includes:
- Test status - shows the status of the test run of this STR.
- History graph - shows five runs similar to the history graph in the Report Library View. Selecting one of the nodes of the graph navigates to the STR of the selected run. Tooltip provides information on the run for each node.
- Run information - Start time and duration information of the test's run.
- Device information - information on the device or devices used for the test run.
- Activate interactive session icon - Opens the device in a Perfecto Lab interactive session. If the device is not available, the Perfecto Lab will notify the user to select another device.
- Tags - list of tags associated with the test run.
- JIRA bug reporting icon - appears if DigitalZoom Reporting is integrated with JIRA. Supports entering bug reports directly as a JIRA issue.
- Report Details button - displays detailed information on the test run data, and device(s) data.
- Open Support Case - Connects directly to Perfecto Support to allow you to open a new incident.
- Download button - supports accessing and downloading the artifacts (video, log files) associated with the test run.
Viewing Report Details
Use the Report Details button in the upper right corner of the STR View (see in figure above). This displays a popup window with information details related to the Test Run.
The popup includes two (or more) tabs of information:
- EXECUTION tab - displays the data associated with the test run including:
- Basic execution data
- Job Information (see above)
- Project Information
- Custom Field names and values (see above)
- Any Tags associated with the run.
- DEVICE tab - displays information regarding the device used for the test. (For multiple device tests see next section.) Information includes:
- Device name and manufacturer
- Device ID
- OS version and firmware
Accessing Source Code from STR
Sometimes the test run does not complete as expected and reports a failure status. In many of these cases it is easier to understand what went wrong, or what needs to be fixed in the test script, if you can view the source code. This functionality depends on the tester supplying the information as described in the article here.
The source code will be displayed in a new browser tab. There are two access points for the source code:
- If the STR displays an error message, open the error message, and at the bottom there are links to open the source file display (see below).
- For all STRs, click on the Report Details button - at the bottom of the Details popup window there are links to open the source file display (see below).
Access source information links
There are three configured links to access source code information:
- Open commit link - Displayed if the perfecto.vcs.commit custom field was set by the test run.
- Open source file link - Displayed if the perfecto.vcs.filePath custom field was set by the test run.
- Open source file and commit link - Displayed if neither the perfecto.vcs.commit nor the perfecto.vcs.filePath custom field was set by the test run.
If both custom fields were set by the test run, both links 1 & 2 are displayed.
Hovering over link 3 will display a tooltip encouraging the user to set the custom fields for future test runs.
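The display logic for the three links can be summarized as a small decision function. This is an illustrative Python sketch (the function name is hypothetical; the custom-field keys follow the ones described for source-code access):

```python
def source_links(custom_fields):
    """Decide which source-access links the STR displays, based on
    which VCS custom fields were set by the test run."""
    has_commit = "perfecto.vcs.commit" in custom_fields
    has_file = "perfecto.vcs.filePath" in custom_fields
    links = []
    if has_commit:
        links.append("Open commit")
    if has_file:
        links.append("Open source file")
    if not has_commit and not has_file:
        # Shown as a prompt encouraging the tester to set the
        # custom fields for future test runs.
        links.append("Open source file and commit")
    return links


print(source_links({"perfecto.vcs.commit": "abc123"}))  # ['Open commit']
```

When both custom fields are set, the function returns both the Open commit and Open source file links, matching the behavior described above.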
The links are displayed in either:
- The Error Message - when you pull down to see the full error message, at the bottom of the display (as shown above).
- The Report Details window, at the bottom.
Displaying the source code
When clicking on one of the Open commit or Open source file links, DigitalZoom opens a new tab in the browser and navigates directly to the VCS display of either the commit or the source file.
Tests on multiple devices
When a Perfecto Native Automation test script allocates multiple devices to run the test, the reporting system gathers artifacts (screenshots, video) from all of the devices involved. At the completion of the test execution, a single report is generated that includes the artifacts from all devices.
Multiple Devices in Report Library View
Test-runs that activated multiple devices will be listed in the Report Library View with the following indications that multiple devices were involved:
- Platform type column - indication of number of devices used.
- Device column - list of all devices used.
- OS column - list of all OS versions used, corresponding to the devices listed.
- Resolution column - list of the device resolution for each device used.
Multiple Devices in Single Test Report View
When drilling down to the STR of a test that activated multiple devices -
- The device button on the test status line will indicate the number of devices involved in the test run. Hovering over the button will open a tooltip that indicates the names and OS version of the devices involved.
- Clicking on the button will display the Report Details window, with tabs for each of the devices involved in the test run.
- The video shows all devices involved in the test run. Control which devices to display (or hide) using:
- The Show devices menu - checked devices are displayed, unchecked devices will not be displayed.
- The Remove device button - to stop displaying the individual device.
- Screenshots are available from all of the devices.
Navigating between views
To move from one view to another view, simply select one of the tabs in the header area.