Last updated: Dec 19, 2019 12:39
Digital quality challenges
When Perfecto engages with customers in market segments such as finance, retail, and insurance, customers typically raise the following issues:
- Organizations find it hard to triage failures after test executions (whether or not CI is involved).
- Planning, management, and optimization of test cycles based on proper insights is a significant challenge.
- Inconsistent test results and test flakiness are usually the root cause of project delays, product area blind spots, and coverage concerns in respective functional areas.
- Execution-based reports are too long to validate and examine. The ability to break long reports into smaller blocks is a necessary ingredient in faster triaging.
- An on-demand management view of the entire product quality, from top to bottom, is a hard-to-achieve goal, especially around large test suites.
- It is not possible to break long test execution into smaller test reports as a way to achieve faster quality analysis of issues.
Fig 1: Common challenges with digital quality
Test analysis with Perfecto Smart Reporting
This section provides a deep-dive into the available tags and how you can best use them to achieve these desired outcomes:
- Less flaky tests and stable execution
- On-demand quality visibility
- Test planning, management, and optimization
- Data-Driven decision making
When you start to build test execution suites, it is important to begin by pre-configuring custom tags that are meaningful and valuable to your teams. The tags should be classified into three buckets, as suggested in Table 1:
- Execution level tags
- Single Test Report tags
- Logical steps names
Suggested tagging for advanced digital quality visibility
| Execution Level Tags Categories | Single Test Report Tags | Logical Test Steps |
| --- | --- | --- |
| Test type: “Regression”, “Unit”, “Nightly”, “Smoke” | Test scenario identifiers: “Banking Check Deposit”, “Geico Login” | “Launch App” |
| Build number, Job, Branch | Test conditions: “Testing 2G conditions”, “Testing on iOS Devices”, “Testing Device Orientation”, “Testing Location Changes”, “Persona” | “Press Back” |
| CI server names | Teams: “iOS Team”, “Android Team”, “UI Team” | “Click on Menu” |
| Test framework associations: “Appium”, “Espresso”, “Selenium”, “UFT” | Platforms: “iOS”, “Android”, “Chrome”, “IOT” | “Navigate to page” |
| Test code languages: “Java”, “C#”, “Python” | | |
Table 1: Suggested custom tags with classification accelerate analysis
Looking at the above table, it makes sense for teams to define regression tags that cover only the relevant tests per functional area, as recommended in the Logical Test Steps column, while naming each test step to indicate what the test is doing. When implemented correctly, at the end of each execution, management and other relevant personas can easily correlate between the high-level suite, the middle-layer tested area, and, finally, the single test failure. Such governance and management of the entire suite supports better planning, triaging, and decision making.
With the use of proper tags, you can realize an additional benefit when implementing advanced CI. Perfecto’s Reporting SDK supports Jenkins and Groovy APIs for easy communication with the reporting suite. Filtering your test report by job number, build number, or release version becomes simple and provides proper insights on demand.
Fig 2: Combining the 3 Smart Reporting tags
1 | Get started with Smart Reporting and basic tagging
To start working with this technology, teams need to download the Reporting SDK and integrate it into their IDE of choice. The detailed setup instructions are available at the above link.
You can select one of the following methods to download and set up Smart Reporting:
- Directly downloading the Reporting SDK (as a .jar file for Java, Ruby Gem, etc.), as described in the document at the above URL.
- Setting the required dependency management tool (Maven, Gradle, Ivy) to download it for you.
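With Maven, for example, the dependency declaration looks roughly like the following sketch (the artifact coordinates follow the Perfecto Reportium Java SDK naming, and the version shown is a placeholder; check the setup documentation for the current one):

```xml
<!-- Perfecto Reportium Java SDK; version is a placeholder -->
<dependency>
    <groupId>com.perfecto.reportium</groupId>
    <artifactId>reportium-java</artifactId>
    <version>2.2.1</version>
</dependency>
```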
No more endless reports with hundreds or thousands of commands. The Reporting SDK enables teams to break the execution into a reasonable, digestible amount of content.
Create an instance of the reporting client
Use the following code to instantiate the Reporting client in the test code:
Fig 3: Smart Reporting client instantiation
Important: Create the reportiumClient in close proximity to the driver. In addition, create one reportiumClient instance per automation driver.
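A minimal instantiation sketch, assuming a Selenium RemoteWebDriver; the project name, version, job details, and tag values are placeholders:

```java
import com.perfecto.reportium.client.ReportiumClient;
import com.perfecto.reportium.client.ReportiumClientFactory;
import com.perfecto.reportium.model.Job;
import com.perfecto.reportium.model.PerfectoExecutionContext;
import com.perfecto.reportium.model.Project;
import org.openqa.selenium.remote.RemoteWebDriver;

public class ReportingSetup {
    // Call right after the RemoteWebDriver is created (one client per driver)
    static ReportiumClient createReportiumClient(RemoteWebDriver driver) {
        PerfectoExecutionContext context = new PerfectoExecutionContext.PerfectoExecutionContextBuilder()
                .withProject(new Project("My Project", "1.0")) // placeholder project name/version
                .withJob(new Job("Nightly", 45))               // placeholder CI job name/build number
                .withContextTags("Regression")                 // execution-level tag
                .withWebDriver(driver)
                .build();
        return new ReportiumClientFactory().createPerfectoReportiumClient(context);
    }
}
```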
When you instantiate the reporting client in your test code through a simple call, the Reporting SDK allows the test developer to wrap each test with the basic commands.
For each method annotated with @Test that identifies a test scenario, you can apply custom tags like "Regression" or "Unit", or functional-area-specific tags (by using the PerfectoExecutionContext class), together with the testStart, testStep, and testStop methods.
The following code shows a sample test that uses the above methods:
Fig 4: Sample implementation of test code with all supported methods
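A minimal sketch of such a test; the scenario name, tags, URL, and locator are illustrative, not taken from the figure:

```java
@Test
public void geicoLoginTest() {
    // Start the test and attach single-test tags
    reportiumClient.testStart("Geico Login", new TestContext("Regression", "Login"));
    try {
        reportiumClient.testStep("Navigate to the Geico home page"); // logical step
        driver.get("https://www.geico.com");

        reportiumClient.testStep("Click the login button");
        driver.findElement(By.id("login")).click(); // placeholder locator

        reportiumClient.testStop(TestResultFactory.createSuccess());
    } catch (RuntimeException e) {
        // Report the failure with its root cause, then rethrow for TestNG
        reportiumClient.testStop(TestResultFactory.createFailure(e.getMessage(), e));
        throw e;
    }
}
```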
As mentioned in the introduction, creating functional tests that do not utilize tags as a method of pre/post-test execution analysis and triaging usually results in lengthy and inefficient processes.
As seen in Fig 5, marking a set of tests with a context tag (withContextTags) makes it much easier, during test debugging and test execution, to filter for the tests relevant to that tag (in the example below, a “Regression” tag). In the same way, you can gather tests under a testing-type context named "Smoke", "UI", or "CI", or mark specific tests that cover a specific functional area, such as Login or Search. These tags help manage test execution flows and results, and they gather insights and trends across builds, CI jobs, and other milestones.
These are generic tags on the Driver (entire execution) level. They are added automatically to each test running as part of this execution.
Fig 5: Using Smart Reporting custom tags through ContextTags capabilities
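A sketch of attaching execution-level context tags when building the execution context; the tag values are illustrative:

```java
// During test setup, after creating the driver.
// Execution-level tags: added automatically to every test in this execution.
PerfectoExecutionContext context = new PerfectoExecutionContext.PerfectoExecutionContextBuilder()
        .withContextTags("Regression", "CI") // e.g., test type and execution context
        .withWebDriver(driver)
        .build();
ReportiumClient reportiumClient = new ReportiumClientFactory().createPerfectoReportiumClient(context);
```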
To add tags to a single test, we use the TestContext class and create the instance when starting the specific test.
These are specific tags on the single test (method/function) level. They will be automatically added to this test only.
Compared to the entire execution level tags demonstrated in Fig 5, the use of tags within a single test scenario would look as follows (Fig 6):
Fig 6: Using tags within a single test scenario
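A sketch of attaching tags to a single test through the TestContext instance passed to testStart; the scenario name and tag values are illustrative:

```java
// Single-test tags: added automatically to this test's report only
reportiumClient.testStart("Banking Check Deposit",
        new TestContext("Login", "Testing 2G conditions"));
```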
To get the report URL post execution and drill down into it, you would need to implement the following code:
Fig 7: Generating report URL sample code
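A sketch of retrieving the report URL through the SDK's getReportUrl() method, for example to print it to the console or CI log:

```java
// After testStop(), retrieve the URL of the execution report
String reportUrl = reportiumClient.getReportUrl();
System.out.println("Report URL: " + reportUrl);
```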
When using tags within a single test, as shown above, it is possible to distinguish and gain better flexibility when running a test in various contexts and under different conditions.
If you use the TestNG framework, we strongly recommend working with a TestNG listener so that all report statuses are reported and aggregated automatically.
When leveraging TestNG, you need to implement the ITestListener interface.
All test status results are then reported through the listener's callback methods.
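A minimal listener sketch; the ThreadLocal registration helper is a hypothetical convention for handing the ReportiumClient to the listener, not part of the SDK:

```java
import com.perfecto.reportium.client.ReportiumClient;
import com.perfecto.reportium.test.result.TestResultFactory;
import org.testng.ITestListener;
import org.testng.ITestResult;

public class ReportiumTestNgListener implements ITestListener {

    // Hypothetical storage: each test thread registers its ReportiumClient here
    private static final ThreadLocal<ReportiumClient> CLIENT = new ThreadLocal<>();

    public static void setClient(ReportiumClient client) {
        CLIENT.set(client);
    }

    @Override
    public void onTestSuccess(ITestResult result) {
        CLIENT.get().testStop(TestResultFactory.createSuccess());
    }

    @Override
    public void onTestFailure(ITestResult result) {
        Throwable t = result.getThrowable();
        CLIENT.get().testStop(TestResultFactory.createFailure(t.getMessage(), t));
    }

    // Remaining ITestListener methods (onTestStart, onTestSkipped, etc.) can be
    // left as empty overrides; in TestNG 7+ they have default implementations.
}
```

The listener is then registered through the @Listeners annotation or a `<listener>` entry in testng.xml.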
2 | Implement tags across application functional areas/test types
Now that we are clear on the environment setup for Smart Reporting, let’s look at how to structure a winning test suite that leverages tags and supports better planning and insights.
Perfecto created a getting-started example project (found in the Perfecto Git repository) that uses a set of RemoteWebDriver automated tests on Geico’s responsive website, running via TestNG on 3 platforms (Windows, Android, and iOS).
If you look at the example, you can see that adding a simple method with a tag called “Regression”, as seen in Fig 8 below, can help you start building better tests and triaging failures, as you’ll see later in this document.
Fig 8: Including a new tag in a test automation case
When the above “Regression” tag is included in the test class, any test cases like the above (Fig 8) are added and grouped under that tag.
One of the common use cases for using tags is the need to generate the same context for a group of test cases that are not executed from within CI and are used for other quality purposes. Another good example is setting the release version or sprint number so that it can later be used for comparison and trending illustration.
Drill down to a failure root cause analysis using tags
In the following example, we use the pre-defined Regression tag as part of our triaging to isolate the real issue. Figures 9-12 demonstrate the entire process up to the test code itself.
Fig 9: Applying tags within the dashboard view - 1st step in triaging
The above dashboard view enables filtering the entire suite to display only the results relevant to the Regression test scenarios. Moving the pointer over the failures bucket allows drilling down to the actual report library (grid) as shown in Fig 10.
Fig 10: The new “Regression” tag within the report library grouping 3 relevant test cases
In the above grid view, you can access a single test report and filter through the execution steps as needed. If we examine Fig 11, we can see a correlation between the test flow steps and the code in Fig 12.
Fig 11: Single Test Report test flow view
Fig 12: Smart Reporting testStep code example implementation
When using such tags in the report, the ability to group these tests post execution by tags and by a secondary filter, such as target platform (Web or Mobile), adds an additional layer of insight, as seen in Fig 13.
Group tests by tags and more
Fig 13: Grouping test reports by Tags and secondary filter option like Device
The custom view shown in Fig 13 has two levels of grouping: Tags and Devices. We included the Selenium tag (you can include or exclude tags via the filter as needed) and filtered by the specific devices of interest to us. The result is a custom view that merges the Selenium tag with the relevant devices we wish to examine.
Fig 14: Save custom view option
Assuming the view created in Fig 13 fits the organization and its various personas, and offers the right quality visibility, Smart Reporting supports saving this custom view as either a private or a shared view for future use (Fig 14).
3 | Implement multiple tags across several applications
Now that you understand how to set up Smart Reporting and work with the supported SDK methods (testStep, testStart, testStop) and tags, we can scale the method to multiple applications and various test scenarios and use cases.
As a first step, let’s create a new test class and include a new tag name.
In this specific case, we created a simple search test within the Perfecto responsive website. The test opens the Perfecto website, navigates to the search text box, and performs a search for the phrase “Digital Index”. This test is added to the existing testng.xml file used to execute the Geico example above. As you can see in Fig 15, the scenario is implemented with a newly added tag named “Perfecto Search”.
Fig 15: New test class with an added tag named “Perfecto Search”
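A sketch of such a test class; the locators and exact steps are illustrative, not taken from the actual project:

```java
@Test
public void perfectoSearchTest() {
    // Single-test tag grouping this scenario under "Perfecto Search"
    reportiumClient.testStart("Perfecto Site Search", new TestContext("Perfecto Search"));
    try {
        reportiumClient.testStep("Open the Perfecto responsive web site");
        driver.get("https://www.perfecto.io");

        reportiumClient.testStep("Navigate to the search text box");
        WebElement search = driver.findElement(By.name("search")); // placeholder locator

        reportiumClient.testStep("Search for the phrase 'Digital Index'");
        search.sendKeys("Digital Index");
        search.submit();

        reportiumClient.testStop(TestResultFactory.createSuccess());
    } catch (RuntimeException e) {
        reportiumClient.testStop(TestResultFactory.createFailure(e.getMessage(), e));
        throw e;
    }
}
```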
Once we have scaled our test suite, we can compare a non-tagged report with a tagged one. Fig 16 shows a full test execution of both the Regression and the Perfecto Search test scenarios, along with a long list of reports that is hard to navigate and analyze.
Fig 16: Grid view unfiltered and with no tags selected
When you want to drill down to only the two reports we tagged above, it is easy to get a subset report and then also drill down, as shown earlier in this document, to the single test report (STR). See Figs 17-20.
Fig 17: Complete cloud aggregated reports generated through Smart Reporting
Fig 18: Reports filtered by Tags and Failures Only
Fig 19: Reports filtered by failures only on Chrome browsers
Fig 20: Single test report with logical steps and details for the specific test failure
From Fig 20 above, if you want to investigate and triage the failure, you can easily access additional test artifacts, including environment details, videos, device vitals, a network PCAP file, PDF reports, logs, and more (see, for example, Fig 21).
Fig 21: Detailed persona single test report
4 | Implement logical steps within the test code
Logical steps are fundamental to bringing order and clarity to entire test scenarios. With that in mind, make sure that each test step in your test scenario is well documented through the Reporting SDK so that it appears clearly in the test reports. As can be seen in Fig 22, prior to each step, we document the logical action to make it easy to track once the execution is completed.
Fig 22: Sample java test steps for Geico responsive site
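A sketch of the pattern of documenting each logical action before the corresponding driver command; the locators and test data are illustrative:

```java
reportiumClient.testStep("Open the Geico home page");
driver.get("https://www.geico.com");

reportiumClient.testStep("Start an auto insurance quote");
driver.findElement(By.linkText("Get a Quote")).click(); // placeholder locator

reportiumClient.testStep("Enter the customer ZIP code");
driver.findElement(By.id("zip")).sendKeys("02451");     // placeholder locator and data
```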
If we execute the above example, we can see the logical steps side by side with the Java code that produced them, and drill into every step in the single test report. In addition, clicking a specific logical step brings up the visual for the specific command executed, as seen in Fig 23.
Fig 23: Detailed test step report with visuals, video, and more
What we have documented above should allow you to shift from a basic test report (a legacy Perfecto report, a TestNG report, or other) to a more customizable test report that, as we have demonstrated, allows you to achieve the following outcomes:
- Better structured test scenarios and test suites.
- Use of tags from early test authoring as a method for faster triaging and prioritizing fixes.
- Shifting of tag-based tests into planned test activities (CI, Regression, Specific functional area testing, etc.).
- Easy filtering of big test data and drilldown into specific failures per test, platform, test result, or through groups.
- Elimination of flaky tests through high-quality visibility into failures.
The result of the above is a methodology-based RTDD (reporting test-driven development) workflow that is easier to maintain.
To learn more, and be constantly up to date with Smart Reporting, bookmark the following URL: