Enabling Reporting Test Driven Development (RTDD) Workflow
As of this writing, the mobile market is quite mature. Most organizations are deeply invested in digital technologies spanning mobile, web, IoT, and more. With that investment comes an increased risk of losing business to poor quality, defects that leak into production, and late releases.
Among the challenges many enterprises face in assuring continuous digital quality are quality visibility, test planning and optimization, test flakiness, and false test results (false positives and negatives).
This paper addresses some of these challenges, their root causes, and a suggested approach to digital quality insight that works.
Digital Quality Challenges
When Perfecto engages with customers in market segments such as finance, retail, and insurance, those customers typically raise the following pain points:
- Organizations find it hard to triage failures after executions, whether the tests run within CI or outside it.
- Planning, managing, and optimizing test cycles based on proper insights is a significant challenge.
- Inconsistent test results and test flakiness are often the root cause of project delays, product-area blind spots, and coverage concerns in the affected functional areas.
- Execution-based reports are far too long to validate and examine; the ability to break long reports into smaller blocks is a necessary ingredient of faster triage.
- An on-demand management view of the entire product's quality, from top to bottom, is hard to achieve, especially with large test suites.
- Long test executions cannot be broken into smaller test reports, which prevents faster analysis of quality issues.
Fig 1: Digital Quality Common Challenges
Introducing Perfecto’s DigitalZoom™ Reporting & Tagging
Perfecto introduces an innovative product and methodology to optimize quality visibility using DigitalZoom™ Reporting. This tool empowers practitioners to build structured test logic that can be used for test reports. In addition, DigitalZoom™ leverages tags built into the tests from the initial authoring stage, as a driver for future test planning, defect triaging, scaling continuous integration (CI) testing activities, and more.
In this section, we will provide a deep-dive into the available tags and how they can best be used to achieve these desired outcomes:
- Less flaky tests and stable execution
- On-demand quality visibility
- Test planning, management, and optimization
- Data-Driven decision making
When starting to build test execution suites, it is important to begin by pre-configuring custom tags that are meaningful and valuable to your teams. The tags should be classified into three buckets, as suggested in Table 1:
- Execution level tags
- Single Test Report tags
- Logical steps names
Suggested Tagging for Advanced Digital Quality Visibility
| Execution-Level Tag Categories | Single Test Report Tags | Logical Test Steps |
| --- | --- | --- |
| Test type: "Regression", "Unit", "Nightly", "Smoke" | Test scenario identifiers: "Banking Check Deposit", "Geico Login" | "Launch App", "Press Back", "Click on Menu", "Navigate to page" |
| Build number, Job, Branch | "Testing 2G conditions", "Testing on iOS Devices", "Testing Device Orientation", "Testing Location Changes", "Persona" | |
| CI server names: "iOS Team", "Android Team", "UI Team" | | |
| Target platforms: "iOS", "Android", "Chrome", "IOT" | | |
| Test framework associations: "Appium", "Espresso", "Selenium", "UFT" | | |
| Test code languages: "Java", "C#", "Python" | | |
Table 1: Suggested Custom Tags with Classification Accelerates Analysis
Looking at the table above, it makes sense for teams to define regression tags that cover only the tests relevant to each functional area, as recommended in the Logical Test Steps column, with each test step named to indicate what the test is doing. When implemented correctly, at the end of each execution management and other relevant personas can easily correlate the high-level suite, the middle-layer tested area, and, finally, the single test failure.
Such governance and management of the entire suite supports better planning, triage, and decision making.
An additional benefit of proper tags can be realized when implementing advanced CI.
Perfecto's Reporting SDK supports Jenkins and Groovy APIs, which allow these tools to communicate well with the reporting suite. Filtering test reports by job number, build number, or release version becomes simple and provides proper insights on demand.
Fig 2: Combining The 3 DigitalZoom™ Tags
Step 1: Getting Started with DigitalZoom™ and Basic Tagging
To start working with this technology, teams need to download the Reporting SDK and integrate it into their IDE of choice.
The detailed setup instructions are available at the above link.
Basically, users can select one of the following methods to download and set up DigitalZoom™:
- Direct download of the Reporting SDK (as a .jar file for Java, a Ruby gem, etc.), as described in the document at the above URL.
- Set the required dependency management tool (Maven, Gradle, Ivy) to download it for you.
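For example, with Maven the dependency would look roughly like this (the coordinates below are the ones published for the Java Reporting SDK; confirm them and the current version in the setup document):

```xml
<dependency>
    <groupId>com.perfecto.reporting-sdk</groupId>
    <artifactId>reportium-java</artifactId>
    <!-- replace with the latest published version -->
    <version>2.x</version>
</dependency>
```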
No more endless reports with hundreds or thousands of commands: the Reporting SDK enables teams to break an execution into a reasonable, digestible amount of content.
Create an Instance of the Reporting Client
Use the following code to instantiate the Reporting Client in the test code:
Fig 3: DigitalZoom™ Reporting client instantiation
Important: The reportiumClient should be created in proximity to the driver.
- In addition, create one reportiumClient instance per automation driver.
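A sketch of that instantiation, based on the publicly documented Reportium Java SDK (class and method names may vary by SDK version; `driver` is assumed to be an already-created RemoteWebDriver, and the project name and tag are illustrative):

```java
import com.perfecto.reportium.client.ReportiumClient;
import com.perfecto.reportium.client.ReportiumClientFactory;
import com.perfecto.reportium.model.PerfectoExecutionContext;
import com.perfecto.reportium.model.Project;

// Build the execution context right after the driver is created, so the
// reporting client is bound to this driver's execution.
PerfectoExecutionContext context =
        new PerfectoExecutionContext.PerfectoExecutionContextBuilder()
                .withProject(new Project("My Project", "1.0")) // illustrative name/version
                .withContextTags("Regression")                 // execution-level tag
                .withWebDriver(driver)
                .build();
ReportiumClient reportiumClient =
        new ReportiumClientFactory().createPerfectoReportiumClient(context);
```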
Once a user instantiates the reporting client in their test code through a simple call, the Reporting SDK allows the test developer to wrap each test with the basic commands:
For each method annotated with @Test, which identifies a test scenario, users can apply custom tags such as "Regression" or "Unit", or functional-area-specific tags (using the PerfectoExecutionContext class), together with the following methods:
The following code shows a sample test that uses the above methods:
Fig 4: Sample implementation of test code with all supported methods
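Along the lines of Fig 4, a sketch of such a test using the SDK's documented lifecycle methods (testStart, stepStart, stepEnd, testStop); the test name, tags, and URL are illustrative, not taken from the original figure:

```java
@Test
public void perfectoSearchTest() throws Exception {
    // Open the test in the report, with single-test tags.
    reportiumClient.testStart("Perfecto Search", new TestContext("Smoke", "Search"));
    try {
        reportiumClient.stepStart("Navigate to the Perfecto web site"); // logical step
        driver.get("https://www.perfecto.io");
        reportiumClient.stepEnd();

        reportiumClient.stepStart("Run a search");
        // ... search interaction goes here ...
        reportiumClient.stepEnd();

        reportiumClient.testStop(TestResultFactory.createSuccess());
    } catch (Exception e) {
        // Mark the test as failed in the report, then rethrow for the runner.
        reportiumClient.testStop(TestResultFactory.createFailure(e.getMessage(), e));
        throw e;
    }
}
```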
As mentioned in the introduction, functional tests that do not utilize tags for pre- and post-execution analysis and triage usually lead to lengthy, inefficient processes.
As seen in Fig 5, marking a set of tests with a context tag makes it much easier, during both debugging and execution, to filter the tests relevant to that tag (in the example below, a "Regression" tag). In the same way, users can gather tests under a testing-type context such as "Smoke", "UI", or "CI", and also mark tests that cover a specific functional area such as Login or Search. These tags help manage test execution flows and end results, and make it possible to gather insights and trends across builds, CI jobs, and other milestones.
These are generic tags on the Driver (entire execution) level. They will be auto-added to each test running as part of this execution.
Fig 5: Using DigitalZoom™ custom tags through ContextTags capabilities
To add tags to a single test we use the TestContext class, and create the instance when starting the specific test.
These are specific tags on the single test (method/function) level. They will be auto-added only to this test.
Compared to the entire execution level tags demonstrated in Fig 5 above, the use of tags within a single test scenario would look as follows (Fig 6):
Fig 6: Using tags within a single test scenario
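In code, single-test tags are passed roughly as follows (a sketch; the test name and tag names are illustrative):

```java
// Tags passed via TestContext apply only to this test, on top of any
// execution-level context tags defined on the reporting client.
reportiumClient.testStart("Geico Login", new TestContext("Login", "Regression"));
```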
To get the report URL after execution and drill down, users need to implement the following code:
Fig 7: Generating report URL sample code
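A sketch of that call, assuming the SDK's getReportUrl() method:

```java
// After testStop(), fetch the URL of this execution's report so that
// CI logs can link straight to the DigitalZoom report for drill-down.
String reportUrl = reportiumClient.getReportUrl();
System.out.println("Report URL: " + reportUrl);
```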
When using tags within a single test as shown above, customers can distinguish tests and gain better flexibility when running a test under various contexts, conditions, and so on.
If you are using the TestNG framework, it is strongly recommended to work with a TestNG listener so that all report statuses are reported and aggregated automatically.
When leveraging TestNG, customers need to implement the ITestListener interface.
All test status results are then reported through the listener's methods.
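A sketch of such a listener; the getReportiumClient() accessor is hypothetical, standing in for however your project shares the client instance:

```java
public class ReportiumTestNgListener implements ITestListener {
    @Override
    public void onTestSuccess(ITestResult result) {
        // Report the passing status for this test to DigitalZoom.
        getReportiumClient().testStop(TestResultFactory.createSuccess());
    }

    @Override
    public void onTestFailure(ITestResult result) {
        // Attach the failure message and stack trace to the report.
        Throwable t = result.getThrowable();
        getReportiumClient().testStop(
                TestResultFactory.createFailure(t.getMessage(), t));
    }

    // Remaining ITestListener callbacks (onTestStart, onTestSkipped, ...)
    // can delegate to testStart or be left as defaults.
}
```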
Step 2: Implementing Tags Across Application Functional Areas/Test Types
Now that we are clear on the environment setup to use DigitalZoom™, let’s understand how to structure a winning test suite that leverages tags and supports better planning and insights.
Perfecto created a getting-started example project (found in the Perfecto Git repository) that uses a set of RemoteWebDriver automated tests on Geico's responsive web site, running via TestNG on three platforms (Windows, Android, and iOS).
If you look at the example, you can see that adding a simple method with a tag called “Regression” as seen in Fig 8 below, can help you start building better tests and triaging failures as you’ll see later in this document.
Fig 8: Including a new Tag in a test automation case
Once the "Regression" tag above is added to the test class, any test case like the one in Fig 8 will be grouped under that tag.
One common use case for tags is generating the same context for a group of test cases that are not executed from within CI and serve other quality purposes. Another good example is setting the release version or sprint number, which can later be used for comparison and trend illustration.
Drill Down to a Failure Root Cause Analysis using Tags
In the following example, we will use the pre-defined Regression tag as part of our triaging to isolate the real issue. Figures 9 - 12 demonstrate the entire process till the test code itself.
Fig 9: Applying tags within the dashboard view in DigitalZoom™ - 1st Step in Triaging
The dashboard view above enables filtering the entire suite to display only the results relevant to the Regression test scenarios. Hovering over the failures bucket allows drilling down to the actual report library (grid), as shown in Fig 10.
Fig 10: The new “Regression” tag within the report library grouping 3 relevant test cases
In the above grid view, users can access a Single Test Report and filter through the execution steps, as needed.
If we examine Fig 11, we can see a correlation between the test flow steps and the code in Fig 12.
Fig 11: Single Test Report test flow view
Fig 12: DigitalZoom™ testStep code example implementation
When using such tags in the report, the ability to group tests post-execution by tag, combined with a secondary filter such as target platform (Web or Mobile), adds another layer of insight, as seen in Fig 13.
Grouping Tests by Tags and More
Fig 13: DigitalZoom™ - grouping test report by Tags and secondary filter option like Device
The custom view shown in Fig 13 has two levels of group-by: tags and devices.
As Fig 13 shows, we included the Selenium tag (users can include or exclude tags via the filter as needed) and, in addition, created a filter for the specific devices of interest. The result is a custom view that merges the Selenium tag with the relevant devices we wish to examine.
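The two-level group-by can be illustrated with plain Java collections; the TestResult record and the sample data below are hypothetical stand-ins for report entries, not part of the DigitalZoom API:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class GroupByDemo {
    // Hypothetical stand-in for an executed test's report entry.
    public record TestResult(String name, String tag, String device) {}

    // Group results first by tag, then by device, mirroring the
    // Tags -> Devices group-by of the custom dashboard view.
    public static Map<String, Map<String, List<String>>> group(List<TestResult> results) {
        return results.stream().collect(Collectors.groupingBy(
                TestResult::tag,
                Collectors.groupingBy(TestResult::device,
                        Collectors.mapping(TestResult::name, Collectors.toList()))));
    }

    public static void main(String[] args) {
        List<TestResult> results = List.of(
                new TestResult("LoginTest", "Selenium", "Galaxy S9"),
                new TestResult("SearchTest", "Selenium", "iPhone X"),
                new TestResult("DepositTest", "Appium", "Galaxy S9"));
        System.out.println(group(results));
    }
}
```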
Fig 14: Save Custom View Option within DigitalZoom™
Assuming the view created in Fig 13 fits the organization's various personas and offers the right quality visibility, DigitalZoom™ supports saving this custom view as either a private or a shared view for future use (Fig 14).
Step 3: Implementing Multiple Tags Across Several Applications
Now that it is clear how to set up DigitalZoom™ and work with tags and the supported SDK methods (testStep, testStart, testStop), we can scale the method to multiple applications and various test scenarios and use cases.
As a first step, let’s create a new test class and include a new tag name.
In this specific case, we created a simple search test for the Perfecto responsive web site. The test opens the Perfecto web site, navigates to the search text box, and searches for the phrase "Digital Index". This test is added to the existing testng.xml file used to execute the Geico example above.
As you can see in Fig 15, there is an implementation of the above scenario with a newly added tag named “Perfecto Search”.
Fig 15: New test class with an added tag name “Perfecto Search”
Once we have scaled our test suite, we can examine a "non-tag based" report and a "tagged" one.
In Fig 16, after a full execution of both the Regression and the Perfecto Search test scenarios, we are left with a long list of reports that is hard to navigate and analyze.
Fig 16: DigitalZoom™ Grid view unfiltered and with no tags selected
When users want to drill down to only the two reports tagged above, it is easy to get a subset report and then drill down, as shown earlier in this document, to the Single Test Report (STR) (see Figs 17-20).
Fig 17: Complete cloud aggregated reports generated through DigitalZoom™
Fig 18: DigitalZoom™ reports filtered by Tags and Failures Only
Fig 19: DigitalZoom™ reports filtered by failures only on Chrome browsers
Fig 20: Single Test Report with logical steps and details for the specific test failure
From Fig 20 above, a developer who wishes to investigate and triage the failure can easily obtain additional test artifacts, including environment details, videos, device vitals, a network PCAP file, PDF reports, logs, and more (see, for example, Fig 21).
Fig 21: Detailed persona single test report
Implementing Logical Steps within The Test Code
Now that we are clear on using execution-level tags, let's turn to specifying logical steps. Logical steps are fundamental to injecting order and sense into entire test scenarios. With that in mind, make sure each test step in your test scenario is well documented via the Reporting SDK so it appears clearly in the test reports. As can be seen in Fig 22, prior to each step we document the logical action, making it easy to track once the execution is completed.
Fig 22: Sample java test steps for Geico responsive site
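The pattern might look like the following sketch, with stepStart/stepEnd wrapping each logical action; the locator and step names are illustrative, not taken from the actual Geico example:

```java
reportiumClient.stepStart("Navigate to the Geico home page");
driver.get("https://www.geico.com");
reportiumClient.stepEnd();

reportiumClient.stepStart("Enter a ZIP code to start a quote");
driver.findElement(By.id("zip")).sendKeys("02110"); // illustrative locator
reportiumClient.stepEnd();
```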
If we execute the above example, we can see side by side the logical steps as they were developed in Java in the above snippet and drill into every step in the Single Test Report. In addition, clicking on a specific logical step will bring up the visual for the specific command executed, as seen in Fig 23.
Fig 23: Detailed test step report with visuals, video, and more
What we have documented above should allow any practitioner to shift from a basic test report - whether a legacy Perfecto report, a TestNG report, or another - to a more customizable test report that, as demonstrated above, allows them to achieve the following outcomes:
- Better structured test scenarios and test suites.
- Use tags from early test authoring as a method for faster triaging and prioritizing fixes.
- Shift tag based tests into planned test activities (CI, Regression, Specific functional area testing, etc.).
- Easily filter big test data and drill down into specific failures per test, per platform, per test result, or through groups.
- Eliminate flaky tests through high-quality visibility into failures.
The result is a methodology-based RTDD workflow that is much easier to maintain than before.
To learn more, and to stay constantly up to date with DigitalZoom™, please bookmark the following URL.