
Enabling Reporting Test Driven Development (RTDD) Workflow

Tag-driven reports enable triage prioritization


As this paper is being written, the mobile market is quite mature. Most organizations are deeply invested in digital technologies spanning mobile, web, IoT, and more. With that investment comes an increased risk of losing business due to poor quality, defects that leak into production, and late releases.

Among the challenges many enterprises face in assuring continuous digital quality are quality visibility, test planning and optimization, test flakiness, and false test results (false positives and negatives).

This paper addresses some of these challenges, their root causes, and a suggested approach to digital quality insight that works.

Digital Quality Challenges

When Perfecto engages with customers in market segments such as finance, retail, and insurance, those customers typically raise the following pain points:

  • Triaging failures post execution (whether within CI or outside of it) is hard.
  • Planning, managing, and optimizing test cycles based on proper insights is a significant challenge.
  • Inconsistent test results and test flakiness are often the root cause of project delays, product-area blind spots, and coverage concerns in the respective functional areas.
  • Execution-based reports are far too long to validate and examine; the ability to break long reports into smaller blocks is a necessary ingredient of faster triaging.
  • An on-demand management view of the entire product's quality, from top to bottom, is hard to achieve, especially for large test suites.
  • Long test executions cannot be broken into smaller test reports as a way to achieve faster analysis of quality issues.



Fig 1: Digital Quality Common Challenges


Introducing Perfecto’s DigitalZoom™ Reporting & Tagging

Perfecto introduces an innovative product and methodology to optimize quality visibility using DigitalZoom™ Reporting. This tool empowers practitioners to build structured test logic that can be used for test reports. In addition, DigitalZoom™ leverages tags built into the tests from the initial authoring stage as a driver for future test planning, defect triaging, scaling continuous integration (CI) testing activities, and more.

With Perfecto's DigitalZoom™, customers can use the Reporting SDK and methodology within their framework and development language of choice: the SDK supports authoring tests in Java, JavaScript, C#, Python, Ruby, and HPE UFT, with support for IDEs such as Android Studio, IntelliJ, Eclipse, and Xcode.

In this section, we will provide a deep-dive into the available tags and how they can best be used to achieve these desired outcomes:

  1. Less flaky tests and stable execution
  2. On-demand quality visibility
  3. Test planning, management, and optimization
  4. Data-Driven decision making


When starting to build test execution suites, it is important to start by pre-configuring custom tags that are meaningful and valuable to your teams. The tags should be classified into 3 different buckets as suggested in Table 1.

  1. Execution level tags
  2. Single Test Report tags
  3. Logical steps names
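As an illustration of this classification, the three buckets can be modeled as plain data in test code. The class and tag names below are hypothetical and are not part of the Reporting SDK; this is just a sketch of how a team might keep its pre-configured tags in one place:

```java
import java.util.List;
import java.util.Map;

// Hypothetical helper that pre-configures custom tags in the three suggested
// buckets; all names here are illustrative, not part of the Reporting SDK.
public class TagTaxonomy {
    public static final Map<String, List<String>> BUCKETS = Map.of(
            "Execution Level", List.of("Regression", "Nightly", "Build-45"),
            "Single Test Report", List.of("Banking Check Deposit", "iOS Team"),
            "Logical Steps", List.of("Login", "Search"));

    // Returns which bucket a given tag belongs to, for triage grouping.
    public static String bucketOf(String tag) {
        return BUCKETS.entrySet().stream()
                .filter(e -> e.getValue().contains(tag))
                .map(Map.Entry::getKey)
                .findFirst()
                .orElse("Unclassified");
    }

    public static void main(String[] args) {
        System.out.println(bucketOf("Regression")); // prints "Execution Level"
    }
}
```

Keeping the taxonomy in a single shared class makes it harder for individual tests to drift into ad-hoc, unfilterable tag names.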

Suggested Tagging for Advanced Digital Quality Visibility

Execution Level Tag Categories:

  • Test type: "Regression", "Unit", "Nightly", "Smoke"
  • Build number, Job, Branch
  • CI server names
  • Target platforms: "iOS", "Android", "Chrome", "IOT"
  • Release/Sprint versions
  • Test framework associations: "Appium", "Espresso", "Selenium", "UFT"
  • Test code languages: "Java", "C#", "Python"

Single Test Report Tags:

  • Test scenario identifiers: "Banking Check Deposit", "Geico Login"
  • Environmental identifiers: "Testing 2G conditions", "Testing on iOS Devices", "Testing Device Orientation", "Testing Location Changes", "Persona"
  • Team names: "iOS Team", "Android Team", "UI Team"

Logical Test Steps:

  • Functional areas: "Login", "Search"
  • Functional actions: "Launch App", "Press Back", "Click on Menu", "Navigate to page"

Table 1: Suggested Custom Tags with Classification Accelerates Analysis

Looking at the above table, it makes sense for teams to define regression tags that cover only the relevant tests per functional area, as recommended in the Logical Steps column, while each test step is named to indicate what the test is doing. When implemented correctly, at the end of each execution management and other relevant personas can easily correlate the high-level suite, the middle-layer tested area, and finally the single test failure.

Such governance and management of the entire suite can support better planning, triaging, and decision making.

An additional benefit, when using proper tags, can be realized when implementing advanced CI.

Perfecto's Reporting SDK supports Jenkins and Groovy APIs, which allows these tools to communicate with the reporting suite. Filtering a test report by job number, build number, or release version becomes simple and provides proper insights on demand.
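One way to wire this up, sketched below under the assumption that the suite runs inside Jenkins, is to read the standard JOB_NAME and BUILD_NUMBER environment variables and feed the values into the execution's Job context. The class name is hypothetical, and the SDK wiring itself is omitted:

```java
// Sketch: derive CI job identifiers from the standard Jenkins environment
// variables so reports can later be filtered by job and build number.
// The values would typically be passed to `new Job(name, number)` when
// building the PerfectoExecutionContext; that wiring is omitted here.
public class CiJobInfo {
    static String jobName() {
        String name = System.getenv("JOB_NAME");    // set by Jenkins
        return (name != null) ? name : "local-run"; // fallback outside CI
    }

    static int buildNumber() {
        String num = System.getenv("BUILD_NUMBER"); // set by Jenkins
        return (num != null) ? Integer.parseInt(num) : 0;
    }

    public static void main(String[] args) {
        System.out.println(jobName() + " #" + buildNumber());
    }
}
```

Because the same code runs unchanged on a developer machine (falling back to "local-run"), local executions and CI executions end up in the same report stream but remain distinguishable by tag.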

Fig 2: Combining The 3 DigitalZoom™ Tags

Step 1: Getting Started with DigitalZoom™ and Basic Tagging

To start working with this technology, teams need to download the Reporting SDK and integrate it into their IDE of choice.

The detailed setup instructions are available at the above link.

Basically, users can choose one of the following methods to download and set up DigitalZoom™:

  1. Directly download the Reporting SDK (as a .jar file for Java, a Ruby gem, etc.) as described in the document at the above URL.
  2. Configure a dependency management tool (Maven, Gradle, Ivy) to download it for you.

The uniqueness of the Reporting SDK is its ability to “break” a long execution into small building blocks and differentiate between methods or smaller test pieces.

The Result:

No more endless reports with hundreds or thousands of commands. The Reporting SDK enables teams to break the execution into a reasonable, digestible amount of content.
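Conceptually, the SDK's start/stop boundaries turn one long command stream into separate named reports. The toy recorder below is a pure illustration of that splitting, not SDK code; all names are made up:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy illustration (not the Reporting SDK): commands logged between start()
// and stop() land in their own named block, mimicking how testStart()/
// testStop() break one long execution into separate single-test reports.
public class BlockRecorder {
    private final Map<String, List<String>> blocks = new LinkedHashMap<>();
    private List<String> current;

    void start(String testName) { current = blocks.computeIfAbsent(testName, k -> new ArrayList<>()); }
    void log(String command)    { current.add(command); }
    void stop()                 { current = null; }

    Map<String, List<String>> blocks() { return blocks; }

    public static void main(String[] args) {
        BlockRecorder r = new BlockRecorder();
        r.start("loginTest");  r.log("open app");  r.log("type user"); r.stop();
        r.start("searchTest"); r.log("run query"); r.stop();
        System.out.println(r.blocks().keySet()); // prints [loginTest, searchTest]
    }
}
```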

Create an Instance of the Reporting Client

Use the following code to instantiate the Reporting Client in the test code:

@BeforeClass(alwaysRun = true)
public void baseBeforeClass() throws MalformedURLException {
    driver = createDriver();
    reportiumClient = createReportiumClient(driver);
}

Fig 3: DigitalZoom™ Reporting client instantiation

Important: Create the reportiumClient immediately after creating the driver.

  • In addition, create one reportiumClient instance per automation driver.

Once the reporting client is instantiated in the test code through the simple call shown above, the Reporting SDK allows the test developer to wrap each test with the basic commands:

For each annotated @Test, which basically identifies a test scenario, users can apply custom tags such as "Regression", "Unit", or functional-area-specific tags (via the PerfectoExecutionContext class) together with the following methods:

  1. testStart()
  2. testStep()
  3. testStop()

The following code shows a sample test that uses the above methods:

public void myTest() {
    reportiumClient.testStart("myTest", new TestContext("Sanity"));
    try {
        reportiumClient.testStep("Login to application");
        WebElement username = driver.findElement(By.id("username"));
        reportiumClient.testStep("Open a premium account");
        WebElement premiumAccount = driver.findElement(By.id("premium-account"));
        assertEquals(premiumAccount.getText(), "PREMIUM");
        reportiumClient.testStep("Transfer funds");
        // stopping the test - success
        reportiumClient.testStop(TestResultFactory.createSuccess());
    } catch (Throwable t) {
        // stopping the test - failure
        reportiumClient.testStop(TestResultFactory.createFailure(t.getMessage(), t));
    }
}
Fig 4: Sample implementation of test code with all supported methods

As mentioned in the introduction, creating functional tests that do not utilize tags as a method of pre/post-test execution analysis and triaging usually results in lengthy and inefficient processes.

As seen in Fig 5, marking a set of tests with a context tag (withContextTags) makes it much easier during test debugging and execution to filter the tests relevant to that tag (in the example below, a "Regression" tag). In the same way, users can gather tests under a testing-type context named "Smoke", "UI", "CI", or similar, and also mark tests that cover a specific functional area such as Login or Search. These tags help manage the test execution flows and the resulting reports, as well as gather insights and trends across builds, CI jobs, and other milestones.

Creating Context Tags – Key Practice Toward Quality Fast Analysis and Test Planning

These are generic tags at the driver (entire execution) level. They are automatically added to each test running as part of the execution.

PerfectoExecutionContext perfectoExecutionContext = new PerfectoExecutionContext.PerfectoExecutionContextBuilder()
    .withProject(new Project("Sample Reportium project", "1.0"))
    .withJob(new Job("IOS tests", 45))
    .withContextTags("Regression")
    .build();
ReportiumClient reportiumClient = new ReportiumClientFactory().createPerfectoReportiumClient(perfectoExecutionContext);

Fig 5: Using DigitalZoom™ custom tags through ContextTags capabilities

To add tags to a single test, use the TestContext class and create the instance when starting the specific test.

These are specific tags at the single-test (method/function) level. They are automatically added only to that test.

Compared to the execution-level tags demonstrated in Fig 5 above, the use of tags within a single test scenario looks as follows (Fig 6):

public void myTest() {
	reportiumClient.testStart("myTest", new TestContext("Log-in Use Case", "iOSNativeAppLogin", "iOS Team"));
	// ... test steps and testStop() ...
}

Fig 6: Using tags within a single test scenario

To get the report URL post execution and drill down, users need to implement the following code:

String reportURL = reportiumClient.getReportUrl();
System.out.println("Report URL - " + reportURL);

Fig 7: Generating report URL sample code


When using tags within a single test as shown above, customers can distinguish tests and gain better flexibility when running a test in various contexts, conditions, etc.

If you are using the TestNG framework, it is strongly recommended to work with a TestNG listener so that all report statuses are reported and aggregated automatically, in the following way:

public void onTestStart(ITestResult testResult) {
	if (getBundle().getString("remote.server", "").contains("perfecto")) {
		getReportiumClient().testStart(testResult.getMethod().getMethodName(),
				new TestContext(testResult.getMethod().getGroups()));
	}
}

When leveraging TestNG, customers need to implement the ITestListener interface.

All test status results are reported through the following methods:

	public void onTestSuccess (ITestResult testResult) {
        ReportiumClient client = getReportiumClient();
        if (null != client) {
            client.testStop(TestResultFactory.createSuccess());
        }
    }

    public void onTestFailure (ITestResult testResult) {
        ReportiumClient client = getReportiumClient();
        if (null != client) {
            client.testStop(TestResultFactory.createFailure("An error occurred",
                    testResult.getThrowable()));
        }
    }

Step 2: Implementing Tags Across Application Functional Areas/Test Types

Now that we are clear on the environment setup for DigitalZoom™, let's understand how to structure a winning test suite that leverages tags and supports better planning and insights.

Perfecto created a getting-started example project (found in the Perfecto Git repository) that runs a set of RemoteWebDriver automated tests against Geico's responsive web site via TestNG on 3 platforms (Windows, Android, and iOS).

If you look at the example, you can see that adding a simple method with a tag called “Regression” as seen in Fig 8 below, can help you start building better tests and triaging failures as you’ll see later in this document.

    private static ReportiumClient createReportium(WebDriver driver) {
        PerfectoExecutionContext perfectoExecutionContext = new PerfectoExecutionContext.PerfectoExecutionContextBuilder()
            .withProject(new Project("Sample Geico Test", "1.0"))
            .withContextTags("Regression")
            .withWebDriver(driver)
            .build();
        return new ReportiumClientFactory().createPerfectoReportiumClient(perfectoExecutionContext);
    }

Fig 8: Including a new Tag in a test automation case

Once the above "Regression" tag is added in the test class, any test case like the one in Fig 8 is added and grouped under that tag.

One common use case for tags is generating the same context for a group of test cases that aren't executed from within CI and serve other quality purposes. Another good example is setting the release version or sprint number, which can later be used for comparison and trend illustration.
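For instance, a small naming helper (hypothetical, not SDK code) can keep release and sprint tag strings consistent so that trend comparisons across executions line up:

```java
// Hypothetical naming helper: consistent release/sprint tag strings that
// would be passed to withContextTags(...) at the execution level.
public class ReleaseTags {
    static String releaseTag(int major, int minor) {
        return "Release-" + major + "." + minor;
    }

    static String sprintTag(int sprint) {
        return "Sprint-" + sprint;
    }

    public static void main(String[] args) {
        System.out.println(releaseTag(1, 0)); // prints "Release-1.0"
        System.out.println(sprintTag(45));    // prints "Sprint-45"
    }
}
```

With every execution tagged this way, filtering the dashboard by "Sprint-45" versus "Sprint-44" gives an immediate release-over-release quality comparison.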

Drill Down to a Failure Root Cause Analysis using Tags

In the following example, we use the pre-defined Regression tag as part of triaging to isolate the real issue. Figures 9-12 demonstrate the entire process, down to the test code itself.

Fig 9: Applying tags within the dashboard view in DigitalZoom™ - 1st Step in Triaging

The above dashboard view enables filtering the entire suite to display only the results relevant to the Regression test scenarios. Hovering over the failures bucket allows drilling down to the actual report library (grid), as shown in Fig 10.

Fig 10: The new “Regression” tag within the report library grouping 3 relevant test cases


In the above grid view, users can access a Single Test Report and filter through the execution steps, as needed.

If we examine Fig 11, we can see a correlation between the test flow steps and the code in Fig 12.

Fig 11: Single Test Report test flow view

    // Test Method: navigate to Geico and get an insurance quote
    public void geicoInsurance() throws MalformedURLException {

        reportiumClient.testStep("Navigate to Geico webpage");
        // ... remaining steps omitted ...
    }

Fig 12: DigitalZoom™ testStep code example implementation

Obviously, when using such tags in the report, the ability to also group tests post execution by tags plus a secondary filter, such as target platform (Web or Mobile), adds another layer of insight, as seen in Fig 13.

Grouping Tests by Tags and More

Fig 13: DigitalZoom™ - grouping test report by Tags and secondary filter option like Device


The custom view shown in Fig 13 has two levels of group-by: Tags and Devices.

As you can see in Fig 13, we included the Selenium tag (users can include or exclude tags via the filter as needed) and, in addition, created a filter for the specific devices of interest. The result is a custom view that merges the Selenium tag with the relevant devices we wish to examine.

Fig 14: Save Custom View Option within DigitalZoom

Assuming the view created in Fig 13 fits the organization and its various personas and offers the right quality visibility, the DigitalZoom™ solution supports saving this custom view as either a private or a shared view for future use (Fig 14).

Perfecto strongly recommends building a triaging process that takes into consideration multiple custom views supporting the quality goals of the project. In addition, the custom-view creation phase is the right step in the triaging process to identify any existing gaps in your tags and your reporting-test-driven-development implementation.

Step 3: Implementing Multiple Tags Across Several Applications

Now that it is clear how to set up DigitalZoom™ and work with the supported SDK methods (testStep, testStart, testStop) and tags, we can scale the method to multiple applications and various test scenarios and use cases.

As a first step, let’s create a new test class and include a new tag name.

In this specific case, we created a simple search test within the Perfecto responsive web site. The test opens the Perfecto web site, navigates to the search text box, and performs a search for the phrase "Digital Index". This test is added to the existing testng.xml file used to execute the Geico example above.

As you can see in Fig 15, there is an implementation of the above scenario with a newly added tag named “Perfecto Search”.

    private static ReportiumClient createReportium(WebDriver driver) {
        PerfectoExecutionContext perfectoExecutionContext = new PerfectoExecutionContext.PerfectoExecutionContextBuilder()
            .withProject(new Project("Sample Perfecto Test", "1.0"))
            .withContextTags("Perfecto Search")
            .withWebDriver(driver)
            .build();
        return new ReportiumClientFactory().createPerfectoReportiumClient(perfectoExecutionContext);
    }

    // Test Method, navigate to Perfecto Web Site
    public void perfectoSearch() throws MalformedURLException {

        reportiumClient.testStep("Navigate to Perfecto Home Page");
        reportiumClient.testStep("Press Start Free button");
        driver.findElement(By.xpath("//*[text()='Start Free']")).click();
        reportiumClient.testStep("Enter my first name");
        driver.findElement(By.name("First Name")).sendKeys("Eran");      // locator illustrative
        reportiumClient.testStep("Enter my last name");
        driver.findElement(By.name("Last Name")).sendKeys("Kinsbruner"); // locator illustrative
        reportiumClient.testStep("Enter my email adr");
        reportiumClient.testStep("Enter my phone number");
        reportiumClient.testStep("Enter my company name");

        System.out.println("Done: Perfecto Search");
    }

Fig 15: New test class with an added tag named "Perfecto Search"

Once we have scaled our test suite, we can compare a "non-tagged" report with a "tagged" one.

In Fig 16, after a full test execution of both the Regression and Perfecto Search test scenarios, we see a large list of reports that is hard to navigate and analyze.

Fig 16: DigitalZoom™ Grid view unfiltered and with no tags selected

When users want to drill down only to the two reports tagged above, it is very easy to get a subset report and then, as shown earlier in this document, drill down to the Single Test Report (see Figs 17-20).

Fig 17: Complete cloud aggregated reports generated through DigitalZoom

Fig 18: DigitalZoom™ reports filtered by Tags and Failures Only

Fig 19: DigitalZoom™ reports filtered by failures only on Chrome browsers

Fig 20: Single Test Report with logical steps and details for the specific test failure

From the above Fig 20, a developer who wishes to investigate and triage the failure can easily obtain additional test artifacts, including environmental details, videos, device vitals, a network PCAP file, PDF reports, logs, and more (see, for example, Fig 21).

Fig 21: Detailed persona single test report

Implementing Logical Steps within The Test Code

Now that we are clear on using execution-level tags, let's turn to specifying logical steps. Logical steps are fundamental to injecting order and sense into the entire test scenario. With that in mind, make sure each test step in your scenario is well documented via the Reporting SDK so it appears clearly in the test reports. As can be seen in Fig 22, prior to each step we document the logical action to make it easy to track once the execution is completed.

Fig 22: Sample java test steps for Geico responsive site

If we execute the above example, we can see side by side the logical steps as they were developed in Java in the above snippet and drill into every step in the Single Test Report. In addition, clicking a specific logical step brings up the visual of the specific command executed, as seen in Fig 23.

Fig 23: Detailed test step report with visuals, video, and more


What we have documented above should allow any practitioner to shift from a basic test report (a legacy Perfecto report, a TestNG report, or other) to a more customizable test report that, as demonstrated above, achieves the following outcomes:

  • Better-structured test scenarios and test suites.
  • Tags applied from early test authoring as a method for faster triaging and fix prioritization.
  • Tag-based tests shifted into planned test activities (CI, regression, specific functional-area testing, etc.).
  • Easy filtering of big test data, with drill-down into specific failures per test, platform, test result, or group.
  • Elimination of flaky tests through high-quality visibility into failures.

The result of the above is a methodology-based RTDD workflow that is much easier to maintain than before.

To learn more and stay constantly up to date with DigitalZoom™, please bookmark the following URL.