
Last updated: Oct 14, 2019 11:56


This section explains how to enable the Reporting Test Driven Development (RTDD) workflow.

Tag-driven reports make it possible to prioritize failure triaging.

Introduction

The mobile market is quite mature. Most organizations are deeply invested in digital technologies spanning mobile, web, IoT, and other channels. With this investment comes an increased risk of losing business due to poor quality, defects that leak into production, and late releases.

Among the challenges many enterprises face when it comes to assuring continuous digital quality are quality visibility, test planning and optimization, test flakiness, and false test results (false positives and negatives). This section addresses some of these challenges and their root causes and suggests an approach to digital quality insight that works.


Digital quality challenges

When Perfecto engages with customers in market segments such as finance, retail, and insurance, the customers typically raise the following issues:

  • Organizations find it hard to triage failures post-execution (whether tests run within CI or outside it).
  • Planning, management, and optimization of test cycles based on proper insights is a significant challenge.
  • Inconsistent test results and test flakiness are usually the root cause of project delays, product area blind spots, as well as coverage concerns in respective functional areas.
  • Execution-based reports are too long to validate and examine. The ability to break long reports into smaller blocks is a necessary ingredient in faster triaging.
  • On-demand management view of the entire product quality from top to bottom is a hard-to-achieve goal, especially around large test suites.
  • It is not possible to break long test execution into smaller test reports as a way to achieve faster quality analysis of issues.


 

Fig 1: Common challenges with digital quality


Perfecto’s test analysis with Smart Reporting

Perfecto's Smart Reporting feature allows you to optimize quality visibility for better test analysis. It empowers you to build structured test logic that can be used for test reports. In addition, Smart Reporting leverages tags built into the tests from the initial authoring stage as a driver for future test planning, defect triaging, scaling continuous integration (CI) testing activities, and more. Smart Reporting lets you use the Reporting SDK and methodology within your framework and development language of choice: the SDK supports authoring tests in Java, JavaScript, C#, Python, Ruby, and HPE UFT, with support for IDEs such as Android Studio, IntelliJ, Eclipse, and Xcode.

This section provides a deep-dive into the available tags and how you can best use them to achieve these desired outcomes:

  • Less flaky tests and stable execution
  • On-demand quality visibility
  • Test planning, management, and optimization
  • Data-Driven decision making

When you start to build test execution suites, it is important to begin by pre-configuring custom tags that are meaningful and valuable to your teams. The tags should be classified into three buckets, as suggested in Table 1:

  1. Execution-level tags
  2. Single test report tags
  3. Logical step names

Suggested tagging for advanced digital quality visibility

Execution Level Tags

  • Test type: "Regression", "Unit", "Nightly", "Smoke"
  • CI: Build number, Job, Branch
  • CI Server Names: "Alexander"
  • Team Names: "iOS Team", "Android Team", "UI Team"
  • Platforms: "iOS", "Android", "Chrome", "IOT"
  • Release/Sprint Versions: "9.x"
  • Test Framework Associations: "Appium", "Espresso", "Selenium", "UFT"
  • Test Code Languages: "Java", "C#", "Python"

Single Test Report Tags

  • Test Scenario Identifiers: "Banking Check Deposit", "Geico Login"
  • Environmental Identifiers: "Testing 2G conditions", "Testing on iOS Devices", "Testing Device Orientation", "Testing Location Changes", "Persona"

Logical Test Steps

  • Functional Areas: "Login", "Search"
  • Functional Actions: "Launch App", "Press Back", "Click on Menu", "Navigate to page"

Table 1: Suggested custom tags, classified to accelerate analysis

Looking at Table 1, it makes sense for teams to define regression tags that cover only the relevant tests per functional area, as recommended in the Logical Test Steps column, while each test step has a name indicating what the test is doing. When implemented correctly, at the end of each execution, management and other relevant personas can easily correlate the high-level suite, the middle-layer tested area, and, finally, the single test failure. Such governance and management of the entire suite supports better planning, triaging, and decision making.
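This correlation can be sketched as a plain-Java model. Note that this is illustrative only: it does not use the Reporting SDK, and the record type, tag values, and test names are hypothetical.

```java
import java.util.List;
import java.util.Set;

public class TagFilterSketch {
    // Hypothetical model of one test result with its three tag buckets
    record TestResult(String name, Set<String> executionTags,
                      Set<String> testTags, List<String> steps, boolean passed) {}

    public static void main(String[] args) {
        List<TestResult> suite = List.of(
                new TestResult("geicoInsurance", Set.of("Regression", "iOS Team"),
                        Set.of("Geico Login"), List.of("Navigate to Geico webpage"), false),
                new TestResult("perfectoSearch", Set.of("Smoke"),
                        Set.of("Perfecto Search"), List.of("Navigate to Perfecto Home Page"), true));

        // Filter the high-level suite by an execution tag, then narrow down to
        // the single failing test - the triaging flow described above.
        List<TestResult> failures = suite.stream()
                .filter(t -> t.executionTags().contains("Regression"))
                .filter(t -> !t.passed())
                .toList();
        failures.forEach(t ->
                System.out.println(t.name() + " failed at step: " + t.steps().get(0)));
    }
}
```

In the Smart Reporting dashboard, the same narrowing is done through the tag filters rather than in code; the sketch only shows why classified tags make the drill-down cheap.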

With the use of proper tags, you can realize an additional benefit when implementing advanced CI. Perfecto's Reporting SDK supports Jenkins and Groovy APIs for easy communication with the reporting suite. Filtering your test report by job number, build number, or release version becomes simple and provides proper insights on demand.
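As a minimal sketch of what a CI script might do with such filters, the snippet below builds a report-library URL filtered by job name and build number. The base URL and the query-parameter names (jobName, jobNumber) are assumptions for illustration; check the Reporting API documentation for the actual parameters.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class ReportUrlSketch {
    // Builds a report-library URL filtered by CI job and build number.
    // The base URL and parameter names are illustrative assumptions.
    static String filteredReportUrl(String baseUrl, String jobName, int buildNumber) {
        String job = URLEncoder.encode(jobName, StandardCharsets.UTF_8);
        return baseUrl + "?jobName=" + job + "&jobNumber=" + buildNumber;
    }

    public static void main(String[] args) {
        System.out.println(filteredReportUrl(
                "https://mycloud.reporting.example/library", "IOS tests", 45));
    }
}
```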

Fig 2: Combining the 3 Smart Reporting tags

1 | Get started with Smart Reporting and basic tagging

To start working with this technology, teams need to download the Reporting SDK and integrate it into their IDE of choice.

The detailed setup instructions are available at the above link.

Basically, you can select one of the following methods to download and set up Smart Reporting:

  1. Direct download of the Reporting SDK (as a .jar file for Java, a Ruby gem, etc.), as described in the document at the above URL.
  2. Configuring a dependency management tool (Maven, Gradle, or Ivy) to download it for you.
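For example, with Maven the dependency declaration looks roughly like the following. The coordinates and version shown here are an assumption based on common SDK naming; verify them against the setup instructions before use:

```xml
<dependency>
    <groupId>com.perfecto.reporting-sdk</groupId>
    <artifactId>reportium-java</artifactId>
    <!-- replace with the latest released version -->
    <version>2.x</version>
</dependency>
```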

The uniqueness of the Reporting SDK is its ability to “break” a long execution into small building blocks and differentiate between methods or smaller test pieces.

The result:

No more endless reports with hundreds/thousands of commands. The Reporting SDK enables teams to break the execution into a reasonable/digestible amount of content.

Create an instance of the reporting client

Use the following code to instantiate the Reporting client in the test code:

@BeforeClass(alwaysRun = true)
public void baseBeforeClass() throws MalformedURLException {
    driver = createDriver();
    reportiumClient = createReportiumClient(driver);
}

Fig 3: Smart Reporting client instantiation

Important: Create the reportiumClient in proximity to the driver. In addition, create one ReportiumClient instance per automation driver.

After you instantiate the reporting client in your test code through this simple call, the Reporting SDK allows the test developer to wrap each test with the basic commands.

For each method annotated with @Test that identifies a test scenario, you can apply custom tags such as "Regression" or "Unit", or functional-area-specific tags (by using the PerfectoExecutionContext class), together with the following methods:

  1. testStart()
  2. testStep()
  3. testStop()

The following code shows a sample test that uses the above methods:

@Test
public void myTest() {
    reportiumClient.testStart("myTest", new TestContext("Sanity"));

    try {
        reportiumClient.testStep("Login to application");
        WebElement username = driver.findElement(By.id("username"));
        username.sendKeys("myUser");
        driver.findElement(By.name("submit")).click();

        reportiumClient.testStep("Open a premium account");
        WebElement premiumAccount = driver.findElement(By.id("premium-account"));
        assertEquals(premiumAccount.getText(), "PREMIUM");
        premiumAccount.click();

        reportiumClient.testStep("Transfer funds");
        ...
        // stopping the test - success
        reportiumClient.testStop(TestResultFactory.createSuccess());
    } catch (Throwable t) {
        // stopping the test - failure
        reportiumClient.testStop(TestResultFactory.createFailure(t.getMessage(), t));
    }
}

Fig 4: Sample implementation of test code with all supported methods

As mentioned in the introduction, creating functional tests that do not utilize tags as a method of pre/post-test execution analysis and triaging usually results in lengthy and inefficient processes.

As seen in Fig 5, marking a set of tests with a context tag makes it much easier during test debugging and execution to filter the tests relevant to that tag (the example below uses a "Regression" tag). In the same way, you can gather tests under a testing-type context named "Smoke", "UI", "CI", or similar, and also mark tests that cover a specific functional area, such as Login or Search. These tags help manage the test execution flows and the resulting reports, and they gather insights and trends across builds, CI jobs, and other milestones.

Creating context tags is a key practice toward fast quality analysis and test planning.

These are generic tags on the Driver (entire execution) level. They are added automatically to each test running as part of this execution.

PerfectoExecutionContext perfectoExecutionContext = new PerfectoExecutionContext.PerfectoExecutionContextBuilder()
    .withProject(new Project("Sample Reportium project", "1.0"))
    .withJob(new Job("IOS tests", 45))
    .withContextTags("Regression")
    .withWebDriver(driver)
    .build();
ReportiumClient reportiumClient = new ReportiumClientFactory().createPerfectoReportiumClient(perfectoExecutionContext);

Fig 5: Using Smart Reporting custom tags through ContextTags capabilities

To add tags to a single test, we use the TestContext class and create the instance when starting the specific test.

These are specific tags on the single test (method/function) level. They will be automatically added to this test only.

Compared to the entire execution level tags demonstrated in Fig 5, the use of tags within a single test scenario would look as follows (Fig 6):

@Test
public void myTest() {
	reportiumClient.testStart("myTest", new TestContext("Log-in Use Case", "iOSNativeAppLogin", "iOS Team"));

Fig 6: Using tags within a single test scenario

To get the report as a URL post-execution and drill down into it, implement the following code:

String reportURL = reportiumClient.getReportUrl();
System.out.println("Report URL - " + reportURL);

Fig 7: Generating report URL sample code

When using tags within a single test, as shown above, it is possible to distinguish and gain better flexibility when running a test in various contexts and under different conditions.

If you use the TestNG framework, we strongly recommend working with a TestNG listener so that all report statuses are reported and aggregated automatically, in the following way:

@Override
public void onTestStart(ITestResult testResult) {
    if (getBundle().getString("remote.server", "").contains("perfecto")) {
        createReportiumClient(testResult).testStart(testResult.getMethod().getMethodName(),
                new TestContext(testResult.getMethod().getGroups()));
    }
}

When leveraging TestNG, you need to implement the ITestListener interface.

All test status results are reported through the following method:


@Override
public void onTestSuccess(ITestResult testResult) {
    ReportiumClient client = getReportiumClient();
    if (null != client) {
        client.testStop(TestResultFactory.createSuccess());
        logTestEnd(testResult);
    }
}

@Override
public void onTestFailure(ITestResult testResult) {
    ReportiumClient client = getReportiumClient();
    if (null != client) {
        client.testStop(TestResultFactory.createFailure("An error occurred",
                testResult.getThrowable()));
        logTestEnd(testResult);
    }
}

2 | Implement tags across application functional areas/test types

Now that we are clear on the environment setup to use Smart Reporting, let’s understand how to structure a winning test suite that leverages tags and supports better planning and insights.

Perfecto created a getting-started example project (found in the Perfecto Git repository) that uses a set of RemoteWebDriver automated tests on Geico's responsive website, running via TestNG on three platforms (Windows, Android, and iOS).

If you look at the example, you can see that adding a simple method with a tag called “Regression”, as seen in Fig 8 below, can help you start building better tests and triaging failures, as you’ll see later in this document.

private static ReportiumClient createReportium(WebDriver driver) {
    PerfectoExecutionContext perfectoExecutionContext = new PerfectoExecutionContext.PerfectoExecutionContextBuilder()
            .withProject(new Project("Sample Geico Test", "1.0"))
            .withContextTags("Regression")
            .withWebDriver(driver)
            .build();

    return new ReportiumClientFactory().createPerfectoReportiumClient(perfectoExecutionContext);
}

Fig 8: Including a new tag in a test automation case

When the above “Regression” tag is included in the test class, any test cases like the above (Fig 8) are added and grouped under that tag.

One of the common use cases for using tags is the need to generate the same context for a group of test cases that are not executed from within CI and are used for other quality purposes. Another good example is setting the release version or sprint number so that it can later be used for comparison and trending illustration.
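The release/sprint trending use case can be sketched in plain Java. This is an illustrative model only: real pass/fail data would come from the reporting platform rather than being hard-coded, and the release tag values are hypothetical.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ReleaseTrendSketch {
    // Hypothetical (release tag, passed) pairs for illustration only
    record Result(String releaseTag, boolean passed) {}

    public static void main(String[] args) {
        List<Result> results = List.of(
                new Result("9.1", true), new Result("9.1", false),
                new Result("9.2", true), new Result("9.2", true));

        // Group results by release tag and compute a pass rate per release -
        // the comparison/trending use case described above.
        Map<String, String> trend = new LinkedHashMap<>();
        for (Result r : results) {
            trend.merge(r.releaseTag(), (r.passed() ? "1" : "0"), (a, b) -> a + b);
        }
        trend.forEach((release, flags) -> {
            long passes = flags.chars().filter(c -> c == '1').count();
            System.out.println(release + ": " + (100 * passes / flags.length()) + "% passed");
        });
    }
}
```

In Smart Reporting itself, the equivalent comparison is produced by filtering the dashboard on the release or sprint tag rather than by writing code.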

Drill down to a failure root cause analysis using tags

In the following example, we use the pre-defined Regression tag as part of our triaging to isolate the real issue. Figures 9-12 demonstrate the entire process up to the test code itself.

Fig 9: Applying tags within the dashboard view - 1st step in triaging

The above dashboard view enables filtering the entire suite to display only the results relevant to the Regression test scenarios. Moving the pointer over the failures bucket allows drilling down to the actual report library (grid) as shown in Fig 10.

Fig 10: The new “Regression” tag within the report library grouping 3 relevant test cases

In the above grid view, you can access a single test report and filter through the execution steps as needed. If we examine Fig 11, we can see a correlation between the test flow steps and the code in Fig 12.

Fig 11: Single Test Report test flow view

    // Test Method, navigate to Geico and get insurance quote
    @Test
    public void geicoInsurance() throws MalformedURLException {

        reportiumClient.testStep("Navigate to Geico webpage");
        driver.get("http://www.geico.com");

Fig 12: Smart Reporting testStep code example implementation

Obviously, when using such tags in the report, the ability, post-execution, to also group these tests by tags plus a secondary filter, such as target platform (Web or Mobile), adds an additional layer of insight, as seen in Fig 13.

Group tests by tags and more

Fig 13: Grouping test reports by Tags and secondary filter option like Device

The custom view shown in Fig 13 has two levels of grouping: Tags and Devices. As you can see, we included the Selenium tag (you can include or exclude tags via the filter as needed) and created a filter for the specific devices of interest to us. The result is a custom view that merges the Selenium tag with the relevant devices we wish to examine.

Fig 14: Save custom view option

Assuming the view created in Fig 13 fits the organization and its various personas and offers the right quality visibility, Smart Reporting supports saving this custom view as either a private or a shared view for future use (Fig 14).

Perfecto strongly recommends building a triaging process that takes into consideration multiple custom views that support the quality goals of the project. In addition, the custom-view creation phase is the right step in the triaging process for identifying any existing gaps in your tags and your reporting test-driven development implementation.

3 | Implement multiple tags across several applications

Now that you understand how to set up Smart Reporting and work with the supported SDK methods (testStep, testStart, testStop) and tags, we can scale the method to multiple applications and various test scenarios and use cases.

As a first step, let’s create a new test class and include a new tag name.

In this specific case, we created a simple search test within the Perfecto responsive website. The test opens the Perfecto website, navigates to the search text box, and performs a search for the phrase "Digital Index". This test is added to the existing testng.xml file used to execute the Geico example above. As you can see in Fig 15, the scenario is implemented with a newly added tag named "Perfecto Search".

    private static ReportiumClient createReportium(WebDriver driver) {
        PerfectoExecutionContext perfectoExecutionContext = new PerfectoExecutionContext.PerfectoExecutionContextBuilder()
                .withProject(new Project("Sample Perfecto Test", "1.0"))
                .withContextTags("Perfecto Search")
                .withWebDriver(driver)
                .build();

        return new ReportiumClientFactory().createPerfectoReportiumClient(perfectoExecutionContext);
    }

    // Test Method, navigate to Perfecto Web Site
    @Test
    public void perfectoSearch() throws MalformedURLException {

        reportiumClient.testStep("Navigate to Perfecto Home Page");
        driver.get("http://www.perfectomobile.com");
        reportiumClient.testStep("Press Start Free button");
        driver.findElement(By.xpath("//*[text()='Start Free']")).click();
        reportiumClient.testStep("Enter my first name");
        driver.findElement(By.id("First Name")).sendKeys("Eran");
        reportiumClient.testStep("Enter my last name");
        driver.findElement(By.id("Last Name")).sendKeys("Kinsbruner");
        reportiumClient.testStep("Enter my email adr");
        driver.findElement(By.id("Email")).sendKeys("erank@perfectomobile.com");
        reportiumClient.testStep("Enter my phone number");
        driver.findElement(By.id("Phone")).sendKeys("+7816652345");
        reportiumClient.testStep("Enter my company name");
        driver.findElement(By.id("Company")).sendKeys("Eran");

        System.out.println("Done: Perfecto Search");
    }

Fig 15: New test class with an added tag named “Perfecto Search”
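The testng.xml wiring mentioned above could look roughly like the following. The suite, test, and class names here are hypothetical placeholders, not the actual names from the example project:

```xml
<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd">
<suite name="RTDD Suite">
    <test name="Tagged web tests">
        <classes>
            <!-- hypothetical class names for the Geico and Perfecto Search tests -->
            <class name="com.mycompany.tests.GeicoInsuranceTest"/>
            <class name="com.mycompany.tests.PerfectoSearchTest"/>
        </classes>
    </test>
</suite>
```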

Now that we have scaled our test suite, we can compare a non-tag-based report with a tagged one. Fig 16 shows a full test execution of both the Regression and the Perfecto Search test scenarios, along with a long list of reports that is hard to navigate and analyze.

Fig 16: Grid view unfiltered and with no tags selected

When you want to drill down to only the two reports that we tagged above, it is easy to get a subset report and then drill down further, as shown earlier in this document, to the single test report (STR); see Figs 17-20.

Fig 17: Complete cloud aggregated reports generated through Smart Reporting

Fig 18: Reports filtered by Tags and Failures Only

Fig 19: Reports filtered by failures only on Chrome browsers

Fig 20: Single test report with logical steps and details for the specific test failure

From Fig 20, if you want to investigate and triage the failure, you can easily obtain additional test artifacts, including environmental details, videos, vitals, a network PCAP file, PDF reports, logs, and more (see, for example, Fig 21).

Fig 21: Detailed persona single test report

4 | Implement logical steps within the test code

Logical steps are fundamental to injecting order and sense into entire test scenarios. With that in mind, make sure each test step in your scenario is well documented per the Reporting SDK so that it appears clearly in the test reports. As can be seen in Fig 22, prior to each step we document the logical action, making it easy to track once the execution is complete.

Fig 22: Sample java test steps for Geico responsive site

If we execute the above example, we can see the logical steps side by side as they were developed in Java in the snippet above, and drill into every step in the single test report. In addition, clicking a specific logical step brings up the visual for the specific command executed, as seen in Fig 23.

Fig 23: Detailed test step report with visuals, video, and more

Summary

What we have documented above should allow you to shift from a basic test report - whether a legacy Perfecto report, a TestNG report, or other - to a more customizable test report that, as demonstrated, allows you to achieve the following outcomes:

  • Better structured test scenarios and test suites.
  • Use of tags from early test authoring as a method for faster triaging and prioritizing fixes.
  • Shifting of tag-based tests into planned test activities (CI, Regression, Specific functional area testing, etc.).
  • Easy filtering of big test data and drilldown into specific failures per test, platform, test result, or through groups.
  • Elimination of flaky tests through high-quality visibility into failures.

The result of the above is a methodology-based RTDD workflow that is easier to maintain.

To learn more, and to stay up to date with Smart Reporting, bookmark the following URL:

http://developers.perfectomobile.com/display/PD/Reporting