
Last updated: Aug 21, 2019 15:34 by Prasant Sutaria

At times, we have observed that the number of tests executed as part of a test suite doesn't match the count shown in the Digital Zoom CI Dashboard job. There can be several reasons for the mismatch, but the most common one is that, for some reason, the driver was never initialized.

The Digital Zoom SDK depends on the test driver. If driver initialization fails for some reason, the test instance may be missing from the Dashboard job.

Suggested Best Practices:

1. Provide the "scriptName" desired capability as part of driver initialization.

scriptName provides a default test name in case driver initialization fails. This can help reduce the number of test cases named "RemoteWebdriver".

Note: The test name provided later by the Digital Zoom API can override the default test name provided by the scriptName desired capability.

Example:

DesiredCapabilities capabilities = new DesiredCapabilities();
String scriptName = "General-Device-Test";
capabilities.setCapability("scriptName", scriptName);

2. Provide the "report.jobName" and "report.jobNumber" desired capabilities as part of driver initialization.

The report.jobName and report.jobNumber capabilities provide the default job name and job number that the test belongs to. This can help resolve the test case count mismatch issue.

Note: The report.jobName and report.jobNumber values provided later by the Digital Zoom API can override the default job name and job number provided by the above desired capabilities.
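
Example (a minimal sketch, assuming a Selenium RemoteWebDriver setup; the cloud hub URL, CI environment variable names, and class name are placeholders rather than part of the Digital Zoom SDK):

import java.net.URL;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class JobReportingCapabilitiesExample {
    public static void main(String[] args) throws Exception {
        DesiredCapabilities capabilities = new DesiredCapabilities();
        // Default test name, used when driver initialization fails (see item 1).
        capabilities.setCapability("scriptName", "General-Device-Test");
        // Default job name and number; the environment variable names below are
        // only an example of values a CI server might expose.
        capabilities.setCapability("report.jobName", System.getenv("JOB_NAME"));
        capabilities.setCapability("report.jobNumber", Integer.parseInt(System.getenv("BUILD_NUMBER")));

        // Placeholder cloud endpoint - replace with your actual hub URL.
        RemoteWebDriver driver = new RemoteWebDriver(new URL("https://cloud.example.com/wd/hub"), capabilities);
        try {
            // ... run the test steps ...
        } finally {
            driver.quit();
        }
    }
}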

3. Verify that the testing framework is indeed creating a logical test for each test.

In some cases, frameworks are set up to skip or fail subsequent tests if a previous test within a test suite fails. Ensure that Digital Zoom is notified about each test: either by making sure a connection attempt to a device is sent (an attempt to instantiate the driver), or, if a live driver already exists, by explicitly sending TestStart and TestStop and failing the test if you do not want it to be executed. The "Failure Reason" can be used to mark these tests as not requiring further analysis.
Failing to do the above and simply skipping tests based on previous execution outcomes will result in changing test counts and make analysis much harder.
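
Example (a minimal TestNG listener sketch; ReportingClient, testStart, and testStop are hypothetical stand-ins for your Digital Zoom reporting client and its TestStart/TestStop calls, not a definitive API):

import org.testng.ITestListener;
import org.testng.ITestResult;

// Hypothetical stand-in for the Digital Zoom reporting client.
interface ReportingClient {
    void testStart(String testName);
    void testStop(boolean passed, String failureReason);
}

// Assumes TestNG 7+, where ITestListener provides default no-op methods.
public class SkippedTestReporter implements ITestListener {

    private final ReportingClient reportingClient;

    public SkippedTestReporter(ReportingClient reportingClient) {
        this.reportingClient = reportingClient;
    }

    @Override
    public void onTestSkipped(ITestResult result) {
        // Explicitly notify the dashboard so the test count stays stable:
        // start the logical test, then stop it as a failed test whose
        // "Failure Reason" marks it as not requiring further analysis.
        reportingClient.testStart(result.getMethod().getMethodName());
        reportingClient.testStop(false, "Skipped: previous test in the suite failed");
    }
}

Because the listener takes the client in its constructor, it would typically be registered programmatically (for example via TestNG's addListener) rather than through annotations.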


Related Article: Defining capabilities
