Selecting a UI Test Execution Framework

Posted on March 25, 2015
By Martin Lienhard

UI test tools such as Selenium do not execute the test scenarios you develop. Instead, it is up to you to determine how to execute them. The following capabilities are critical when selecting a test execution tool.

  1. Run tests in parallel

    The test runner should support executing scenarios and tests concurrently, primarily at the method level and secondarily at the class level.

    UI tests take significantly longer than programmatic tests such as unit, integration, and web service tests. In a continuous delivery (CD) pipeline it is critical to execute automated tests as quickly as possible and receive immediate feedback on the state of the build. The only practical way to achieve this is by running tests concurrently.
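
    The idea can be sketched with a thread pool that fans independent test callables out across workers. This is an illustrative stdlib-Python sketch with hypothetical test names, not any particular runner's API:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical test methods -- stand-ins for independent UI scenarios.
def test_login():
    return ("test_login", "passed")

def test_search():
    return ("test_search", "passed")

def test_checkout():
    return ("test_checkout", "passed")

def run_in_parallel(tests, workers=4):
    """Run independent test callables concurrently and collect results."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(lambda t: t(), tests))

results = run_in_parallel([test_login, test_search, test_checkout])
```

    Method-level parallelism like this only works when each test owns its own browser instance and shares no mutable state with its siblings.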

  2. Setup and teardown methods

    The test runner should support before-method, after-method, before-class, and after-class hooks for setting up and tearing down methods and classes respectively.

    The before method can invoke your UI test tool, and the after method can tear it down. The after class can ensure that all of your UI test tool instances are torn down.

  3. Test suite

    The test runner should support the concept of a test suite that is configurable with parameters, preferably externalized to an XML-based format.

    If the test runner supports a test suite and parameters, then values such as operating system, browser, emulator, simulator, application, and environment can be passed to the before method to set up the test.

    If the test runner supports a test suite API, then you can dynamically generate your test suite and parameters.
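
    As a sketch, a suite file can be parsed into a suite name, a parameter map, and a test list. The XML schema below is hypothetical (loosely modeled on the kind of suite file TestNG uses), and the parsing is plain stdlib Python:

```python
import xml.etree.ElementTree as ET

# Hypothetical suite-file schema: one suite, named parameters, named tests.
SUITE_XML = """
<suite name="smoke">
  <parameter name="browser" value="firefox"/>
  <parameter name="environment" value="staging"/>
  <test name="login"/>
  <test name="checkout"/>
</suite>
"""

def load_suite(xml_text):
    """Extract the suite name, parameter map, and test names."""
    root = ET.fromstring(xml_text)
    params = {p.get("name"): p.get("value") for p in root.findall("parameter")}
    tests = [t.get("name") for t in root.findall("test")]
    return root.get("name"), params, tests

name, params, tests = load_suite(SUITE_XML)
```

    A suite API would generate the same structure programmatically instead of reading it from a file, which is useful when the browser/OS matrix is computed at build time.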

  4. Command line execution

    The test runner should support command-line invocation and accept configuration parameters to pass through to the framework.
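
    A minimal sketch of such an interface, using Python's stdlib argparse (flag names here are illustrative assumptions, not a real runner's options):

```python
import argparse

def parse_args(argv):
    """Parse runner configuration from the command line."""
    parser = argparse.ArgumentParser(description="UI test runner")
    parser.add_argument("--suite", required=True, help="path to suite XML")
    parser.add_argument("--browser", default="chrome")
    parser.add_argument("--threads", type=int, default=4)
    return parser.parse_args(argv)

# e.g. invoked as: runner --suite smoke.xml --browser firefox
args = parse_args(["--suite", "smoke.xml", "--browser", "firefox"])
```

    Command-line configuration is what lets a CI server vary the suite, browser, and parallelism per job without touching the test code.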

  5. Test parameterization

    The test runner should support parameterizing tests primarily at the method level, and secondarily at the class level. It should support reading data as a tabular collection that is passed to the test method's parameters or to the class constructor.

    Most of the time UI tests need a different set of parameters for each test method, not the same parameters for the entire class. The context of a unit test that passes identical parameters at the class level is completely different from the context of a UI test. Remember that a unit test class tests another class or a relationship between classes, whereas a UI test class tests the feature of a story, which in the application could comprise hundreds or thousands of classes. Therefore a test runner that parameterizes tests at the method level is preferable.

    The test runner should re-invoke the same method for each row of the data. The report should also show each test iteration and the parameter data that differentiates it.
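
    Method-level parameterization over tabular data can be sketched with unittest's subTest, which reports each data row as a separate iteration (the login function and data here are hypothetical):

```python
import unittest

# Tabular test data: one row per iteration of the same test method.
LOGIN_DATA = [
    ("alice", "secret", True),
    ("alice", "wrong",  False),
    ("",      "secret", False),
]

def login(username, password):
    """Hypothetical system under test."""
    return username == "alice" and password == "secret"

class LoginTest(unittest.TestCase):
    def test_login(self):
        for username, password, expected in LOGIN_DATA:
            # subTest labels each iteration with its parameter data,
            # so a failing row is identifiable in the report.
            with self.subTest(username=username, password=password):
                self.assertEqual(login(username, password), expected)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(LoginTest))
```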

  6. Test context

    The test runner should support a test context object that is accessible to each test method's thread. Tests should be able to read from and write to the context, and the context should provide a handle for writing test output to the test report.

    The test context should include the test method name, parameter data, test results, and a handle for writing output to the test report.
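
    A minimal sketch of such a context object, held in thread-local storage so each parallel test thread sees only its own (the class and field names are assumptions for illustration):

```python
import threading

class TestContext:
    """Per-test context: name, parameters, attributes, report output."""
    def __init__(self, name, params):
        self.name = name        # test method name
        self.params = params    # parameter data for this iteration
        self._attrs = {}        # values the test reads and writes
        self.output = []        # lines destined for the test report

    def set(self, key, value):
        self._attrs[key] = value

    def get(self, key):
        return self._attrs.get(key)

    def write(self, line):
        self.output.append(line)

# Thread-local storage keeps contexts isolated across parallel test threads.
local = threading.local()
local.ctx = TestContext("test_login", {"browser": "firefox"})
local.ctx.set("session_id", "abc123")
local.ctx.write("navigated to /login")
```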

  7. Integration with BDD

    The test runner should be able to execute scenarios written for behavior-driven development (BDD) tools such as Cucumber, JBehave, and SpecFlow.

  8. Test reporting

    A test report is critical for displaying the important details of each test run. The test runner should support report generation that can be persisted in XML and HTML formats. It should also have plugins for common continuous integration (CI) servers such as Jenkins, Travis, or Bamboo to display reports inside their dashboards.

    The report for each test method should at a minimum contain the following information:

    • Fully qualified test class name
    • Test method or feature name
    • Description
    • Tags or groups
    • Test status:
      • Passed
      • Failed
      • Skipped
      • Errors
    • Feature details:
      • Scenario
      • Given
      • When
      • Then
    • Start time in milliseconds
    • End time in milliseconds
    • Total duration in milliseconds
    • Parameter data
    • User defined output
    • Exception stack trace

    The report for the test suite should at a minimum contain the following information:

    • Test suite name
    • Test suite parameters
    • Fully qualified test class names
    • Tags or groups
    • Total passed
    • Total failed
    • Total skipped
    • Total errors
    • Total duration in milliseconds
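
    As a sketch of how a runner might persist a subset of these fields, the following builds a small XML report with stdlib Python; the element and attribute names are illustrative assumptions, not any real report schema:

```python
import xml.etree.ElementTree as ET

def suite_report(suite_name, results):
    """Serialize per-method results and suite totals to an XML string."""
    root = ET.Element("suite", name=suite_name)
    totals = {"passed": 0, "failed": 0, "skipped": 0}
    for r in results:
        # One element per test method: class, method, status, duration.
        ET.SubElement(root, "test",
                      {"class": r["class"], "method": r["method"],
                       "status": r["status"],
                       "duration": str(r["end"] - r["start"])})
        totals[r["status"]] += 1
    for status, count in totals.items():
        root.set("total_" + status, str(count))
    return ET.tostring(root, encoding="unicode")

xml = suite_report("smoke", [
    {"class": "tests.LoginTest", "method": "test_login",
     "status": "passed", "start": 0, "end": 120},
])
```

    An HTML report and CI dashboard plugins would be rendered from the same underlying data.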