Generation of automated functional tests

This research was the first experiment performed on the UIS web application. It focused on the generation of automated functional tests. The generation is based on a graph of the states and transitions of the tested application. UIS was used as the SUT (System Under Test).

Working procedure step by step

  1. Familiarization with the prepared graph of UIS (especially in XML format)
    1. Adding essential information into the empty XML attribute description
  2. Developing a new application with these requirements:
    1. Programming language: Java
    2. Technologies: JUnit 4 and Selenium WebDriver
    3. Use of an already prepared supporting library
    4. Inputs:
      1. Graph of the tested application (XML format)
      2. Paths through the graph, i.e. test cases (XML format)
    5. Outputs:
      1. Executable functional tests
      2. Coloured graphs of results

Oxygen

Oxygen is an application developed at the Faculty of Electrical Engineering of the Czech Technical University in Prague. It is a freely available tool written as a platform-independent application in Java. In Oxygen it is possible to prepare a graph of states and transitions for a wide variety of applications. Based on this graph, Oxygen is able to generate test cases. Test cases (test situations) can be exported in XML, CSV or JSON format; we used the XML format. Each path (test case) is represented as an array of subsequent transitions.

Graph of tested application

As a user goes through the tested application, it is possible to redraw his or her paths in Oxygen as a graph. It is a graph of the transitions (edges) and states (nodes) that exist in the tested application. Nodes represent actions, for example “login” (a click on the Login button) or “enroll subject” (a click on the Enroll button). Edges connect subsequent nodes. Every node has a description attribute; the research was based on the content of this attribute.
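
For orientation, the following is a minimal sketch of an in-memory model of such a graph; all type and field names are illustrative assumptions, not the actual supporting library:

// Minimal in-memory model of the states-and-transitions graph (Java 16+ records).
// All names are illustrative assumptions, not the project's actual code.
import java.util.List;
import java.util.Map;

record GraphNode(String id, String name, Map<String, String> description) { }

record GraphEdge(String fromNodeId, String toNodeId) { }

record Graph(List<GraphNode> nodes, List<GraphEdge> edges) { }

// A test case exported by Oxygen is a path: an ordered list of subsequent edges.
record TestCasePath(String name, List<GraphEdge> transitions) { }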

Oxygen can export the graph of states and transitions as XML, CSV, JSON or SVG. The XML format was used to generate tests and the SVG format to colour the graphs according to test results (see below).

Pic. 1: Graph of UIS with 119 nodes and 164 edges

XML representation of the graph

Node element

<node description="id=loginPage.userNameInput&#10; type=input&#10; text=username" height="40.0" id="3" limitedConnectionProbability="0.0" name="Tea Username" priority="(not defined)" style="STYLE_ACTIVITY_NODE" width="80.0" xpos="280.0" ypos="20.0" />
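
Assuming the export looks like the element above, the node elements can be read with the standard Java DOM API. A minimal sketch (GraphNode is the illustrative record from the previous section; DescriptionParser is sketched in the next subsection):

import java.io.File;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Minimal sketch: reading the node elements of the exported graph.
// Class and method names are illustrative, not the project's actual code.
public class GraphReader {

    public static List<GraphNode> readNodes(File graphXml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(graphXml);
        NodeList nodes = doc.getElementsByTagName("node");
        List<GraphNode> result = new ArrayList<>();
        for (int i = 0; i < nodes.getLength(); i++) {
            Element e = (Element) nodes.item(i);
            // The DOM decodes the &#10; entities back to real newlines.
            result.add(new GraphNode(e.getAttribute("id"),
                    e.getAttribute("name"),
                    DescriptionParser.parse(e.getAttribute("description"))));
        }
        return result;
    }
}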

Attribute description

The description attribute can be found in every node and edge element of the XML file containing a graph of the tested application. However, the information has been added only to node elements, as it is not necessary for edge elements. The description attribute is universal: the types of items that can be written to it fall into two groups, universal items (for example buttonId) and items dependent on UIS (for example examDate).
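
As the node element above suggests, the attribute holds newline-separated key=value items. A minimal parsing sketch under that assumption (the class name is an illustrative choice):

import java.util.LinkedHashMap;
import java.util.Map;

// Parses a description attribute such as
// "id=loginPage.userNameInput\n type=input\n text=username"
// into {id=loginPage.userNameInput, type=input, text=username}.
// A minimal sketch; the actual supporting library may parse it differently.
public class DescriptionParser {

    public static Map<String, String> parse(String description) {
        Map<String, String> items = new LinkedHashMap<>();
        for (String line : description.split("\n")) {
            int eq = line.indexOf('=');
            if (eq > 0) {
                items.put(line.substring(0, eq).trim(),
                          line.substring(eq + 1).trim());
            }
        }
        return items;
    }
}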

Criteria of a successful test

A test is evaluated as successful only if it can pass through the whole specified path of given nodes (of the given test case); see below. Every test checks the current URL after each change. Tests also check whether a success alert is shown after an action has been executed (if such an alert exists and should be shown). Because of these criteria, the generated tests can discover only specific kinds of defects in the tested application.
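
As an illustration only, such a generated check could look roughly as follows in JUnit 4 with Selenium WebDriver; the URLs, the locator of the success alert and the class name are assumptions, not taken from the actual generated suite:

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

// Illustrative only: URLs and the alert locator are assumptions about UIS.
public class LoginStepTest {

    private WebDriver driver;

    @Before
    public void setUp() {
        driver = new FirefoxDriver();
    }

    @Test
    public void loginLeadsToExpectedUrlAndShowsSuccessAlert() {
        driver.get("http://localhost:8080/uis/login");
        // ... fill in credentials and click the Login button (elided) ...

        // Criterion 1: the current URL matches the expected URL of the next node.
        assertEquals("http://localhost:8080/uis/overview", driver.getCurrentUrl());

        // Criterion 2: a success alert is shown (when one should exist).
        assertFalse(driver.findElements(By.cssSelector(".alert-success")).isEmpty());
    }

    @After
    public void tearDown() {
        driver.quit();
    }
}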

There is a wide-ranging set of defect clones of UIS, but not every defect clone could be used, because the generated tests can discover only some special kinds of defects. For example, the defect clone C1.H0.M0.L0_U_D_01 shows an empty table of teachers when all teachers are listed, but the generated test only checks the URLs of this test case, not the actual content of the table.

Problems discovered during development

Developed application: results

The application consists of 15 Java classes (91 kB of source code; 41 classes and 241 kB including the supporting library).

The application is able to process the graph diagram and the XML file with prepared paths running through this diagram, and to generate a suite of tests (42 tests altogether).
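
The generation step itself can be pictured as a simple template expansion over the exported paths. A minimal sketch, assuming that each path becomes one JUnit 4 test class; the template, the GeneratedTestBase base class and the steps helper are hypothetical:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Minimal sketch of the generation step: one JUnit 4 test class per path.
// The template, GeneratedTestBase and the steps helper are hypothetical.
public class TestGenerator {

    public static void generate(String testName, List<String> nodeIds, Path outDir)
            throws IOException {
        StringBuilder body = new StringBuilder();
        for (String nodeId : nodeIds) {
            // Each node's description items drive one WebDriver action plus checks.
            body.append("        steps.execute(\"").append(nodeId).append("\");\n");
        }
        String source =
                "public class " + testName + " extends GeneratedTestBase {\n"
              + "    @org.junit.Test\n"
              + "    public void run() throws Exception {\n"
              + body
              + "    }\n"
              + "}\n";
        Files.writeString(outDir.resolve(testName + ".java"), source);
    }
}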

These tests cover almost all paths in the diagram (see below) and, moreover, they can reveal some of the specific failures in the tested application.

Visualization of test results

An aggregated graph is coloured according to the success/fail rate of a test (or of the whole test suite). A green path means the test was successful, i.e. the automatic test went through the whole path of the test case without any occurrence of a failure. A thicker and darker line means that several tests went through this edge (part of the path). A blue line, which appears very rarely (see the upper right corner), shows a path that has not been covered by any test.

Pic. 2: Graph of aggregated test results of the defect-free UIS clone

The previous graph of aggregated test results was composed from 42 separate test result graphs, one of which is shown below:

Pic. 3: Graph of one test result of the defect-free UIS clone

The ability to reveal a failure is shown in the next two pictures. The defect clone C1.H0.M0.L0_S_S_01 was used as the SUT. Red edges are displayed when a failure occurs in some node: all edges following this node are coloured red.

Pic. 4: Graph of one test result of the defect clone UIS

In the aggregated graph, edges are coloured green if there is at least one test case that successfully went through them.

Pic. 5: Graph of aggregated test results of the defect clone UIS
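
A minimal sketch of how this colouring could be applied to the SVG export, assuming (purely illustratively) that each edge is an SVG line element whose id equals the graph edge id and that the passing and failing edges are already known:

import java.util.Map;
import java.util.Set;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Minimal sketch of the colouring step. Assumes (illustratively) that each
// edge of the SVG export is a line element whose id equals the graph edge id.
public class SvgColourer {

    public static void colour(Document svg, Map<String, Integer> passCounts,
                              Set<String> failedEdgeIds) {
        NodeList lines = svg.getElementsByTagName("line");
        for (int i = 0; i < lines.getLength(); i++) {
            Element edge = (Element) lines.item(i);
            String id = edge.getAttribute("id");
            int passes = passCounts.getOrDefault(id, 0);
            if (failedEdgeIds.contains(id)) {
                edge.setAttribute("stroke", "red");    // edge after a failing node
            } else if (passes == 0) {
                edge.setAttribute("stroke", "blue");   // not covered by any test
            } else {
                edge.setAttribute("stroke", "green");  // at least one successful pass
                // Thicker (and, in the real graphs, darker) with more passing tests.
                edge.setAttribute("stroke-width", String.valueOf(1 + passes));
            }
        }
    }
}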

Generation of acceptance tests

This research focused on experiments with the automated generation of acceptance tests. It follows the thesis Acceptance testing in project TbUIS, in which a set of manually written acceptance tests for Robot Framework was created. The research consists of the automated generation of acceptance tests for Robot Framework and a subsequent comparison of the generated tests' results with the results of the manually written set of acceptance tests. UIS was used as the SUT (System Under Test).

Working procedure step by step

  1. Analysis of the UIS knowledge base.
    1. Especially documents such as use cases, requirements and test cases.
  2. Implementation of the application Acceptance test generator.
    1. Inputs:
      1. Requirements document.
      2. Test cases document.
      3. Generated keywords.
    2. Outputs:
      1. Executable acceptance tests.
  3. Verification of the quality of the generated acceptance tests on UIS defect clones.
  4. Comparison of the generated tests' results with the results of the manually written acceptance tests.

Analysis: TbUIS knowledge base

The TbUIS project contains a relatively wide knowledge base that includes documentation and various tests. This knowledge base was analysed and the relevant part was used for this research. Another part of the TbUIS knowledge base, the manually written acceptance tests for Robot Framework, was used for result verification.

Implementation

Picture 1 shows the context of the research. Green rectangles represent the results of this research: the created application Acceptance test generator, which generates runnable acceptance tests, and the simplified application Keyword generator, which generates keywords. The Acceptance test generator directly uses the requirements and test cases (represented by a black arrow). A gray arrow indicates the dependency of the Acceptance test generator on the support library and on the generated keywords. A big double arrow indicates that the generated runnable acceptance tests test the UIS web application.

Pic. 1: Research context

Results

The application Acceptance test generator has been created. It can generate files with the source code of acceptance tests for Robot Framework and also files with input data for these tests. The generation is based on a few input files: the requirements, the test cases and the generated keywords.
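
As an illustration of the output side only, a minimal sketch of emitting one Robot Framework test case from parsed inputs follows; the class name and the keyword list are hypothetical, not the generator's actual code:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Minimal sketch: writing one Robot Framework test case to a .robot file.
// The keyword names come from the generated keywords; everything else here
// is an illustrative assumption, not the generator's actual code.
public class RobotTestWriter {

    public static void write(Path outFile, String testName, List<String> keywords)
            throws IOException {
        StringBuilder robot = new StringBuilder();
        robot.append("*** Test Cases ***\n");
        robot.append(testName).append("\n");
        for (String keyword : keywords) {
            robot.append("    ").append(keyword).append("\n");
        }
        Files.writeString(outFile, robot.toString());
    }
}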

The generated acceptance tests were verified on the UIS web application. Verification was performed on the defect-free clone as well as on all 28 defect clones. The results of the generated tests were compared in detail with the results of the manually written acceptance tests.

Table 1 contains the counts of failed tests for each defect clone. A failed test means that an error in the UIS defect clone has been found, which is the expected behaviour. The table has two result columns: the first with the count of failed manually written acceptance tests (Manual) and the second with the count of failed generated acceptance tests (Generated). In 6 defect clones, no defect was detected by either the manual or the generated tests. In addition, the generated tests did not detect any defect in another 9 defect clones. On the other hand, for 4 defect clones there were more failed generated tests than failed manual tests. For most defect clones, the counts of failed manual and generated tests were comparable.

Conclusion

This research successfully proved that it is possible to generate acceptance tests for a nontrivial web application based on a few documentation files and with the use of a support library. It showed that this direction of automated generation of acceptance tests is feasible and that it might be possible to create a general technique usable for a wider variety of applications.