Thursday, October 25, 2007

DSL in Automated Acceptance Testing

I found a very interesting post on the Google Testing Blog
concerning experiences in Automated Acceptance Testing, which we also call end2end testing. It was a very interesting read, because we share more or less the same experience. I had very long discussions with my colleague from the "testing department" about how much automated acceptance tests are worth.
He was writing his Master's thesis on this topic, using the Mercury WinRunner tool to automate testing of Java Swing GUIs (NetBeans based). He had very interesting experiences, and the thesis was very well done and interesting to read (I can't find it right now, maybe I'll post a link to it later). In the end he arrived at the heuristic that, for his project, it took 4 times longer to automate and maintain an automated test than to write and execute it once manually; therefore the rule of thumb for his project was: if a test will be executed fewer than 4 times, it makes no sense to automate it. Basically it means that if a waterfall-model project has one release per year and a feature will not survive more than 4 releases, then it is not worth automating its tests.

Such statements and heuristics are highly subjective. There are some test cases that take 10 to 20 times longer to automate than to execute manually (see the example below).
I completely agree with Markus about the maintenance costs of the tests and also about false failures; it is a very big problem that may also affect our decisions about where to automate and where not.

Also, as Markus very rightly mentioned: "[test automation] turned into art of engineering ... we come up with best-practices". If we apply the principle of treating test code as production code, then we decrease the significance of the problems with automated testing and hence increase its value.

I've played with test automation tools a lot, and for our most recent project we needed to speed up the development process, so it was decided to create automated acceptance tests. Fortunately, the application we developed was web-based, so we had a wide palette of tools at hand (with an ordinary GUI it is much more complex nowadays). Since we hadn't used any AJAX and/or JavaScript in the application, we chose HttpUnit and its derivatives (JWebUnit); a sketch of what such an early, unrefactored test looked like follows the list below. After creating the first tests we simply applied the same practices we use for our production code to evolve them: refactor, refactor and refactor. After some two or more evolution steps we came to the conclusion that:
  • We had refactored the common parts of the tests into a kind of testing framework.
  • We might need to start writing unit tests for the automated tests :)
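To make this concrete, here is a minimal sketch of what one of those early, unrefactored tests might have looked like, assuming the classic JWebUnit 1.x API (WebTestCase, beginAt, clickLinkWithText, assertTextPresent); the base URL, link labels and report names are purely hypothetical.

import net.sourceforge.jwebunit.WebTestCase;

public class ReportTreeRawTest extends WebTestCase {

    protected void setUp() throws Exception {
        super.setUp();
        // Hypothetical base URL of the deployed application under test.
        getTestContext().setBaseUrl("http://localhost:8080/reporting");
    }

    public void testQuarterlySalesReportIsReachable() {
        beginAt("/index.html");                       // open the start page
        clickLinkWithText("Sales");                   // expand the "Sales" branch of the report tree
        clickLinkWithText("Quarterly Sales Report");  // open a leaf report
        assertTextPresent("Quarterly Sales Report");  // crude check that the report page rendered
    }
}

Every test written in this style repeats the same navigation and page-flow details, which is exactly what pushed us towards extracting a framework.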
The most interesting part is of course our testing framework, because it provides an abstraction layer over the application UI, and since "The behavior of the program should be internally and externally consistent" we essentially recreate the application domain model in our automated testing framework. For example, our application has a report tree, so in our testing framework we have a generic tree interface that represents the report tree in the application UI; we can inspect it via the methods of this "UI tree", which are essentially the same methods as those of the tree used inside the application internally.
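As an illustration, the "UI tree" abstraction could look roughly like the sketch below; all names here are hypothetical, and the JWebUnit-based implementation behind the interface is omitted.

import java.util.List;

// Hypothetical testing-framework interface mirroring the tree the application uses internally.
public interface UiTree {

    UiTreeNode root();

    // Convenience lookup, e.g. find("Sales", "Quarterly Sales Report").
    UiTreeNode find(String... pathLabels);

    interface UiTreeNode {
        String label();
        List<UiTreeNode> children();  // parsed from the rendered HTML behind the scenes
        void select();                // follows the node's link in the page
    }
}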

With the testing framework we've got some very nice advantages:
  • Tests are more stable, since the testing framework abstracts away the details of the UI that are likely to change. This means that if we change the tree representation, we only need to change the framework and all tests remain unaffected.
  • Tests have become much smaller and more concise. Since tests now operate on a very clean application domain (see the discussion about DSL below), even people with little test-automation experience can write them and, most importantly, understand and modify existing test cases, which is a prerequisite for collective code ownership.
  • The framework includes shared fixtures that make it even simpler to create tests without looking into the details of the application's page flow.
What was also very interesting to see is that one essential aspect of the testing framework is that it is application-domain specific: automated test cases operate on domain terms and domain objects, just like any other test cases written for manual execution. This of course makes the framework non-reusable across applications, but it enables us to think about a simple DSL to simplify the creation and modification of test cases.
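For comparison with the raw JWebUnit sketch above, a test written against the framework might look roughly like this; ReportingFixture, openReportTree and the domain-level assertion are hypothetical names standing in for the shared fixtures and abstractions described above.

// Hypothetical framework-level test: navigation and page-flow details live in the fixture.
public class QuarterlySalesReportTest extends ReportingFixture {

    public void testQuarterlySalesReportShowsTotals() {
        UiTree reportTree = openReportTree();                         // shared fixture: log in and navigate to the tree
        reportTree.find("Sales", "Quarterly Sales Report").select();  // pure domain terms, no URLs or links
        assertReportContainsColumn("Total");                          // domain-level assertion provided by the framework
    }
}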

In the end it was a big success: the application we created needed to work on 5 different platforms instead of the planned 1, so by reducing the regression testing effort from 2 man-months to 30 minutes we benefited a lot.

Now we are thinking about expanding it in the following directions:
  • Try to create an external/internal DSL, something JRuby- or Groovy-based (a rough idea of what an internal DSL could look like is sketched after this list).
  • So far we haven't had a big number of false failures; however, if we do get them, we could test the testing framework itself, i.e. before executing the tests, check that the testing framework still matches the interface of the application.
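To give a rough idea of the internal-DSL direction, here is a hypothetical fluent sketch in plain Java on top of the framework; all names are made up, and a JRuby or Groovy version would read even closer to plain language.

// Hypothetical internal DSL: chained calls read almost like a manual test case.
public class SalesDrillDownTest extends ReportingFixture {

    public void testDrillDownIntoQuarterlySales() {
        inReportTree()
            .open("Sales")
            .open("Quarterly Sales Report")
            .expectColumn("Total")
            .expectRow("Q3 2007");
    }
}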
