Testing in Agile
Stefan, 1 February 2010
Testing within an agile project seems to be an issue for some people. How can a team keep up with testing while it delivers every second week? Instead of describing how testing should be done in an agile project in general (many have written about that already), I'd like to describe how we do testing in our current project. Many projects that work in iterations have actually driven test teams crazy because the test team could not keep up with the delivered product increments anymore. One of my past projects almost ran against the wall because we delivered too fast and the test team escalated to their management and blamed us for being too fast…
The project setup
Before we go into testing, it should be understood how the project actually delivers and what quality is to be expected. The current project runs biweekly sprints. With every sprint we deliver a “potentially shippable product increment”. It is therefore important to define what we mean by potentially shippable. To us it means that the development team tries as hard as possible to produce as few defects as possible, hence to deliver good quality.
Quality-driven development
We therefore have established the following quality actions:
- Each developer is asked to write unit tests for the service layer and for any code that contains non-trivial logic or algorithms (see the first sketch after this list).
- Each developer who writes a new screen or form is required to write a new frontend smoke test that can be run automatically (see the second sketch after this list).
- Each user story that is implemented must be reviewed by at least one developer who was not involved in that user story. Issues found in the review must be resolved before the iteration is completed, so reviews and the tasks that come out of them have to be done in time to finish the user story as fast as possible.
- Tasks get the status “closed” when done. User stories only get the status “resolved”, as they can only be closed by a tester (see below).
- Known issues that were found after a sprint has been delivered and that have an impact on the quality of the software have to be fixed in the next iteration. Therefore we do not fully book the developers with user stories but leave some capacity for bug fixing. The decision which issues have an impact and when they have to be fixed is made by the product owner right at the start of the sprint.
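To illustrate the first rule, here is a minimal sketch of such a service-layer unit test in JUnit 4. The DiscountService class, its thresholds, and the percentages are invented for the example and are not taken from our actual project:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class DiscountServiceTest {

    // Hypothetical service-layer class with non-trivial logic,
    // defined inline so the sketch is self-contained.
    static class DiscountService {
        double discountFor(double orderTotal) {
            if (orderTotal >= 1000) return 0.10; // 10% from 1000 upwards
            if (orderTotal >= 500)  return 0.05; // 5% from 500 upwards
            return 0.0;                          // no discount below 500
        }
    }

    private final DiscountService service = new DiscountService();

    @Test
    public void noDiscountBelowFiveHundred() {
        assertEquals(0.0, service.discountFor(499.99), 0.0001);
    }

    @Test
    public void fivePercentFromFiveHundred() {
        assertEquals(0.05, service.discountFor(500.00), 0.0001);
    }

    @Test
    public void tenPercentFromOneThousand() {
        assertEquals(0.10, service.discountFor(1000.00), 0.0001);
    }
}
```

Note that the tests pin down exactly the boundary values of the logic, which is where defects in this kind of code typically hide.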
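For the second rule, here is a sketch of what an automated frontend smoke test could look like, using Selenium WebDriver as one possible tool. The URL, the element ids, and the “Welcome” marker are hypothetical placeholders:

```java
import static org.junit.Assert.assertTrue;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class LoginScreenSmokeTest {

    private WebDriver driver;

    @Before
    public void openBrowser() {
        driver = new FirefoxDriver();
    }

    @Test
    public void loginScreenRendersAndNavigates() {
        // URL and element ids are placeholders for the real application.
        driver.get("http://localhost:8080/app/login");
        driver.findElement(By.id("username")).sendKeys("smoketest");
        driver.findElement(By.id("password")).sendKeys("secret");
        driver.findElement(By.id("loginButton")).click();
        // Smoke check only: the screen comes up and reacts,
        // no deep functional assertions.
        assertTrue(driver.getPageSource().contains("Welcome"));
    }

    @After
    public void closeBrowser() {
        driver.quit();
    }
}
```

The point of a smoke test is deliberately shallow coverage: it only asserts that the screen renders and reacts, which keeps it cheap enough to run automatically on every build.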
Testing during sprints
All these actions and rules increase the quality of the delivery but do not guarantee error-free software. Therefore the team has decided to spend some capacity on rigorous testing of the software before it is actually given to the client for usage. Testing is done in parallel with development: whenever a user story is resolved, the person currently in charge of testing takes the latest build and checks the user story as a whole. If an issue is found, the user story is reopened and given back to the developer who implemented it; if everything is okay, the story is closed. Because the tester continuously monitors whether new user stories have been resolved, only very little testing is left for the very end of a sprint.
Our sprint always ends on Tuesday. The team has decided to finish the last user stories by Monday evening, leaving another day to assure the quality of the delivery. This also typically gives the tester enough time to test all remaining user stories. Issues found during that test are solved right away by the developers. If the developers run out of work, they can either fix remaining issues from previous sprints (all of which have been prioritized) or work on FIXMEs and TODOs.
All in all, this already delivers quite good quality in a complex project. However, there is still a major issue: even if all user stories are tested with care, no one can make sure they will not have side effects. To reduce side effects that are easily overlooked, we have established automated frontend smoke tests (which you can only do efficiently to a certain degree). Additionally, as all stories are demoed, more issues typically come up during the demo because more eyes examine the software!
Testing sub-releases is the quality gate for production
Also, normally not all sprints are actually delivered into production. You knew there was a catch, didn't you? Remember, the sprints are potentially shippable product increments. We could deliver, as all user stories are done and tested, but we don't, because we want to raise the quality bar. Therefore we group sprints into sub-releases. Each sub-release must be thoroughly tested. While iterations are tested by user stories, sub-releases are tested by use cases, which have their related test cases. Every time new user stories are created, they are either linked to related existing use cases or new use cases are created, followed by adapting the related test cases. Only after this sub-release test, which is based on use cases and therefore takes a different view on the product, has been finished is the product allowed to be delivered into production.