Looking closely – To review or not to review

16 December 2009


Lately in our project we discovered a bit of a quality issue. Actually it was more of a luxury problem: we thought the quality of our current release was pretty good already, but we had been better before. While we used to have only three to four issues after a sprint, the number of tickets has gone up into the twenties and thirties.

This was therefore brought up in the retrospective, and the team thought about what the cause was and how to go about it. One cause was that the software has simply grown, hence the likelihood of failures has become higher. Also, the mix of experienced and rather inexperienced developers has changed: we are a bigger team now and have more inexperienced developers (especially within this application). The testing guys were therefore somewhat annoyed by the sudden quality loss – on the other hand you could say they had been spoiled by the last sprints. So be it – but the team still wanted to do something about it.

One of the proposals was to do more code reviews before a story leaves development. A lot of discussion came up about whether the developers would just be duplicating the testers’ work, while the testers said their task wasn’t to test quality into the software – they were there for quality assurance. Finally the team decided to set up some rules for how a review should be done and how much time and work should be put into it. The following is an excerpt of the team’s ideas and rules.

  • What exactly should I review?
    • Do we have unit tests and frontend unit tests?
    • Is the user story implemented as written?
    • Is the code documented?
  • Frontend
    • Can all forms/screens be opened?
    • Are the validations implemented?
    • Are all menus/context menus done?
    • Are shortcuts applied?
    • ….
  • Backend
    • Did we use the right architecture / software design pattern?
    • Is the persistence correctly implemented?
  • How should I document my review?
    • Every review should result in a comment to the user story in our tracking tool, which gives the testing people and the product owner feedback on what QA we did in development (a sketch of such a comment follows below this list)
    • For every review we add an estimated task to the user story
    • If rework has to be done we add another task
    • Note whether changes may have an effect on other, not so obvious areas that should additionally be tested
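
To make the documentation rule above more concrete, here is a minimal sketch of what such a review comment could look like. It is only an illustration: the story ID, reviewer name and wording are made up, and our real template lives in the wiki and in the tracking tool itself.

```python
# Hypothetical sketch of a review comment for the tracking tool.
# Story ID, names and wording are invented for illustration only.

def review_comment(story_id, reviewer, findings, affected_areas):
    """Build the comment text attached to the reviewed user story."""
    lines = [f"Code review for {story_id} by {reviewer}", "Findings:"]
    lines += [f"  - {item}" for item in findings] or ["  - none"]
    lines.append("Not-so-obvious areas to test additionally:")
    lines += [f"  - {area}" for area in affected_areas] or ["  - none"]
    return "\n".join(lines)

print(review_comment(
    "US-123",
    "Eric",
    findings=["frontend unit test for the search form is missing"],
    affected_areas=["context menu of the result list"],
))
```

Even a review without findings should say so explicitly – that way the testers and the product owner can see the review actually happened.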

The above list should only give you an idea of how we moved forward. Those ideas were of course written to the wiki (thanks to Eric, who collected and wrote them down) and serve as a guideline for the team.

How long?

One question we discussed was how much time each developer should invest, as the developers felt they shouldn’t substitute for what the testers do. First of all, testers test differently: they have learned not only to look at the user story itself but also at other areas surrounding it. So developers will never make testers obsolete.

We then decided to relate the effort invested in the review to the size of the user story, i.e. to its story points. We defined five minutes per story point, which of course differs from project to project, as points are only relative values, and we will surely adapt that number eventually too. Just make sure you put that time into your original estimate – you would typically have needed that time (or even more) anyway for unexpected rework.
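
As a small worked example of that rule of thumb: the five minutes per story point is just our current, project-specific value (and, as said, will probably be adapted), so treat the sketch below as an assumption rather than a recommendation.

```python
# Rule of thumb described above: roughly five minutes of review
# effort per story point. The factor is project-specific.
MINUTES_PER_STORY_POINT = 5

def review_budget(story_points):
    """Review effort in minutes to plan into the story's original estimate."""
    return story_points * MINUTES_PER_STORY_POINT

print(review_budget(8))  # an 8-point story -> about 40 minutes of review
```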

Experience

We quickly noticed that every review resulted in rework. Most of the time the rework involved fixes in the software because some part of the required functionality had been missed. Hence, we would have had to come back to that user story anyway.

Another tip: don’t procrastinate on reviews. Do them just in time! Try to review no more than a few hours after the user story has been finished.

Also make sure that the developer/reviewer pairings change regularly, as it turns out every pair tends to develop its own review pattern. Some even hardly wrote any review documentation (like only “user story was reviewed”), which helped nobody. So we mixed the experienced with the inexperienced, the unmotivated with the responsibility-driven people, and so on…

Finally: the quality definitely went up again.


The binoculars in the picture are my son’s. The lenses contain some code from our project that was photoshopped into the picture. Thanks to Gerhard for the right trick to do that efficiently.
