Recently I’ve been working with an agile team that has been facing a couple of challenges with QA. The challenge is not with the quality of the QA work itself, which is generally of a good standard, but with resourcing the volume of QA work.
- First, the developers are building stuff quicker than the QA team members can test it. In one sense that’s good news – the team is chugging through the backlog. The bad news is that this leaves lots of stories in limbo awaiting QA attention, and as we know, having too many stories in a WIP state is really bad for output quality.
- Second, the budget for QA staff has been constrained. This happens sometimes, and there’s no point agonising about it. The team has to work within the available constraints.
What was interesting about this situation is that the agile team had never really faced either of these situations before, or perhaps had never recognised them. To my mind this was very much a Hamlet “to be, or not to be” moment for the team. Much like Shakespeare’s character, who has to choose between the pains and anxieties of life and suicide, the team had to make some choices about how best to address both of these QA challenges. A software project team that ignores QA priorities is writing a project’s suicide note.
The two issues are interrelated. I’ve yet to see an agile project where QA team members run out of things to do. Equally, it is extremely rare to find a project where the QA role can continue without reference to deadlines. Whether it’s a sprint or a release deadline, there is going to be some line in the sand where QA are expected to complete their job.
The first issue, how to deal with a build-up of QA work, has to be resolved by the team. Team members who are not in a notional QA role need to step up and volunteer to help with QA assignments. This can run up against job and skill demarcation barriers, where people either just don’t want to do QA work, or feel they lack the skills or insight to do a good job.
- For the “won’t” objection the scrum master will have to remind the individual how crucial the team role is to agile methods. I also like to educate people about how Google go about QA work. I’m normally reluctant to use Google-based examples, as their size and financial power usually makes comparisons unrealistic. Yet in this case there’s a seed of organisational truth that applies: Google employ very few QA specialists, and instead use software developers who shift between a “developers in test” role and mainstream functional development. The rationale is that QA work is now as technically challenging as mainstream development, so why employ people who can’t do both? More generally, if you are in an agile product team then each team member has to adopt a mindset that accepts responsibility for completing team goals, and doing whatever is necessary irrespective of their notional day-job title.
- The “can’t” objection can be legitimate, as QA specialists generally have a more insightful mindset about testing scope than the average ‘happy path’ software developer. I’ve found that the best method for addressing this situation is to have a QA specialist specify in advance all the positive and negative test cases that need to be covered by the QA assignment. That way any non-QA person delegated to help with the assignment doesn’t have to worry about doing the right thing, as they can simply pick up one or more of the already identified test cases.
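To make the hand-off concrete, here is a minimal sketch of what a pre-specified test case checklist might look like in code. The story, case descriptions, and the `pick_up` helper are all hypothetical illustrations, not anything the team above actually used:

```python
# Hypothetical sketch: a QA specialist pre-specifies positive and negative
# test cases for a story, so any team member can claim one and execute it.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TestCase:
    description: str
    kind: str                    # "positive" or "negative"
    owner: Optional[str] = None  # unassigned until a volunteer picks it up

# Cases specified up front by the QA specialist (illustrative only)
login_story_cases = [
    TestCase("valid username and password logs the user in", "positive"),
    TestCase("wrong password shows an error and no session", "negative"),
    TestCase("locked account is rejected with a clear message", "negative"),
]

def pick_up(cases, description, volunteer):
    """A non-QA volunteer claims an unassigned, pre-specified test case."""
    for case in cases:
        if case.description == description and case.owner is None:
            case.owner = volunteer
            return case
    return None

picked = pick_up(login_story_cases,
                 "wrong password shows an error and no session", "dev-ana")
```

The point of the structure is simply that the thinking (what to test) is separated from the doing (executing the case), so the volunteer inherits the QA specialist’s insight.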
The second issue, relating to an insufficient number of QA staff, requires that the product owner and scrum master collaborate to build a top-down picture of how the available QA time is going to be spent. This is classic time-boxing work, where a conscious up-front decision is made to allocate QA resource across different areas according to business value or technical risk. For example, if a sprint has two main topics, then the available QA days need to be split over those two topics.
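The split itself is simple proportional arithmetic. A minimal sketch, where the topics, weights, and the ten-day budget are made-up numbers for illustration:

```python
# Minimal sketch of time-boxing QA days across sprint topics.
# The topics, weights and 10-day budget below are illustrative assumptions.
def allocate_qa_days(total_days, weights):
    """Split a QA time box across topics in proportion to value/risk weights."""
    total_weight = sum(weights.values())
    return {topic: round(total_days * w / total_weight, 1)
            for topic, w in weights.items()}

# Two sprint topics, weighted by business value / tech risk
allocation = allocate_qa_days(10, {"payments": 3, "reporting": 2})
# "payments" receives 6.0 days, "reporting" receives 4.0 days
```

The value of writing the split down is less the arithmetic than the conscious, visible decision it forces.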
To support this situation the QA staff will need to do a little more work in the sprint planning phase to ensure that all parties are aware of how the time-boxed days are going to be spent. During planning meetings, as each story is being reviewed, the team should make a conscious effort to record the test cases required to validate the story outputs. This can be a simple list of the positive and negative test cases required (you don’t need to go into full GIVEN-WHEN-THEN detail at this stage). There are side benefits to this process, as I’ve noticed that developers often gain a deeper insight into the problem they are looking to address when the test cases are laid out for all to see.
When the user story review is completed during planning, the team will be in possession of the full list of test cases associated with each topic. They can then review with the product owner how to prioritise the user stories and test cases, and assess how many can be completed within the available time box. This step can also help knock out redundant test cases duplicated across different user stories. Any test cases that cannot be addressed should be explicitly excluded from the user story scope. Personally, I like to add new user stories to the backlog that capture these excluded test cases.
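The review step described above – de-duplicate, prioritise, cut at the time box, and carry the remainder to the backlog – can be sketched as a small routine. The case names, priorities, and day estimates here are hypothetical:

```python
# Illustrative sketch of the planning review: de-duplicate test cases shared
# between stories, order by priority, and cut the list at the time box.
def plan_test_cases(cases, available_days):
    """cases: list of (description, priority, estimated_days), priority 1 = top.
    Returns (in_scope, excluded) after de-duplication and prioritisation."""
    seen, unique = set(), []
    for desc, priority, days in cases:
        if desc not in seen:              # drop duplicates across stories
            seen.add(desc)
            unique.append((desc, priority, days))
    unique.sort(key=lambda c: c[1])       # highest priority first

    in_scope, excluded, used = [], [], 0.0
    for desc, priority, days in unique:
        if used + days <= available_days:
            in_scope.append(desc)
            used += days
        else:
            excluded.append(desc)         # candidates for new backlog stories
    return in_scope, excluded

cases = [
    ("happy-path checkout", 1, 2.0),
    ("invalid card rejected", 2, 1.0),
    ("happy-path checkout", 1, 2.0),      # duplicated in another story
    ("timeout retries gracefully", 3, 2.5),
]
in_scope, excluded = plan_test_cases(cases, available_days=3.0)
```

Whatever form the team uses – a spreadsheet works just as well – the output is the same: an agreed in-scope list, and an explicit record of what was cut.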
Product owners and scrum masters need to be vigilant when these ‘Hamlet’ QA scenarios arise. Most teams never have enough QA resource, and WIP build-up is a recurring problem. To avoid these metaphorical Hamlet moments, address the issues at the release and sprint planning stage, as this gives the team a clear picture of the need for cross-functional contributions, and keeps stakeholders fully informed about quality assurance priorities.