One of the key principles underlying agile methods is the concept of a ‘releasable product’ at the end of each sprint.  The basic working assumption is that the product owner and sprint team agree the scope for a sprint, with the team then undertaking the work to ensure that the quality of the outputs meets the product owner’s expectations.  So far, so simple.

So why do so many agile teams struggle with this basic principle, where user stories are never truly finished or, more commonly, not enough time and energy is spent validating quality?  If this situation is allowed to continue for a number of sprints, the team ends up with an ever-increasing accumulation of half-finished stories and untested features.  This is what I like to term ‘QA overhang’.

I’ve adapted the term ‘QA overhang’ from the phrase ‘debt overhang’ used in financial markets, defined in Investopedia as:

“Debt overhang is a debt burden that is so large that an entity cannot take on additional debt to finance future projects, even those that are profitable enough to enable it to reduce its indebtedness over time. Debt overhang serves to dissuade current investment, since all earnings from new projects would only go to existing debt holders, leaving little incentive for the entity to attempt to dig itself out of the hole. In the context of sovereign governments, the term refers to a situation where the debt stock of a nation exceeds its future capacity to repay it.”

Basically, any organisation that has a significant debt overhang ends up in a vicious downward cycle where its sole focus becomes servicing debt, to the point where its future is mortgaged to pay for past sins.  That same negative vortex can affect agile teams, where a ‘QA overhang’ starts to drag down forward momentum because stakeholders begin to complain about incomplete features or poor quality.  This shows up as declining stakeholder support and/or an avalanche of defect reports back to the team.

From a strategic viewpoint QA overhang is really bad for two reasons:

  • It gives a false impression of project progress
  • It’s a pernicious factor that eats into team and organisational morale.  Everybody knows that quality is being compromised, but people are reluctant to break ranks.

Once an organisation realises that an agile team has a QA overhang, the solution can be inferred from another paragraph on the Investopedia page about how to resolve debt overhang:

“Eventually, the only way out of a debt overhang is either through forgiveness of part or most of the debt by creditors, through bankruptcy (for a company) or debt default by a nation.”

‘Bankruptcy’ for an agile project means anything from project suspension through to outright abandonment.  The organisation’s project governance process will be duty bound to halt investment in the project until the causal factors are established.  Fortunately, ‘bankruptcy’ situations are rare for agile teams, and usually ‘forgiveness’ is the order of the day.  That typically translates into two practical forms of action:

  • Injecting one or more ‘stabilisation sprints’ into the plan, where these interludes are used to clean up the QA overhang.
  • Improving the team’s planning and work practices to avoid future QA overhang.

There are a number of tactics that scrum masters, product owners and the agile team need to take on board to avoid QA overhang:

a). Recognise real developer work capacity for a sprint.

Too often organisations drastically overestimate what a team can accomplish in a sprint.  Take a two-week sprint interval: a naive view of planning might assume that this gives 10 days of capacity for each software developer in the team.  Yet the developer has to attend sprint planning on sprint day 1, and on the final day attend the ‘sprint review’ and ‘retrospective’ sessions, which reduces their real available time to 8 days.  Even that assumes a developer will be working on new stuff right up to the wire, when we know that anything worked on the day before the end-of-sprint review is very unlikely to get proper time for quality assurance.  From this you can infer that a developer doesn’t really have 8 days; it’s something less, to allow for the QA aspect of the last-tackled user story to be addressed.

That ‘something less’ is highly dependent on the complexity of the QA task, but in general my view is that if a team wants to complete all the stories in a sprint, then perhaps only half to two thirds of a software developer’s available time can go into building user story outputs.  That lower percentage is a direct result of using a time-boxed mechanism that prioritises regular, high-quality outputs from the team over individual productivity.
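To make the arithmetic concrete, here is a minimal sketch of that capacity calculation.  The numbers (two-week sprint, two days of ceremonies, a third of the remainder reserved for QA) are illustrative assumptions, not prescriptions.

```python
# Rough sprint capacity sketch - the figures are illustrative assumptions only.
SPRINT_DAYS = 10        # two-week sprint
CEREMONY_DAYS = 2       # planning on day 1, review + retrospective on the last day
QA_FRACTION = 1 / 3     # share of remaining time assumed to go on QA of the stories built

available_days = SPRINT_DAYS - CEREMONY_DAYS        # nominal dev time after ceremonies
build_days = available_days * (1 - QA_FRACTION)     # time actually spent building story outputs

print(f"Nominal capacity per developer : {SPRINT_DAYS} days")
print(f"After sprint ceremonies        : {available_days} days")
print(f"Realistic build capacity       : {build_days:.1f} days "
      f"({build_days / SPRINT_DAYS:.0%} of the naive figure)")
```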

b). Build multi-function teams where QA and software development work in an interactive partnership

A key business driver for agile methods is the early delivery of business value.  Achieving that means shortening and optimising the production process, and for that to happen it means ditching the old waterfall approach of dev followed by QA in favour of a more immediate and interactive process where software developers and QA staff work as part of the same team.

Sometimes circumstances just don’t allow that to happen, and the QA team is kept entirely separate.  In this case the developers tend to work on new stuff in one sprint, with the QA work undertaken in the following sprint.  But, life being what it is, this often generates a lot of defect ‘ping-pong’ between the two teams, so in reality features often take a third sprint to bed down.  Assuming a two-week sprint, this leads to a situation where the business thinks it is operating on a two-week delivery iteration, but with separated functional teams that in essence translates into a six-week delivery interval (three sprints of two weeks each).  That’s why embedding QA into agile teams is now viewed as the default best practice.

c).  It’s a team role to prove quality.

Few agile teams have enough dedicated QA resource.  Having QA specialists in the team is good because they bring a new perspective and skill set to the problem of defining quality.  However, all team members have a role in shaping quality and performing testing.  Rigid “I’m a software developer, I don’t do testing work” attitudes are incompatible with agile team methods.  For example, if a developer has finished their own dev task and there are a bunch of user stories awaiting testing, then the team will get more value from that developer picking up a testing task.  It’s all about getting a user story ‘across the line’.

d). Don’t expect quality expectations to be laid out explicitly

There is no such thing as perfect information, and it’s not uncommon for a user story to be assigned to a sprint with requirements that are either sketchy or half formed.  This leaves the QA role in a quandary about how to translate the story into a set of test cases.  If good agile sprint planning practices are in place then the team has the option of rejecting a user story at the planning/scope-setting stage because its purpose and details are unclear.  However, more often than not these gaps only surface once the team starts to dig down into the weeds of how the aspirations of the user story translate into a practical implementation plan.  In the past I’ve come across situations where QA staff used to a more formal model get stuck in a mode where they can’t build the test cases until they have the requirements pinned down.  This is too rigid, and a change in mindset is required to pro-actively engage with the product owner to ratify the emerging situation.  This puts the emphasis on the team to establish up front what quality expectations are in play, and what assurance methods will be used to verify output quality.  The watchword here is: ‘no quality surprises at the end-of-sprint review’.
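One lightweight way to make those expectations explicit early is to capture them as named but not-yet-implemented test cases, with the open questions flagged for the product owner.  The sketch below is hypothetical: the ‘password reset’ story, the test names and the open questions are illustrative examples, not taken from any real backlog.

```python
import pytest

# Hypothetical acceptance-test skeleton for a half-formed "password reset" story.
# Each test names a quality expectation; unresolved points are skipped with a
# message so they surface in every test report until the product owner confirms them.

def test_reset_email_sent_to_registered_address():
    """Registered users receive a reset link at their account email address."""
    pytest.skip("awaiting implementation - expectation agreed at sprint planning")

def test_reset_link_expires():
    """Reset links stop working after some period of time."""
    pytest.skip("OPEN QUESTION for product owner: how long should the link stay valid?")

def test_unknown_email_gives_no_account_hint():
    """Requests for unknown emails must not reveal whether an account exists."""
    pytest.skip("OPEN QUESTION for product owner: is this security behaviour in scope?")
```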

e). Automate as much testing as possible – of the right thing!

Test automation is a major enabler of team agility.  Without it the regression burden is just too expensive and time consuming to allow frequent sprint releases.  However, test automation is expensive to build and comes with its own maintenance burden.  So it is important to focus on building automated tests in the right place for the right value, namely:

  • Prioritise test automation for high-value, high-volume, and high-risk use cases.
  • Focus test automation effort on the lower levels of the tech stack (e.g. services, persistence, integration, etc.), rather than the user interface.
  • Building automated tests for a UI is superficially attractive but can be fragile and expensive to maintain, so instead focus on the ‘C’ controller classes of an MVC UI design (see the sketch after this list).
  • Ensure that the automated test suite includes end-to-end tests for the core business scenarios that the product supports; these should include the UI where appropriate to confirm that all basic operations work.
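As a concrete illustration of the controller-level point, here is a minimal sketch using Flask’s built-in test client; the /orders endpoint, its JSON contract and the assertions are hypothetical stand-ins for whatever controller logic your product actually exposes.

```python
# Minimal controller-level test sketch: exercises the controller directly,
# with no browser and no rendered UI. The endpoint and payload are hypothetical.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/orders", methods=["POST"])
def create_order():
    payload = request.get_json(silent=True) or {}
    if "item" not in payload:
        return jsonify({"error": "item is required"}), 400
    return jsonify({"status": "created", "item": payload["item"]}), 201

def test_create_order_validates_input():
    client = app.test_client()
    assert client.post("/orders", json={}).status_code == 400      # missing field rejected
    ok = client.post("/orders", json={"item": "widget"})
    assert ok.status_code == 201
    assert ok.get_json()["status"] == "created"
```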

Regression testing is the touchstone for building confidence in agile team outputs. Ignore it at your peril.

f).  Make a point of showing QA data points at the ‘end of sprint’ review

At the end of each sprint the team will stage some form of ‘show and tell’ session with the product owner and other stakeholders.  In this session I think it is really important for the team to recap the quality expectations set for each user story, and to explain and show how quality has been assured.  Depending on the user story’s business value, that task might take up more time than showing off the user story functionality.  This is what I like to term a ‘quality proving’ approach, where the team is confident enough to explain the test cases and show the test outcomes.  This doesn’t mean running test suites in the meeting, but you probably do want to have the test suite outcomes available for review.
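One low-effort way to have those outcomes to hand is to roll the sprint’s final test run up into a per-story summary.  The sketch below assumes a simple list of results tagged with hypothetical story IDs, rather than any particular test framework’s report format.

```python
from collections import defaultdict

# Hypothetical per-story test results gathered from the sprint's final test run.
results = [
    ("STORY-101", "passed"), ("STORY-101", "passed"),
    ("STORY-102", "passed"), ("STORY-102", "failed"),
    ("STORY-103", "skipped"),
]

summary = defaultdict(lambda: defaultdict(int))
for story, outcome in results:
    summary[story][outcome] += 1

print("QA summary for sprint review")
for story in sorted(summary):
    counts = summary[story]
    total = sum(counts.values())
    print(f"  {story}: {counts['passed']}/{total} passed, "
          f"{counts['failed']} failed, {counts['skipped']} skipped")
```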

In summary…

For an agile team to respect the key principle of delivering business value with every sprint, it needs a more pro-active view of how it addresses quality and a more practical view of team capacity and shared roles.  With a properly run agile process, and a determination to avoid ‘QA overhang’, the notional progress rate might appear slower, but the payback is that the outputs generated by the agile team carry a much higher degree of confidence in their quality.