Roll back a few years and I was doing an awful lot of business travel from my UK home – mostly to the States, sometimes to India and Europe. To survive the travel grind I quickly evolved a bulletproof packing list, and over the years having that checklist to hand on the night before a trip saved me a huge amount of time and worry. The packing list did evolve over time, particularly for consumer electronics, but the one essential, ever-present item was a couple of multi-region/multi-prong power adaptor plugs. That’s because no matter how many times I visit a place, I seem to have a mental block about the type of power adaptor pins – round, thick or thin – required in a given location!
Just recently I’ve started to travel again on business, not so far this time and less frequently. As a result I’ve had to blow the metaphorical dust off my long-standing travel packing list. The contents have changed a little. For example, the most recent addition – and the bulkiest electronic item I now pack – is a multi-socket USB power charger with cables for my mobile phone, iPad, and toothbrush. Yep, toothbrush, and here’s a link for those who don’t believe me.
Coincidentally, I was thinking about how the packing list concept could be used to help teams flesh out the contents of a new product backlog. Recently I have been working with an offshore team on the development of a new service module. The project is fairly typical of initiatives associated with the “API economy”. At its heart is a service that offers a consumer the ability to enact a specific type of business transaction, and in order for that process to be enacted efficiently there are a variety of ancillary services associated with setting up master and preference data. The agile team is smart and relatively young, so the number of ‘business trips’ (i.e. projects) they have experienced is relatively low, and as a consequence their ‘packing list’ (knowledge of what things have to go into a backlog) is relatively immature.
Whilst everybody understands that customer-focused user stories represent the initial content for a product backlog, there are a host of other considerations that have to be captured as backlog content to ensure that the overall project work effort is accurately represented. This is particularly true for ‘API economy’ projects, where non-functional concerns around scaling, resilience and instrumentation add significantly to the overall project timeline.
Agile gurus like to label these non-customer-derived backlog entries ‘technical user stories’. The Scaled Agile Framework calls them ‘Enablers’: anything associated with ‘below the water line’ matters such as architecture, infrastructure and non-functional quality expectations. If an agile project team fails to include ‘enabler’ backlog content then it risks giving a false impression of the overall project effort. Historically, a lot of this background work was either not accounted for or, worse, appended to the first customer-facing user story that required it.
Returning to my packing list analogy, in the list below I have assembled a comprehensive set of ‘enabler’ topics that a team should review when considering what ‘enabler’ user stories should be represented in the product backlog:
- Team infrastructure. Anything associated with managing and supporting the team, their roles, and work assignments.
- Project reporting. Putting together whatever mechanisms are going to be used to regularly communicate project progress to stakeholders.
- DevOps CI and CD. A ‘no-brainer’ these days, but this stuff still needs to be built and tested.
- Discovery. Is there anything in the proposed solution which is completely new to the team? If so, make sure you add some ‘spike’ user stories to experiment and prove the unknowns or assumptions.
- Refactoring. If existing software components are being re-used, does everything completely fit the bill? If not, make sure that stories are added to represent the refactoring work.
- Framework/API base layers. Good software is layered, and at some point the lower layers have to be built and proved.
- Data store design and management utilities. Anything to do with building and managing the data stores being employed, like: setup scripts, backups, clear down scripts etc.
- Test data packs. This is the one subject most teams generally overlook. Having a facility to inject data into your system as a way of testing outcomes can save inordinate amounts of testing effort and time.
- System end-to-end testing. Most teams will cover user-story-focused testing, but there is normally a higher tier of business-scenario end-to-end testing that can easily be overlooked.
- Instrumentation. It’s expensive to leave this as an afterthought. If you are building services then there is a requirement to measure their efficiency and resource consumption.
- Load/performance testing. A service has to be profiled in terms of its efficiency, its resource consumption, and its ability to meet declared non-functional quality expectations.
- Chaos monkey testing. How would your product react if key elements of the architecture were pulled offline or started to respond erratically?
- Defaults. If your project includes a lot of setup processes, master data, reference data or preference data related features, then consider adding support for default options so as to avoid having to prep everything for every situation.
- Security model. Does the design cover role and user authorisation aspects, and are those roles already defined in whatever system owns identity management?
- Security testing. Should your new product be included as part of the next security and pen test run?
- Tooling. Are there any system management aspects that will require extra tooling?
- Mocking collaborators. If your project includes a significant amount of real-time collaboration with other services, it’s probably wise to build mocks of those other actors. That way you can run test scenarios without requiring access to the collaborators, and negative test cases can be simulated.
- Web admin UI. If your project includes a lot of master data or preferences data then does it include provision for a web UI to manage that data?
- UX Analytics. For a product that includes a significant UI element, does the project include provision for adding UI analytics tracking so that the user journey and UX data can be collected?
- Reports/surfacing data. Are there any additional reporting requirements that the team feels are necessary to prove that the product is delivering on its role?
- Audit trail. Should the project scope include facilities for recording who enacted what changes?
- Logging. If one isn’t already available, then it will be necessary to define a logging strategy and quantify which aspects of the business transactions delivered by the project will need external logging.
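The ‘mocking collaborators’ entry above lends itself to a concrete sketch. Here is a minimal, hypothetical illustration in Python – the gateway client and transaction function are invented for the purpose – showing how a mocked collaborator lets the team exercise both a happy path and a negative case without any access to the live service:

```python
from unittest import mock

# Hypothetical collaborator client -- a stand-in for whatever remote
# service the business transaction depends on (names are illustrative).
class PaymentGatewayClient:
    def authorise(self, amount):
        raise NotImplementedError("the real client calls a remote service")

def enact_transaction(gateway, amount):
    """Toy business transaction that delegates to a collaborator."""
    try:
        ref = gateway.authorise(amount)
        return {"status": "ok", "ref": ref}
    except ConnectionError:
        # Negative path: collaborator offline or responding erratically.
        return {"status": "retry-later", "ref": None}

# Happy path: the mock stands in for the live collaborator.
gateway = mock.Mock(spec=PaymentGatewayClient)
gateway.authorise.return_value = "AUTH-123"
assert enact_transaction(gateway, 10.0) == {"status": "ok", "ref": "AUTH-123"}

# Negative case: simulate the collaborator being pulled offline.
gateway.authorise.side_effect = ConnectionError("gateway down")
assert enact_transaction(gateway, 10.0) == {"status": "retry-later", "ref": None}
```

Building this kind of mock is itself a piece of ‘below the water line’ work, which is exactly why it deserves its own enabler story rather than being tacked onto a feature story.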
When adding ‘enabler’ entries to the product backlog I would suggest not using the normal ‘<role><purpose><benefit>’ story title pattern. Much like the idea of adding a ‘SPIKE’ prefix for discovery-related stories, I would propose adding an ‘ENABLER’ prefix to help clearly identify the story’s purpose. When documenting an enabler story, take care to define the business benefit attributable to the work. As most product owners will have a natural reluctance to let the team spend time on anything other than client-facing feature stories, it is important for the agile team to explain the role and purpose of all enabler work. When reviewing this subject with business stakeholders I like to use the iceberg metaphor: for most software projects there is a considerable body of ‘below the water line’ work that has to be completed before any client-facing work can truly be surfaced.
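As a rough sketch of how the prefix convention might look in practice – the story titles below are invented purely for illustration – a team could tag its backlog entries and separate the iceberg work from the feature work like this:

```python
# Illustrative backlog titles -- not taken from a real project.
backlog = [
    "ENABLER: stand up the CI/CD pipeline for the service module",
    "SPIKE: prove the approach for injecting test data packs",
    "As a consumer, I want to enact a transaction so that my order completes",
    "ENABLER: build mocks for the downstream collaborator services",
]

def is_below_the_water_line(title):
    """True for stories carrying an ENABLER or SPIKE prefix."""
    return title.startswith(("ENABLER:", "SPIKE:"))

# Separate the 'iceberg' work from the client-facing feature stories.
enabler_stories = [t for t in backlog if is_below_the_water_line(t)]
feature_stories = [t for t in backlog if not is_below_the_water_line(t)]
assert len(enabler_stories) == 3 and len(feature_stories) == 1
```

The point of the prefix is simply that anyone scanning the backlog – or filtering it in a tracking tool – can see at a glance how much of the planned effort sits below the water line.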
Make sure your agile team has an opportunity to consider which enabler user stories are appropriate for its project. At an organisational level, adopt the ‘packing list’ approach of crafting a comprehensive enabler topic list. That way your agile teams will avoid taking business trips (projects) without all their required gear!