A Checklist for your Planning Meeting

Checklists?

What do construction, aviation, and internal medicine have in common? They all manage tremendous complexity, complexity that in the last 100 years has far surpassed the capacity of any individual to master despite decades of education and training. Yet major building collapses are extremely rare, flying is the safest way to travel, and surgeons…well, they perform miracles. But that's not the only thing they have in common. These professions also make systematic use of checklists. Checklists sit neatly in airplane cockpits, hang on the walls of surgical wards, and make up construction guidelines and safety codes.

Complexity is in communication

What about our software engineering profession? Much of our complexity may be trapped inside the machine, where test suites and ever-improving tools help us manage it. But there is also great complexity in the people side of software development, in how we exchange information with one another.

At Pivotal, we condense a lot of information into the less than 3 hours of weekly meetings our teams hold: iteration planning, standups, and a retrospective. What would make it onto an IPM (Iteration Planning Meeting) or standup checklist?

A checklist for a Pivotal Labs IPM

Not everything can go on a checklist, so each entry must balance the importance of a check against its time cost and the probability that a team will skip it. Here are basic, high-impact items to run through when discussing a story:

  • Are designs and assets ready and available? We value keeping the system fully designed and publishable at any time, so we prefer not to chase down design assets or UI direction long after starting on the functionality. Put a check if all assets and appropriate mockups are in the Tracker story.
  • Are there validations of user input? Insufficient data sneaking into your application has consequences ranging from mild confusion to real business danger: signups without an email address do nothing for your app's email blast. Designers also sometimes forget to allocate visual space and language for error messages and failure notifications. (See the validation sketch after this list.)
  • Do the APIs from external parties exist and respond as expected? Just about any sufficiently complex product makes use of third-party network APIs or other libraries. When a story relies on such dependencies, it's important to validate them ahead of time, since an absent or incapable service may completely redefine the feasibility or cost of the desired functionality. (A smoke-test sketch follows the list.)
  • Are there ops dependencies? If your project's production environment differs from your testing environment in ways that can make a story undeployable (missing packages, a mail server, nginx tweaks, etc.), it can throw off continuous delivery, something we strive for as a principle. (A fail-fast environment check is sketched below.)
  • How will dependency failures be handled? This applies especially to network-based APIs, if a feature relies on them. Network APIs fail with errors, return empty or unexpected responses, time out, or simply take a very long time, and that is not only an implementation concern: how will the UI of the feature respond to these events? Spinners, progress bars, error pages, logging, and exception notification are all on the table here. (See the timeout sketch below.)
  • What analytics data should be gathered? Is the described customer interaction of interest from an analytics perspective? What should be captured?
  • What are the performance goals? Much functionality, but especially the front page, product listings, and search results, is subject to implicit or explicit performance requirements. Have rough goals been identified? Is the interaction subject to a performance metric? (A budget check is sketched below.)
  • Have important details been glossed over? Common culprits here are a page redirect vs. an ajax submit, an alert dialog vs. a lightbox modal, a search page vs. an autocomplete widget, and so on. Or it may be something bigger, like a missing mention of entry into or exit from a user flow. The details will vary from project to project.
  • Have technologies, libraries, or patterns been assumed? This one is for the engineers. If a feature pushes the limits of the codebase, does the estimate assume a refactoring toward a new pattern, or that a new library will be brought in? A quick stabilization of the estimate may be appropriate during the IPM itself.
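
To make the input-validation item concrete, here is a minimal TypeScript sketch. The form fields, the email pattern, and the messages are illustrative assumptions rather than anyone's real code; the point is that every validation also produces an error string the design needs space and language for.

    // Hypothetical signup validation; field names, pattern, and messages
    // are illustrative assumptions, not from any particular codebase.
    interface SignupForm {
      email: string;
      name: string;
    }

    interface ValidationResult {
      valid: boolean;
      errors: Record<string, string>; // field -> message the UI must make room for
    }

    function validateSignup(form: SignupForm): ValidationResult {
      const errors: Record<string, string> = {};

      // Reject blank or obviously malformed emails before they reach storage.
      if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(form.email)) {
        errors.email = "Please enter a valid email address.";
      }
      if (form.name.trim().length === 0) {
        errors.name = "Name can't be blank.";
      }

      return { valid: Object.keys(errors).length === 0, errors };
    }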
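
For the external-API item, a throwaway smoke test run before the IPM can settle whether a dependency exists and responds as expected. The vendor URL, the auth header, and the price field below are assumptions standing in for whatever your story actually depends on.

    // Hypothetical smoke test for a third-party dependency; swap in the
    // real endpoint and the fields your story relies on.
    const VENDOR_URL = "https://api.example-vendor.com/v1/quotes?symbol=AAPL";

    async function smokeTestVendorApi(): Promise<void> {
      const response = await fetch(VENDOR_URL, {
        headers: { Authorization: `Bearer ${process.env.VENDOR_API_KEY}` },
      });
      if (!response.ok) {
        throw new Error(`Vendor API returned ${response.status}`);
      }

      const body = await response.json();
      // Assert that the field the feature depends on actually exists.
      if (typeof body.price !== "number") {
        throw new Error("Vendor API response is missing the 'price' field");
      }
      console.log("Vendor API responds as expected, price:", body.price);
    }

    smokeTestVendorApi().catch((err) => {
      console.error("Smoke test failed:", err.message);
      process.exit(1);
    });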
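
The ops item can sometimes be turned into code as well. One approach, sketched here with made-up variable names, is a fail-fast boot check that refuses to start when the environment is missing something a story depends on, so the gap surfaces on deploy rather than in production traffic.

    // Hypothetical boot-time environment check; the variable names are
    // assumptions standing in for your project's real ops dependencies.
    const REQUIRED_ENV = ["SMTP_HOST", "SMTP_PORT", "UPLOADS_BUCKET"];

    const missing = REQUIRED_ENV.filter((name) => !process.env[name]);
    if (missing.length > 0) {
      throw new Error(`Missing required environment variables: ${missing.join(", ")}`);
    }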
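
For dependency failures, one way to force the conversation is to model every call to a flaky service as resolving to a UI state the design has accounted for. The state names and the five-second budget below are assumptions for illustration, not a prescription.

    // Every outcome of a flaky dependency maps to a state the design
    // must cover; state names and the timeout budget are illustrative.
    type FeatureState<T> =
      | { kind: "loading" }               // show a spinner or progress bar
      | { kind: "loaded"; data: T }       // render the feature
      | { kind: "timed_out" }             // show a retry prompt
      | { kind: "failed"; error: Error }; // show the designed error message

    async function loadWithTimeout<T>(
      url: string,
      timeoutMs = 5000
    ): Promise<FeatureState<T>> {
      try {
        const response = await fetch(url, { signal: AbortSignal.timeout(timeoutMs) });
        if (!response.ok) {
          return { kind: "failed", error: new Error(`HTTP ${response.status}`) };
        }
        return { kind: "loaded", data: (await response.json()) as T };
      } catch (err) {
        // AbortSignal.timeout rejects with an exception named "TimeoutError".
        if ((err as Error)?.name === "TimeoutError") {
          return { kind: "timed_out" };
        }
        return { kind: "failed", error: err as Error };
      }
    }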
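
Finally, a performance goal agreed on in the IPM can be written down as an executable check. The 500 ms budget and the search endpoint here are assumptions; the value is that the number discussed in planning ends up somewhere a build can enforce it.

    // Hypothetical budget check; the endpoint and budget are numbers
    // agreed on in planning, not universal constants.
    const BUDGET_MS = 500;
    const SEARCH_URL = "https://staging.example.com/search?q=widgets";

    async function checkSearchBudget(): Promise<void> {
      const start = performance.now();
      await fetch(SEARCH_URL);
      const elapsed = performance.now() - start;

      if (elapsed > BUDGET_MS) {
        throw new Error(`Search took ${elapsed.toFixed(0)} ms; budget is ${BUDGET_MS} ms`);
      }
      console.log(`Search within budget: ${elapsed.toFixed(0)} ms`);
    }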

Customizing your checklist

A checklist that's too long isn't useful, and a checklist item that is often skipped is less useful. Tailor the checklist to your project so that it sparks meaningful communication. That's what it's for.

What would you add to your checklist? What does your team forget to talk about? Let us know in the comments.