
Making testing visible in the Tracker workflow

As a feature story progresses through the Tracker workflow, a lot of testing activity is also underway. Team members are collaborating to turn examples of desired behaviors into business-facing tests that guide development. Testers are performing manual exploratory testing on stories as they are delivered. Performance or security testing may be going on at various points in the development process.

A testing workflow?

To keep things simple, Tracker’s states are limited to Not Started, Started, Finished, Delivered, Accepted, and Rejected. Only the “Accepted” and “Rejected” states seem directly related to testing. Testing activities such as specification by example, acceptance testing, exploratory testing, load testing, and end-to-end testing aren’t reflected in the Tracker workflow, but they’re going on nevertheless. Testers, coders, product owners, and other team members continually talk about how a feature should work and what to verify before accepting a story as “done”. But details can still be overlooked or lost. If stories are rejected multiple times because of missed or misunderstood requirements, or problems slip by and aren’t discovered until after the production release, testing activities need more attention.

We’re working on enhancing collaboration and communication in Tracker, with increased flexibility that will help with tracking testing activities. Meanwhile, how can Tracker users follow testing along with other development activities? It would be helpful to have a place to specify test cases, note plans for executing different types of tests, and make notes about what was tested. Accomplishing this requires a bit of creativity, but it’s possible to keep testing visible in the current Tracker workflow. Here are some ways we do this on our own Pivotal Tracker team.

Testable stories

First of all, we work hard to slice and dice our features into small story increments that are still testable. We read the stories in the backlog to make sure we understand what each one should deliver, and how it can be tested. If I have questions about an upcoming story when we’re not in a planning meeting, I note them in a task or comment to make sure we talk about it. Iteration planning meetings are a good place for the team to start discussing how each story will be tested. Some teams get together with their business experts to help write the stories with this in mind.

We make sure we know how we’ll test all the stories in the upcoming iteration. There are a couple of different ways to get enough of this information into the story.

Using tasks and comments

Test cases and testing notes can be added to a feature story as tasks. They’re easy to see in the story, and can be marked as completed when done. We often include links to additional details documented in a wiki page, or to automatable functional tests used for acceptance test-driven development (ATDD). As teammate Joanne Webb points out, sharing test cases before implementing a story clarifies requirements and gives developers clues about problems to avoid introducing. In our experience, this shortens the accept/reject cycle for stories.

Comments are another good place to add information about requirements and test cases, especially since you can also attach files with additional information, screenshots, pictures of diagrams, and mockups. And if team members have questions they can’t get answered in person right away, comments provide a place to record a written conversation, and email notifications can alert the story owner and requester so they can answer questions.
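If your team drives Tracker from scripts, tasks and comments can also be added through Tracker’s REST API. Here’s a minimal sketch in Python, assuming the v5 tasks and comments endpoints and using placeholder values for the API token, project ID, and story ID; check the API documentation for the exact fields your integration needs.

import requests

TOKEN = "YOUR_API_TOKEN"    # placeholder; use your real Tracker API token
PROJECT_ID = 123456         # hypothetical project ID
STORY_ID = 7891011          # hypothetical story ID
BASE = "https://www.pivotaltracker.com/services/v5"
HEADERS = {"X-TrackerToken": TOKEN}

# Add each test case to the story as a task, so testers can check them off.
test_cases = [
    "Verify a story can be dragged between panels",
    "Verify the story requester receives an email notification",
]
for description in test_cases:
    requests.post(
        "%s/projects/%d/stories/%d/tasks" % (BASE, PROJECT_ID, STORY_ID),
        headers=HEADERS,
        json={"description": description},
    ).raise_for_status()

# Record an open question as a comment; notifications alert the owner and requester.
requests.post(
    "%s/projects/%d/stories/%d/comments" % (BASE, PROJECT_ID, STORY_ID),
    headers=HEADERS,
    json={"text": "Should the export include archived stories, or only active ones?"},
).raise_for_status()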

Visibility and workflow through labels

We can find ways to record conversations about requirements, but how do we incorporate a testing workflow into the larger development workflow for a Tracker story?


Labels are a handy way to keep stories progressing through all coding and testing tasks. In our Tracker project, automating functional tests is part of development. The story isn’t marked finished until both unit tests and functional tests are checked in, along with the production code. Once a feature story is delivered, someone (usually a tester or the product owner, but it could be a programmer who didn’t work on coding the feature) picks up the story to do manual exploratory testing.

To make this visible, we put a label on the story with our name, for example, “lisa_testing”. Not only do we conduct exploratory testing, we verify that there are adequate automated regression tests for the story, and that the necessary documentation is present and accurate. Once we’re done testing a feature story, we put a brief description of what we tested in a comment, remove the “testing” label, and add another label to show the story is ready for the product owner to verify. This might be “lisa_done_testing” or “ready_for_dan”. Sometimes the product owner gets to the story first, and uses similar labels to show he’s in the process of testing or finished with his own acceptance testing. Once all involved parties are happy with the story, we can accept it. Using labels is a bit of extra overhead, but it gives us the flexibility to continually improve our acceptance process.
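This hand-off can be scripted too. The sketch below, again in Python with placeholder IDs and our example label names, assumes the v5 API lets you read a story’s labels, replace them wholesale, and post a comment; treat it as an illustration rather than a recipe.

import requests

TOKEN = "YOUR_API_TOKEN"    # placeholder
PROJECT_ID = 123456         # hypothetical project ID
STORY_ID = 7891011          # hypothetical story ID
HEADERS = {"X-TrackerToken": TOKEN}
story_url = "https://www.pivotaltracker.com/services/v5/projects/%d/stories/%d" % (
    PROJECT_ID, STORY_ID)

# Swap the tester's "in progress" label for one that signals the product owner.
story = requests.get(story_url, headers=HEADERS).json()
names = {label["name"] for label in story.get("labels", [])}
names.discard("lisa_testing")
names.add("ready_for_dan")
requests.put(
    story_url,
    headers=HEADERS,
    json={"labels": [{"name": name} for name in sorted(names)]},
).raise_for_status()

# Leave a brief note about what the exploratory testing covered.
requests.post(
    story_url + "/comments",
    headers=HEADERS,
    json={"text": "Explored label filtering edge cases; regression tests and docs look adequate."},
).raise_for_status()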

Putting together a bigger picture

Some testing activities extend beyond a single story, especially since we usually keep our stories small. It’s possible to write a feature story or chore for the testing activity itself. For example, you might write a story for end-to-end testing of an epic that consists of many stories and spans more than one iteration. Writing a chore for performance testing, security testing, or usability testing may also be useful.
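If you create such stories through the API, a sketch like the following (Python again, with a hypothetical project ID and epic label, assuming the v5 stories endpoint) shows the idea.

import requests

TOKEN = "YOUR_API_TOKEN"    # placeholder
PROJECT_ID = 123456         # hypothetical project ID

# File a chore to track performance testing that spans many feature stories.
requests.post(
    "https://www.pivotaltracker.com/services/v5/projects/%d/stories" % PROJECT_ID,
    headers={"X-TrackerToken": TOKEN},
    json={
        "name": "Load test the new reporting screens",
        "story_type": "chore",
        "labels": [{"name": "reporting_epic"}],  # hypothetical label tying it to the epic
    },
).raise_for_status()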

However, as my teammate Marlena Compton points out, there are advantages to making sure testing is integrated with the feature stories themselves. If a story remains in the Delivered state for several days while we complete system testing related to it, the labels we put on the story convey the testing activities underway. Completing all testing before accepting a story helps ensure the stories meet customer expectations the first time they’re delivered. As Elisabeth Hendrickson says, testing isn’t a phase; it’s an integral part of software development, along with coding and other work. Having our Tracker stories reflect that helps keep us on target.

As we do exploratory testing on a feature story, we might discover issues or missing requirements that don’t make the story unshippable, but may need to be addressed later. We can create separate feature stories, bugs, or chores for those, and tie them back to the original story with links or labels.
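A follow-up bug can be filed the same way; in this sketch (placeholder IDs and a hypothetical linking label again), the description and a shared label point back to the story where the issue was found.

import requests

TOKEN = "YOUR_API_TOKEN"        # placeholder
PROJECT_ID = 123456             # hypothetical project ID
ORIGINAL_STORY_ID = 7891011     # hypothetical story where the issue was found

# File a non-blocking follow-up bug and link it back to the original story.
requests.post(
    "https://www.pivotaltracker.com/services/v5/projects/%d/stories" % PROJECT_ID,
    headers={"X-TrackerToken": TOKEN},
    json={
        "name": "Tooltip overlaps the panel header on narrow screens",
        "story_type": "bug",
        "description": "Found while exploring https://www.pivotaltracker.com/story/show/%d"
                       % ORIGINAL_STORY_ID,
        "labels": [{"name": "followup_7891011"}],  # hypothetical linking label
    },
).raise_for_status()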

We track some testing information outside of Tracker, for example, on our team wiki. However, we find that tracking testing activities in Tracker helps ensure that they get done in a timely manner, and keeping tests visible helps ensure that stories meet customer expectations the first time they’re delivered. Integrating testing activities with coding tasks keeps our testing efforts aligned with other development efforts.

While we work to make Tracker more flexible for teams and testers, we hope these ideas help you make your testing more visible in the Tracker workflow right now. Check out our blog post https://blog.pivotal.io/2013-update-new-features-new-api-new-design/ to get an overview of some of the plans for Tracker this year, and come back periodically for the latest news. We’d also love to hear how your team incorporates testing in agile development. Please leave a comment, or write to us at [email protected].