This is the third part of a four-part blog series sharing tactical learnings from our experience as a cross-functional product development team building a product whose interface was a set of APIs for other developers to consume in order to meet their own business and user needs.
We learned a lot about the similarities and differences between managing a product backlog for an API product and one for a product with a graphical interface, and about how to be more effective at it. We iterated on our model for writing user stories and doing acceptance, based on plenty of feedback from the engineering team, to find what worked best for us.
Key Takeaways
Some of the things we learned were:
- There is a fine line between product/user experience design decisions and implementation details for API products. It was best to start with a higher level of abstraction and then iterate with the team to strike the right balance between what and how.
- Providing exact API requests and responses for each user story, with example data, was extremely valuable for the team to be more productive and consistent once the API design direction was clearer.
- As with most backlogs, small user stories were key. We recommend treating each new field in a request/response as a new user story, and creating separate stories for each field validation and each type of error response.
- We recommend using an API design tool, like Postman, to simplify acceptance of user stories.
- What is true for projects with graphical interfaces is also true for APIs: acceptance criteria should always be from the API consumer's perspective.
- We recommend creating either a client library for the API, or a dummy application that consumes the API, to more effectively test your features.
Backlog management
There is a fine line between product/UX design decisions and implementation details for API products. Our team had frequent discussions about what should be defined in the user story by the Product Manager and what should be left as an implementation detail for the engineering team. We found it best to start at a higher level of abstraction, focusing on the what and why, and then iterate on how much of the how needed to be defined. Be patient in your Iteration/Sprint Planning Meetings, when you review the upcoming backlog as a team, and figure out the right level of implementation detail that works for you!
Because consistency is key for APIs, it was important for the Product Manager to understand the rest of the API ecosystem in order to shape a consistent experience for the developers consuming the APIs being developed.
Providing exact API requests and responses, with example data, for each user story was highly valuable for the team. This ensured consistency and saved time for our developers.
That said, the first set of user stories were intentionally more high-level, with just a bulleted list of fields. This was because we did not have a clear sense of the request/response design and we did not want to dictate how the engineers should design the objects up front. After the first few stories, the design stabilized and we felt more comfortable transitioning to exact JSON requests/responses. For less technical PMs, using a JSON formatting tool can be very helpful.
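To make this concrete, here is a sketch of the level of detail we eventually aimed for, using a hypothetical "create charge" endpoint. The path, field names, and example values are illustrative rather than taken from the actual project, and the snippet also shows how a standard JSON formatter can pretty-print the examples for a story.

```python
import json

# Hypothetical example of the detail eventually included in a user story:
# the exact request and response for a "create charge" endpoint, with
# example data. Endpoint path and field names are illustrative only.
example_request = {
    "method": "POST",
    "path": "/v1/charges",
    "body": {
        "amount_cents": 1250,
        "currency": "USD",
        "description": "Monthly subscription",
    },
}

example_response = {
    "status": 201,
    "body": {
        "id": "chg_123",
        "amount_cents": 1250,
        "currency": "USD",
        "description": "Monthly subscription",
        "created_at": "2019-01-15T10:00:00Z",
    },
}

# json.dumps with indent doubles as a simple JSON formatting tool for PMs.
print(json.dumps(example_request, indent=2))
print(json.dumps(example_response, indent=2))
```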
As with most backlogs, we advocate for small user stories. We suggest treating each new field in a request/response as a new user story. This made it easier for our developers to keep their tests focused and to prioritize/re-prioritize on a more granular level. In particular, we strongly recommend creating separate stories for each field validation and each type of error response. We experienced the pain of trying to combine these into the original user story for creating a field or endpoint. By splitting these out, the team was more easily able to deprioritize certain complex validations and error responses. For example, we chose to deprioritize authorization error responses (403s) since we only had one beta user for the initial release and wanted to get feedback as early as possible. Focusing on smaller user stories allowed us to ship the first set of features to this beta user much more quickly.
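As an illustration of how small stories keep tests focused, here is a sketch of one test per story for the hypothetical charges endpoint above. It assumes the Python requests library and a pytest-style layout; the URLs, status codes, and field names are assumptions for the example, not the project's actual contract.

```python
# A sketch of how small stories map to small, focused tests. Assumes the
# Python "requests" library and the hypothetical /v1/charges endpoint from
# the example above; names and URLs are illustrative.
import requests

BASE_URL = "http://localhost:8080"

def test_charge_includes_description_field():
    # Story: "charge responses include a description field"
    resp = requests.post(f"{BASE_URL}/v1/charges",
                         json={"amount_cents": 1250, "currency": "USD",
                               "description": "Monthly subscription"})
    assert resp.status_code == 201
    assert resp.json()["description"] == "Monthly subscription"

def test_rejects_negative_amount():
    # Separate story: validation of a single field
    resp = requests.post(f"{BASE_URL}/v1/charges",
                         json={"amount_cents": -1, "currency": "USD"})
    assert resp.status_code == 422

def test_unauthorized_user_gets_403():
    # Separate story: one type of error response (the kind we chose to
    # deprioritize for the first beta release)
    resp = requests.post(f"{BASE_URL}/v1/charges",
                         json={"amount_cents": 1250, "currency": "USD"},
                         headers={"Authorization": "Bearer not-a-real-token"})
    assert resp.status_code == 403
```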
Acceptance
It’s important to write acceptance criteria from the consuming system’s point of view. We had requirements that some other system or service be called, and it is often tempting to drop these requirements into the acceptance criteria verbatim (e.g., THEN a request is sent to the Email microservice). This can make stories impossible to accept, because the Product Manager cannot observe backend calls directly. Instead, we had the acceptance criteria describe the literal process by which the Product Manager would accept the story (THEN I receive an email from the Email microservice about the charge to my account). This also helped highlight stories that could not be accepted, and drew early attention to test utilities that we wanted to build into our acceptance environment. For example, suppose the Email microservice is only accessible in staging and production, not in the acceptance environment. We built a fake email service that, instead of sending actual emails, exposed the emails it was asked to send through an API, so that the Product Manager’s “I receive an email from the Email microservice” step could consist of asking the fake email service which emails had been sent.
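The sketch below shows the general shape of such a fake email service, written here as a minimal Flask app; the endpoints and payloads are illustrative, not the original implementation.

```python
# A minimal sketch of the kind of fake email service used in an acceptance
# environment. It records "sent" emails instead of sending them and exposes
# them over HTTP, so the PM's "THEN I receive an email" step becomes a
# simple GET. Endpoint names are illustrative.
from flask import Flask, jsonify, request

app = Flask(__name__)
sent_emails = []

@app.route("/emails", methods=["POST"])
def record_email():
    # The system under test calls this instead of the real Email microservice.
    sent_emails.append(request.get_json())
    return jsonify({"status": "recorded"}), 202

@app.route("/emails", methods=["GET"])
def list_emails():
    # The Product Manager (or an automated test) asks what was "sent".
    return jsonify(sent_emails)

if __name__ == "__main__":
    app.run(port=8025)
```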
We used a human-friendly interface for accepting our API stories. Postman is a user-friendly API design and testing tool, and on our project it served as a front end for the purpose of acceptance. Alternatives include Advanced REST Client and Insomnia. We saved requests for our services and endpoints so we could easily access them for acceptance and demos.
While we didn’t do this on our project, we recommend creating either a client library for the API or a dummy application that consumes the API, in parallel with the API itself. This would have let us dogfood our own system. As it was, our intuitions about our design decisions were often shaped by what the API was like to use through Postman, which is not how real consumers would use it.
Between the two options, a client library probably would not have been the best fit for our use case: our consumers were going to be working in a variety of languages, and any client library we built would only support one of them.
On the other hand, if all the consumers of the API were going to be using the same language (for example, if the API was to be used internally in a company that did all their development in Java), then the client library could actually have served as the front end of the system, and we could have treated the API itself as an internal detail.
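For illustration, here is a rough sketch of what a thin client library might have looked like in one language, wrapping the hypothetical charges endpoint from the earlier examples; the class and method names are our own inventions, not part of the original project.

```python
# A sketch of a thin client library wrapping the hypothetical charges
# endpoint used in the earlier examples. Assumes the Python "requests"
# library; names and URLs are illustrative.
import requests

class ChargesClient:
    def __init__(self, base_url, token):
        self.base_url = base_url
        self.session = requests.Session()
        self.session.headers["Authorization"] = f"Bearer {token}"

    def create_charge(self, amount_cents, currency, description):
        resp = self.session.post(
            f"{self.base_url}/v1/charges",
            json={"amount_cents": amount_cents,
                  "currency": currency,
                  "description": description},
        )
        resp.raise_for_status()
        return resp.json()

# Usage: consuming our own API the way a real client would, instead of
# only poking at it through Postman.
# client = ChargesClient("https://api.example.com", token="...")
# charge = client.create_charge(1250, "USD", "Monthly subscription")
```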
In this blog post, we shared our learnings from tactically managing a product backlog for APIs. In the next post, we will share in more detail the engineering practices we applied for Developing, Architecting, Testing, and Documenting APIs.
Hello! We are Caroline, David, and Francisco, a cross-functional product development team that worked together at Pivotal Labs, a product development consulting company. Pivotal Labs works with clients as an integrated team, sitting side by side, to build & deploy digital products while enabling clients to learn lean, user-centered, agile software practices that they can use as key capabilities on an ongoing basis within their organizations.
This post is part of our four-part blog series on Designing & Developing APIs: