This blog post was co-authored by Caroline Hane-Weijman and Francisco Hui
This is the fourth part of a 4-part blog series sharing tactical learnings that we, a cross-functional product development team, encountered while developing a product whose interface was a set of APIs for other developers to consume in order to meet their own business and user needs.
So far in this series we’ve talked about what developing an API can be like for the Product Management and Design practices, and the critical value those practices lend to the process. In this final installment, we want to talk about our learnings in the other corner of the Balanced Team triangle: Engineering.
Engineering is fundamentally the art and science of weighing trade-offs. When developing an API, all of the principles and concerns you’ve learned building other kinds of products are still in play—but some of the scales tip differently when you weigh the costs and benefits of different techniques. Certain kinds of tests become much cheaper, certain architectural concerns become much more costly, and the cost of having poor documentation skyrockets compared to products whose direct users are humans.
Testing
Because APIs are built to be consumed by other programs, it’s especially easy to write feature tests for them.
By “feature test,” we mean a test which exercises the application through the same interface as an end user. These tests usually bring a large slice of the system online for the test, but they aren’t necessarily end-to-end. For a web application, this means a browser-driven test using a tool like Selenium or Capybara.
Since feature tests are written in terms of actions the user takes, they often serve as a useful reference point for conversations between developers, designers, and product managers. However, browser-based feature tests are expensive in a number of ways. Having to drive a browser usually makes them quite slow, and they tend to develop non-deterministic or “brittle” behavior over time. Getting the necessary browser and browser driver on your CI system can be another source of pain.
For APIs, the costs of feature tests are greatly reduced. The system is designed to be consumed by other computer programs—and isn’t your test suite just a computer program consuming the API? This means that a feature test against an API likely won’t be brittle, won’t require any special tool like a browser driver, and will be fairly fast!
This difference in trade-offs meant that on our project, we leaned on feature tests more than we would have for a browser-based product. We captured the “happy path” of each feature in a test that exercised the system through the API, the same way the Product Manager would when accepting that feature. Engineers used the story’s acceptance criteria more-or-less verbatim to write tests with something like REST Assured (which we used) or MockMvc.
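To make that concrete, here’s a minimal sketch of what such a happy-path feature test might look like with REST Assured. The endpoint, payload, and field names are illustrative stand-ins, not our actual API:

```java
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

import org.junit.jupiter.api.Test;

class CreateSubscriptionFeatureTest {

    // Mirrors the story's acceptance criteria: a caller signs up for a
    // monthly subscription and gets back the created resource.
    @Test
    void consumerCanSignUpForAMonthlySubscription() {
        given()
            .contentType("application/json")
            .body("{\"consumerId\": \"abc-123\", \"plan\": \"MONTHLY\"}")
        .when()
            .post("/subscriptions")
        .then()
            .statusCode(201)
            .body("plan", equalTo("MONTHLY"));
    }
}
```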
But we took it further. We designed every feature test to be runnable against either a local instance of the application running on our development workstation, or against a deployed instance (for example, our acceptance environment). This meant that our CI system could deploy a new artifact out to an environment, and then immediately run the feature test suite against the new deployment to ensure all its features (including the brand new ones!) were behaving well. With just a quick glance at our CI monitor, the Product Manager could see that all features were working in a given environment, which indicated whether it was safe to do acceptance or a demo there.
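One way to get that dual-target behavior (a sketch assuming REST Assured; TARGET_BASE_URI is a hypothetical variable name, not something from our project) is to make the suite’s base URI configurable, defaulting to the local instance:

```java
import io.restassured.RestAssured;
import org.junit.jupiter.api.BeforeAll;

class FeatureTestBase {

    // CI sets TARGET_BASE_URI to the deployed environment under test;
    // developers running the suite locally get localhost by default.
    @BeforeAll
    static void pointSuiteAtTargetEnvironment() {
        RestAssured.baseURI = System.getenv()
            .getOrDefault("TARGET_BASE_URI", "http://localhost:8080");
    }
}
```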
Like we talked about in the previous post in this series, in order to make every feature testable, we had to build some test utilities in non-prod environments. For example, we added a test-only API endpoint that allowed the caller to change the system’s understanding of the current date. This allowed us to write tests like:
Given I have signed up for a monthly subscription
After a month goes by
Then my primary payment method gets billed.
Rather than writing a feature test with a Thread.sleep(ONE_MONTH) line, we could hit the test endpoint to jump into the future. But those utilities weren’t just useful for our automated tests! Our Product Manager had the same needs as our feature tests, and used the test utilities to immediately accept stories that otherwise would have required waiting a month to see desired behavior.
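We won’t reproduce our actual implementation here, but a time-control endpoint along these lines is straightforward to sketch in Spring: route all reads of the current time through a mutable Clock, and expose a test-only endpoint (registered in non-prod profiles only) that advances it. The names below are hypothetical:

```java
import java.time.Clock;
import java.time.Duration;
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZoneOffset;

import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

// Test utility only: never registered in production environments.
@RestController
public class TimeTravelController {

    private final MutableClock clock;

    public TimeTravelController(MutableClock clock) {
        this.clock = clock;
    }

    // e.g. POST /test-utilities/advance-time?days=31
    @PostMapping("/test-utilities/advance-time")
    public void advanceTime(@RequestParam long days) {
        clock.advanceBy(Duration.ofDays(days));
    }
}

// The rest of the system asks this Clock for the current time instead of
// calling Instant.now() directly, so tests can jump into the future.
class MutableClock extends Clock {

    private Instant now = Instant.now();

    void advanceBy(Duration duration) {
        now = now.plus(duration);
    }

    @Override
    public Instant instant() {
        return now;
    }

    @Override
    public ZoneId getZone() {
        return ZoneOffset.UTC;
    }

    @Override
    public Clock withZone(ZoneId zone) {
        return this;
    }
}
```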
The fact that we could run our feature test suite against our deployed test environments gave us a lot of confidence that each environment was behaving well. Whenever the Product Managers encountered behavior they didn’t expect, our first step was to open the relevant test and compare it to what they had done, anchoring the discussion in concrete expectations. If you can’t feature test it, the Product Managers can’t accept it.
Using feature tests in this way, and designing them to run both locally and against deployed environments, isn’t actually specific to an API product. You can do the same thing with web apps—the tests are just more costly because a browser driver is involved, which means you’ll probably want to invest in other less-costly techniques instead.
In terms of testing techniques that are API-specific, we did experiment with a couple of tools for validating the JSON schema of API responses. Eventually, we settled on a method that was simple to keep up-to-date, and had the added bonus of adding generated examples to our API documentation; see the section below about Documentation.
System Design
The following concepts are not specific to API projects, but we wanted to highlight their importance in the context of APIs.
Ports and Adapters
The presentation layer (whether it’s a browser-based UI or an API) doesn’t have to affect the underlying system. Using a loosely-coupled architecture like the ports and adapters pattern (sometimes referred to as “hexagonal architecture”) is one way to prevent changes in other parts of the system from rippling up to the front-end, or vice-versa. This is desirable in any kind of system, but especially so in an API, where the user interface is a published contract and changing it is a much bigger deal.
If your internal modeling is coupled to how you present information to your callers, then the necessity of keeping the API stable can prevent you from refactoring and improving the internal design of your system—and that’s a recipe for disaster. You need to be sure that you can quickly iterate and modify your system internals while keeping the exposed interface constant.
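As a rough illustration of the pattern (with hypothetical names; only ChargeDueSubscriptions comes from our actual system, described below), the domain defines a “port” in its own vocabulary, and an “adapter” at the edge implements it:

```java
// Port: an interface defined by the core domain, in domain terms. It knows
// nothing about HTTP, JSON, or any particular external provider.
interface PaymentGateway {
    boolean charge(String paymentMethodId, long amountInCents);
}

// Adapter: implements the port against a concrete external service. If the
// provider's API changes, only this class changes; the domain model and our
// own public API stay put.
class ExternalPaymentGatewayAdapter implements PaymentGateway {
    @Override
    public boolean charge(String paymentMethodId, long amountInCents) {
        // ...call the external payment service here...
        return true; // placeholder result for the sketch
    }
}

// Domain use cases depend only on the port, never on the adapter.
class ChargeDueSubscriptions {
    private final PaymentGateway paymentGateway;

    ChargeDueSubscriptions(PaymentGateway paymentGateway) {
        this.paymentGateway = paymentGateway;
    }
}
```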
Separate Data Classes
Because an API is often simply serving JSON objects, which may correspond to service-layer objects and persistence-layer objects (e.g. SQL tables), it may be tempting to use a single class for all of these. However, we learned (the hard way) that these data models change at different times for different reasons. To be able to rapidly iterate on your API request and response bodies, these classes should be separate from other data classes in your system. Over time, we evolved our system to a place where the only point of coupling between how we supplied resources to our callers and how we modeled those resources internally was a simple (and easily testable) converter class.
As a quick note: in our experience, this concept does not compromise well. That is to say, having partially-coupled data classes (i.e. shared by some parts of the system, but duplicated by others) was very confusing and might have actually been more damaging than if we had gone with a completely coupled system.
It does mean more work up-front—work whose value is not immediately obvious—but it will be worth it later on.
Dumb Controllers
We aimed to keep as little logic as possible in our controllers. Business logic of any kind was restricted to the inner service layer. The controller was responsible only for translating API-layer data objects to service-layer objects and handling the bare minimum of validation (that is, validating that the request was well-formed enough that we could call the service layer, but not any business-rule validation like “dates must be in the future”). This also allowed us to iterate quickly on the request and response bodies.
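Sketched with the class names from the component walkthrough below (the method shapes and the SubscriptionService interface are our own illustrative inventions), a “dumb” controller looks something like this:

```java
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical service-layer boundary; all business rules live behind it.
interface SubscriptionService {
    Subscription create(Subscription subscription);
}

@RestController
public class SubscriptionsController {

    private final SubscriptionService subscriptionService;
    private final SubscriptionConverter converter;

    public SubscriptionsController(SubscriptionService subscriptionService,
                                   SubscriptionConverter converter) {
        this.subscriptionService = subscriptionService;
        this.converter = converter;
    }

    @PostMapping("/subscriptions")
    public ResponseEntity<SubscriptionResponse> create(
            @RequestBody CreateSubscriptionRequest request) {
        // Structural validation only; business-rule validation such as
        // "dates must be in the future" belongs in the service layer.
        if (request.getPlan() == null) {
            return ResponseEntity.badRequest().build();
        }
        Subscription created =
            subscriptionService.create(converter.toDomain(request));
        return ResponseEntity.status(HttpStatus.CREATED)
                             .body(converter.toResponse(created));
    }
}
```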
Our system ended up being shaped roughly like this (each of the following represents a component of the system and contains a non-exhaustive sample of the sorts of things that lived there):
- Our core domain component was where we did all the complex modeling and rule handling of our payment processing—stuff that had nothing to do with the fact that we were an API. This component contained use-case objects like ChargeDueSubscriptions, which you could invoke to check for any due subscriptions and charge them appropriately, and model objects like Consumer and Subscription, which were optimized to work nicely with the way we modeled our business rules.
- Our API adapter layer defined the endpoints of our API (in classes like the SubscriptionsController), including the expected structure of requests to those endpoints and the structure of their responses. Importantly, we didn’t re-use our Consumer and Subscription model objects as the definition of the request and response structure. Instead, we used separate classes, such as CreateSubscriptionRequest and SubscriptionResponse. Converter classes, like SubscriptionConverter, translated CreateSubscriptionRequests into the appropriate Subscription objects in a cleanly-testable way (see the converter sketch after this list). This made it cheap to refactor our underlying domain model while still keeping our API stable. The converter classes were malleable and their tests clear and simple to update, so we didn’t have to stress about backwards compatibility when we identified an improvement we wanted to make in the model.
- The “metronome” component periodically invoked our core domain’s ChargeDueSubscriptions use case with the current time. This was the component where we added our “control time” endpoint, by which our test suite and Product Manager could jump the system’s understanding of the current time into the future to ensure that various features worked appropriately.
- External integrations were carefully tucked away from the rest of the system. For example, once we determined that a payment needed to be made, we invoked an external service to process the payment. This integration was strictly encapsulated to prevent changes in the charge service’s API from forcing us to make backwards-incompatible changes to our own API.
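Here is the kind of converter we mean; the field names and the BillingPlan enum are illustrative, but the shape (a plain translation function with no side effects) is the point:

```java
// Translates between API-layer data classes and domain model objects.
// Because it's a pure mapping, its unit tests are trivial to write and
// to update when either side changes.
class SubscriptionConverter {

    Subscription toDomain(CreateSubscriptionRequest request) {
        return new Subscription(
            request.getConsumerId(),
            BillingPlan.valueOf(request.getPlan()));
    }

    SubscriptionResponse toResponse(Subscription subscription) {
        return new SubscriptionResponse(
            subscription.getId(),
            subscription.getPlan().name());
    }
}
```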
API Documentation
API documentation is a critical component of creating a good user experience for developers consuming APIs. We landed on a solution that allowed us to automatically generate documentation from tests while also incorporating human-written text and diagrams. This meant that the heart of the documentation never went stale (it was regenerated on every deployment), while our Product Manager could also easily contribute to the documentation (by committing to our version control repository!).
We used Spring REST Docs to generate API documentation from certain feature tests. It integrates with REST Assured or MockMvc and generates Asciidoc snippets that can be incorporated into a larger document. We served this as part of our application, much like what Swagger UI does. The advantage of this over Swagger UI is that the documentation is guaranteed to be accurate by merit of being test output. If a field is present in the request or response but not in the documentation—or vice-versa—it will result in a test failure.
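To give a flavor of what that looks like, here is a sketch using the MockMvc integration. The endpoint and fields are illustrative, and the mockMvc instance is assumed to have been set up with Spring REST Docs’ documentation configuration:

```java
import static org.springframework.restdocs.mockmvc.MockMvcRestDocumentation.document;
import static org.springframework.restdocs.payload.PayloadDocumentation.fieldWithPath;
import static org.springframework.restdocs.payload.PayloadDocumentation.responseFields;
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

@Test
void documentsGetSubscription() throws Exception {
    // On success, this writes Asciidoc snippets (request, response, and a
    // field table) into the build output. A response field that isn't
    // described here, or a described field that's missing, fails the test.
    mockMvc.perform(get("/subscriptions/{id}", "abc-123"))
           .andExpect(status().isOk())
           .andDo(document("get-subscription",
               responseFields(
                   fieldWithPath("id").description("Unique identifier of the subscription"),
                   fieldWithPath("plan").description("Billing plan, e.g. MONTHLY"),
                   fieldWithPath("nextBillingDate")
                       .description("Date the next charge will occur")
                       .optional())));
}
```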
The downside is that getting the most out of this library requires some extra work you might not normally do in your feature tests. For example, if you want a table of fields in your request or response, descriptions are required, or else you must mark the field as optional. To avoid copying and pasting descriptions of the same field across requests, you may want to extract shared descriptions into reusable constants.
One big advantage of the snippet generation (and AsciidoctorJ’s “include” functionality) is that you can include snippets in a human-written document. The Product Manager wrote an introduction and overview of our system, including a sequence flow diagram, and then included the snippets at the end. The result was an easily-navigable, readable, and complete API document with provably-accurate examples.
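As a hypothetical illustration (the snippet directory names follow Spring REST Docs’ conventions, and {snippets} is an attribute pointing at the generated output), the hand-written document might pull generated examples in like this:

```asciidoc
== Fetching a subscription

The request and response below are regenerated from the test suite on every
build, so they are guaranteed to match the deployed behavior.

include::{snippets}/get-subscription/http-request.adoc[]
include::{snippets}/get-subscription/http-response.adoc[]
include::{snippets}/get-subscription/response-fields.adoc[]
```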
Pro-tip for the Product Manager writing the intro / context: Download a text editor like Atom with Asciidoc tools (asciidoc-preview and language-asciidoc) to easily preview the text as you are writing documentation. Then copy and paste the text into the repository to commit the changes.
Wrap-up
We hope this provides some tactical tips for product development teams building APIs. There are many resources now available that provide best practices (explore Google’s sites, for example), and the tooling is evolving. If you have questions and/or feedback, don’t hesitate to reach out!
Hello! We are Caroline, David, and Francisco – a cross-functional product development team that worked together at Pivotal Labs, a product development consulting company. Pivotal Labs works with clients as an integrated team, sitting side-by-side, to build & deploy digital products while enabling clients to learn lean, user-centered, agile software practices that they can use as key capabilities on an ongoing basis within their organizations.
This post is part of our 4-part blog series on Designing & Developing APIs: