The VMware Open Source Program Office team has been on a reproducibility tear this summer. In our recent blog series “What Makes a Build Reproducible?” Parts One and Two, Rose Judge and Joshua Lock detailed the architectural best practices that define reproducibility. In this post, I want to explore the current state of reproducibility tools within the container ecosystem, and specifically the tools Docker does (and doesn’t) provide for developers.
The three interpretations of reproducibility
As outlined by Rose and Joshua, there are three major levels of reproducibility. At the lowest level, we have a repeatable build, which always executes the same set of steps. That’s a start, but repeatable builds don’t control the inputs those steps operate on. The next stage, rebuildable builds, does: at this level, build tools freeze the state of dependencies like distro packages and git repositories, and nothing changes without developer approval. The final and ideal form of reproducibility is harder to pin down. It’s known as a binary reproducible build, where the output of your build system is bit-for-bit identical no matter where or when the build is run. Binary reproducibility is challenging to achieve but provides the greatest level of supply chain security.
What does Docker give us?
Docker makes repeatable builds easy through the Dockerfile build specification. Docker will always run the same series of actions on the inputs it’s provided, but like all repeatable build systems, it has no way of knowing whether those inputs have changed, so you won’t know anything is wrong until you see the failure in your tests, or worse, in production. These problems can be a real challenge to track down, and developers often waste hours debugging their own perfectly functional code before checking dependencies. I’ve personally seen a developer who hadn’t run git pull on their machine struggle to understand why a teammate’s build was failing, when the only difference was that the teammate had a more recent version of a library! Throw in the fact that Docker’s layer cache doesn’t invalidate a step unless the text of the RUN instruction itself changes, and you have a recipe for the most frustrating kind of bugs: inconsistent ones.
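The caching pitfall is easy to see in a Dockerfile fragment. This is a hypothetical sketch (the package and version string are illustrative, not from the original post): the first RUN line installs whatever curl is current at first build and is then served from cache forever, while pinning a version makes the input part of the instruction text, so updating it requires an edit that busts the cache.

```dockerfile
FROM debian:bookworm

# Cached after the first build: Docker re-runs this only if the line's
# text changes, so the installed version silently drifts between machines.
RUN apt-get update && apt-get install -y curl

# Pinning makes the input explicit (version shown is illustrative); bumping
# it changes the instruction text and forces the layer to rebuild.
RUN apt-get update && apt-get install -y curl=7.88.1-10+deb12u5
```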
How can we go further?
Moving to the next level of reproducibility requires external tools. To solve package issues, we turn to Debuerreotype, a tool originally created for building Docker Hub’s Debian base images. When provided with a date as input, Debuerreotype uses Debian snapshots to build a rootfs image identical to one built on that date. Debuerreotype builds with the same inputs are guaranteed to produce exactly the same sequence of bits. Thus, binary reproducibility!
Solving issues with git repositories is easier, and often all that’s needed is a version tag in the git clone command. If you want more functionality, though, you’ll need to turn to external tools. For example, the OSPO team behind trace-cruncher’s open source development here at VMware has implemented a small additional tool within trace-cruncher called git-snapshot. An input file specifies several git repositories to be downloaded, and git-snapshot abstracts the process of downloading them and checking out either a specific commit or the last commit before a given date. Because the tool knows where the files it downloads live, you can also leverage it in make clean or similar commands to easily and reliably delete your build dependencies.
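git-snapshot’s exact interface lives in the trace-cruncher repository, so the following is only a sketch of the core git operation such a tool performs: pinning a repository to the last commit before a cutoff date. The demo repository, author identity, and dates below are fabricated purely for illustration.

```shell
set -e
# Stand-in dependency repo with two dated commits. Committer dates matter
# here, because git rev-list --before filters on the committer date.
rm -rf /tmp/demo-repo && git init -q /tmp/demo-repo
cd /tmp/demo-repo
GIT_COMMITTER_DATE="2023-01-01T00:00:00Z" git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "old" --date="2023-01-01T00:00:00Z"
GIT_COMMITTER_DATE="2024-01-01T00:00:00Z" git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "new" --date="2024-01-01T00:00:00Z"

# The heart of a git-snapshot-style pin: resolve the last commit before the
# cutoff date, then detach onto it so the working tree matches that moment.
CUTOFF="2023-06-01T00:00:00Z"
COMMIT="$(git rev-list -n 1 --before="${CUTOFF}" HEAD)"
git checkout -q --detach "${COMMIT}"
git log -1 --format=%s   # prints "old"
```

Since the tool, not the developer, chose where the checkout lives, deleting it again is a single well-known path removal — which is exactly what makes the make clean integration reliable.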
Better solutions are needed
These tools improve the developer experience and make rebuildable builds possible within the container ecosystem, but they aren’t included in Docker. Binary reproducibility ideally also depends on a unified build specification: one file which, when provided to the build system, produces exactly the same output everywhere. The need for external tools that might override or collide with aspects of the intended Docker workflow highlights the need for standardization and extensibility within the world of containers. The technology that has revolutionized cloud computing is still new even by software standards, and best practices are still evolving in both architecture and implementation.