We all come to open source from different places, but my particular journey to becoming a full-time open source contributor—and more recently a project co-maintainer—is a little unusual.
I started out working in experimental physics, doing scientific computing. That work also made me a project co-maintainer, and it’s interesting to think about how that world differs from open source—and whether there’s anything we can learn from how scientific and academic software projects tend to work, or that the scientific community can learn from open source.
Before joining VMware’s Open Source team, I was working for the University of Geneva (Switzerland), most recently as one of six co-maintainers of a scientific computing project that provided the software framework for a large-scale particle physics experiment. The software we developed was used to acquire data from the experiment at very high rates, and then to analyze those data to produce the final physics results. In the slang of modern computing, you could call this extreme edge computing combined with big data in the cloud.
One of the largest differences between that experience and now working in open source relates to who you get to work with every day. I now spend a lot of my time working on KernelShark, for example, and most of our contributors are professional programmers. Indeed, the project was created and is co-maintained by Steven Rostedt, who is pretty much a rock star in the Linux community.
But back at the University of Geneva, many of my contributors were students. That’s both appropriate and unsurprising for an academic software project, but it meant that I spent quite a lot of time explicitly teaching people how to contribute. We also needed to have somewhat lower expectations for contributions than I do now with an open source project. While I can be more demanding now, the university experience helped me understand what people need to know when they are learning how to be good contributors, and I try to apply that as a maintainer. I have a good sense of how newcomers might see the project, and that helps me communicate with them in ways that encourage them to excel without scaring them away. At least, that’s my hope!
One of the hardest problems I faced when working in scientific computing at the university was how to respond to a talented young student saying, “I am here to do fundamental science—the software that we develop is just one resource that we use in order to make great discoveries.” It may sound surprising, but my answer is the same as when a brilliant software engineer says, “We have to ship our great product to our customers as soon as possible.” In both situations, what matters most is that the software is useful to its users. But in each case, the quality of the code really does matter as well—and that is a goal that we can all learn to achieve from open source.
In particular, the open source community has developed a robust and mature set of best practices that we can call upon as we need them and that would have been beneficial in scientific computing as well. By having contributors follow these best practices, we avoid mistakes that some projects still tend to repeat, which in turn helps us deliver reliable and maintainable code. This is especially true when it comes to workflows. Modern open source review procedures are comprehensive and methodical—we proceed carefully in order to make sure that what we send upstream meets a high standard of quality. That, in turn, makes projects like KernelShark much more maintainable than the software I used to manage in my old line of work.
Perhaps the most important lesson I’ve learned from the open source community is that one of the best ways to ensure your code is of good quality is to make it public, so that everyone can challenge your solution. It is true that this can sometimes have a negative impact on your short-term milestones, but in the end, working this way will help you achieve your long-term goals much faster.