By Alexandre Courouble and John Hawley
Previously, we’ve explored the challenge of measuring progress in open source projects and looked forward to the recent CHAOSScon meeting, held right before the North American Open Source Summit (OSS). CHAOSS, for those who may not know, is the Community Health Analytics Open Source Software project. August’s CHAOSScon marked the first time that the project had held its own, independent pre-OSS event.
After attending the event, we thought it might be interesting to share our takeaways from the conference and reflect on where we stand with regard to the challenges that we outlined both in our previous posts and in our CHAOSScon talk, “The Pains and Tribulations of Finding Data.”
For those who weren’t able to attend CHAOSScon and would like to see our talk, it’s now available for viewing here. We started with an overview of current solutions for gaining visibility into open source data and then outlined what we view as the challenges currently standing in the way of creating solid progress metrics for open source development. We’ll go back to the talk at the end of the post, but first, our overall takeaways.
One thing we appreciated about having a dedicated CHAOSScon was that the event attracted a mix of longtime colleagues and collaborators as well as people new to the community. In particular, there was a strong presence from corporate open source teams. Engineers from Twitter, Comcast, Google and Bitergia shared how they have been tackling different kinds of open source data challenges. Hearing about their trials and tribulations validated our impression that we share a number of basic data measurement problems worth addressing as a community.
It was good, too, to see CHAOSS welcoming these corporate perspectives. Open source conferences often eschew that kind of engagement, but it is useful to hear how teams are solving problems for themselves out in the wild. Here’s hoping that this marks the start of a new trend.
A pair of workshops in the afternoon offered another useful takeaway. One was on “Establishing Metrics That Matter for Diversity & Inclusion” and the other was a report from the CHAOSS working group on Growth-Maturity-Decline metrics.
It was clear from the latter workshop that we now have a good number of quantifiable data points for establishing where a project sits on the growth-maturity-decline continuum. Diversity metrics, however, present a much trickier challenge. That data lives mostly in mailing lists and board discussions and is currently explored mainly through surveys. Still, the issue provoked a really interesting discussion full of smart suggestions, and we’re excited to see what new solutions the community comes up with in the future.
Turning to what we learned from our own talk, we were thrilled to be speaking in front of a similarly engaged audience. We opened with a shout-out to the open source projects that have already created tooling around data acquisition, and we were lucky enough to have maintainers from many of those projects in the room with us. It was good of our audience to indulge a presentation heavy on questions and light on answers. They seemed genuinely curious about the issues we were raising and interested in figuring out how to fundamentally address them – some even started working on potential solutions as we were speaking.
We didn’t arrive at any grand consensus on solutions, but it’s clear that there is active community interest in trying to at least understand the problem of open source metrics and how we might be able to solve it. That’s certainly inspiring us to keep working on the issue—after all, things will only get better as more ideas get discussed, researched, tried and retried. This is not something we expect to be magically fixed in a couple of steps, but we’re excited to keep reaching out to the colleagues we interacted with at the conference and see what develops.
Our final takeaway is a classic example of conference serendipity. We arrived knowing about GrimoireLab, a tool for tracking data about multiple open source projects on a single dashboard—we even referenced it in our talk. But what we didn’t know is how easy it is to stand up your own instance. We attended a presentation in which several groups shared how they had successfully deployed GrimoireLab, and we’re now deploying it internally ourselves to track the status of our open source projects. Talk about a win-win situation.
Stay tuned to the Open Source Blog for future conference recaps and follow us on Twitter (@vmwopensource).