2021: A Look Ahead for Open Source at VMware

A Cultural Shift

2020 slowed things down and caused us to rethink our work and its impact. Culturally, one of the biggest impacts on open source was the transition of all community events to a virtual world. While we may lament the loss of the travel, the crowded rooms and even the lukewarm coffee, what we really miss is the hallway track. Open source communities are built on relationships and the trust between people, and working through meetings, events and email only goes so far. Projects can really suffer when we can’t get into the same room and work through solutions together. The community is having a hard time adjusting to what collaboration might look like going forward; ideating and collaborating will have to take new and creative forms. Together, we will have to figure out how to bridge that gap.

A New Horizon

That said, we also learned that there are no more excuses for not being able to work from home, which is great for open source engineers! It has enabled a more global approach to sourcing talent: people have access to jobs they may not have had before, as companies move online for the foreseeable future. Employers must either embrace the new reality and learn how to operate remotely, or they may cease to exist. This opens the field for developers who want to choose projects and meaningful work based on their passions and interests rather than their geographic location.

Strategic Investment

Among the things shaping how the open source team at VMware will invest in the coming year is the clear need for a secure software supply chain and for software bills of materials (SBOMs). These areas are gaining traction and momentum as the broader industry increasingly recognizes their value. We are starting to see adoption of the work that Nisha Kumar, Rose Judge and the team are delivering with Tern, and we’re getting similar shoutouts from the security folks in the community for the work that Joshua Lock and team are doing with TUF and PyPI. Building products with intrinsic security is a lot easier if that security is built into the open source projects that make up their foundation. We expect to see a lot more engagement from across the open source community in these strategic areas of development in 2021.
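
To make the idea concrete, here is a minimal sketch of what a software bill of materials captures: a structured inventory of the components a piece of software is built from. The record format below is a simplification assumed for illustration; real SBOMs follow standard formats such as SPDX, which tools like Tern can generate from container images.

```python
import hashlib
import json

def sbom_entry(name, version, license_id, content: bytes) -> dict:
    """One simplified SBOM record. Real formats such as SPDX track far
    more metadata, but the core idea is an inventory entry per component."""
    return {
        "name": name,
        "version": version,
        "license": license_id,                          # e.g. an SPDX license ID
        "sha256": hashlib.sha256(content).hexdigest(),  # verifiable identity
    }

if __name__ == "__main__":
    # Hypothetical components; a tool like Tern derives entries like
    # these automatically from the layers of a container image.
    bom = [
        sbom_entry("openssl", "1.1.1i", "OpenSSL", b"<library bytes>"),
        sbom_entry("zlib", "1.2.11", "Zlib", b"<library bytes>"),
    ]
    print(json.dumps(bom, indent=2))
```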

“…work that we’re doing on industry-standard projects that make everything better.”

Linux

With respect to operating systems, Linux is in sustaining mode at this point – it’s not really shiny and new. After all, at 30 years old, it’s nearly middle-aged. Even Linus Torvalds asserts that “all the interesting stuff is outside the kernel.” While all the kernel maintainers would scream bloody murder, that’s what the big penguin said! We are still finding new workloads to support: containers are increasingly becoming a first-class workload on Linux, and we’re trying to find ways to streamline how developers work with them.

Open source engineer Steven Rostedt and team are investigating observability (logging and tracing) across containers: not just across applications, not just across process IDs, but across full containers. How do we synchronize those different systems? Normally we think about applications, but now we must consider how these containers interact. And that requires some changes to Linux and Linux tooling.
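
As a rough illustration of why this is more than a per-process problem, the sketch below (a toy example, not the team’s tooling) groups the processes on a Linux host by cgroup, which is approximately how trace events recorded per PID can be attributed to whole containers. It assumes a Linux /proc filesystem; exact cgroup path formats vary by runtime.

```python
import os
from collections import defaultdict

def cgroup_of(pid: str) -> str:
    """Return the last cgroup path a process belongs to, or '?' on error.
    On cgroup v2 there is a single '0::/path' line; on v1 a process can
    appear in several hierarchies, so taking the last is a simplification."""
    try:
        with open(f"/proc/{pid}/cgroup") as f:
            return f.readlines()[-1].strip().split(":", 2)[2]
    except OSError:
        return "?"

def processes_by_cgroup() -> dict:
    """Map each cgroup path to the PIDs currently running inside it."""
    groups = defaultdict(list)
    for pid in filter(str.isdigit, os.listdir("/proc")):
        groups[cgroup_of(pid)].append(pid)
    return groups

if __name__ == "__main__":
    # Trace events are recorded per PID; grouping PIDs by cgroup is a
    # first approximation of attributing them to whole containers.
    for cgroup, pids in sorted(processes_by_cgroup().items()):
        print(f"{cgroup}: {len(pids)} processes")
```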

Containers

While Kubernetes is becoming the de facto orchestration layer for the cloud, it hasn’t yet defined a full cloud operating system. Key pieces are missing. If you log into a Linux system, you typically know your network system, your logging system and your authentication system, right? It’s present and it’s known. You log into a Kubernetes system and you wonder: Is it physical networking? Is it software-defined networking? How are we doing load balancing? Are we doing that with a service mesh? How do we authenticate? Are we using SPIFFE/SPIRE? And do all of the components use it? There is an ill-defined maelstrom of components that are all heading in roughly the same direction but are not yet aligned. Tim Pepper and team are looking at that whole thing as a system. They are working on the next generation of the cloud system, and they are asking the right questions: Where are the gaps in interoperability? Which components are missing entirely? That way, they can work on making these things more interoperable and standards-compliant.
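
As a toy example of how much of this you have to discover empirically, the sketch below uses the official Kubernetes Python client to peek at the kube-system namespace and report which common networking, DNS and identity components a cluster happens to run. The component names are common defaults assumed here for illustration; any given cluster may run none of them, some of them, or something else entirely, which is exactly the problem.

```python
from kubernetes import client, config

# Assumes a reachable cluster and a local kubeconfig.
config.load_kube_config()
v1 = client.CoreV1Api()

# Substrings of component names we might hope to find; this list is a
# guess, because there is no standard answer to "what runs my network?"
COMPONENTS = ["coredns", "kube-proxy", "calico", "cilium", "flannel", "spire"]

pods = v1.list_namespaced_pod("kube-system").items
for marker in COMPONENTS:
    found = [p.metadata.name for p in pods if marker in p.metadata.name]
    print(f"{marker}: {', '.join(found) if found else 'not found'}")
```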

The Edge

We’re starting to see some resolution around what we think of as “the edge,” moving from vague thick/middle/thin concepts toward terms of how far it sits from the main data center. IoT has been a promising technology for a long time, yet many of the ideas about where it can create value have not reached full potential; they never answered the problem because they never defined it. I think we will see some refinement in the understanding of what an edge is and what its requirements are. One of the risks is continued fragmentation, with projects forking major components to make them fit the new domain but often having no motivation to pull those changes back into an upstream project in a sustainable way. Kubernetes forks are not sustainable on a project of that scale. We hope to see some consolidation and additional definition in that space.

Machine Learning

While we talk about artificial intelligence and machine learning, we’re ultimately still talking about large-scale pattern matching that we don’t understand. I think the problem space is ill-defined and the solution space is not well understood. One of the biggest challenges with machine learning is making it explainable. For example, you rely on your car to stop at a stop sign and it does great 99,999 times out of 100,000, but there’s that one time where it sees a red wagon on the back of a truck, thinks it’s a stop sign, slams on the brakes and causes a collision. We don’t understand why it does that, not really.
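
One simple, concrete flavor of explainability is occlusion sensitivity: mask one region of the input at a time and watch how the model’s confidence moves. The sketch below is a minimal illustration using only NumPy and a placeholder model function – both stand-ins assumed here, not any particular framework’s API.

```python
import numpy as np

def model_confidence(image: np.ndarray) -> float:
    """Placeholder for a real classifier's 'stop sign' score.
    Here we pretend the model keys on overall brightness."""
    return float(image.mean())

def occlusion_map(image: np.ndarray, patch: int = 8) -> np.ndarray:
    """Slide a gray patch over the image and record how much the
    confidence drops when each region is hidden. Large drops mark
    the regions the model is actually relying on."""
    base = model_confidence(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.5  # gray square
            heat[i // patch, j // patch] = base - model_confidence(occluded)
    return heat

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((32, 32))
    img[8:16, 8:16] = 1.0  # a bright, 'sign-like' region
    print(occlusion_map(img).round(3))
```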

The tools we use need to expose the explainability of whatever we do with machine learning, or it will further exacerbate our complexity problem. I think complexity is becoming a kind of gremlin for the systems we’re building, to the point where very few people understand enough about the system as a whole to meaningfully debug a problem. If we all sit at the outer edge of a solution built on layers upon layers of code that we’ve never had any experience with, our only answer when it breaks is that “it’s not a bug, that’s just the way it works.”

There are a couple of approaches here. In safety-critical systems, standards require explainability, yet those standards are hopelessly inadequate when applied to modern software packages containing 15 million lines of code! The approach taken there is one of empirical evidence for the stabilization of bugs in a known system over time – a very complex solution that, for the last several years, has struggled to gain enough momentum and understanding to become something usable in practice. The other approach is dramatic simplification, where we don’t stack on top of these layers. With this you get more bespoke, yet cost-prohibitive, solutions, and the talent required for such creative work is increasingly difficult to come by. And in the end, we end up adding more complexity in order to provide more transparency.

“…we’re adding more complexity in order to provide more transparency.”

Looking Forward

One of the challenges that I hope the open source community tackles is working toward systems-level thinking. Each person should understand the role their piece plays in the larger system and be able to describe that larger system at an architectural level. Broadening our understanding of the contract that exists among the interacting pieces of the system could reduce the number of failures we introduce when we focus narrowly on our own piece of a project. Every piece of software is an ecosystem – ignore it at your peril. Context matters – it always does.