There’s a class of security compromise we commonly refer to as “remote root.” This is pretty much the worst kind of security flaw a system can have: it allows somebody somewhere out in the world to execute arbitrary code on your computer with elevated privileges, as if they were the administrator of your machine. You’re effectively handing your computer over to strangers on the internet for whatever purpose they might intend. It’s not something we do deliberately, and when it does happen, through a coding or configuration error, we treat it as the most severe of vulnerabilities.
And yet…
An increasingly common pattern in software distribution is to make things super easy for somebody to download your deliverable, install it and begin to kick the metaphorical tires for evaluation purposes. We humans like endorphins. Let’s get those endorphins flowing! This takes the form of recommending you open up a shell and run a command that involves:
curl ${url} | sudo bash
An artifact is downloaded from some URL on the internet, handed to a bash shell, and invoked as root via the “super-user do” utility. The thing is installed and running, your time to endorphins is short, and so it counts as a successful, positive developer experience.
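Unrolled, that one-liner is roughly equivalent to the following (${url} is a placeholder):

curl -fsSL "${url}" -o /tmp/install.sh   # fetch arbitrary code from the internet
sudo bash /tmp/install.sh                # execute it as root, sight unseen
rm -f /tmp/install.sh                    # with the actual pipe there isn't even
                                         # a file left behind to inspect or audit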
And yet…
This is effectively a remote root for the developer whose code you’re running. Or, more accurately, a remote root for the developers whose code you’re running: typically, the artifacts you download and run are the output of building millions of lines of open source dependency code, with contributions from thousands – if not millions – of individuals. Unauthenticated remote execution of arbitrary code. CVSS 10… Talk about endorphins.
So what can we do about this?
Can we just remove the sudo?
curl ${url} | bash
There’s no privilege escalation. Except there is. Whatever is running can quietly run sudo itself. It should prompt for your password then, except maybe it doesn’t: sudo caches your credentials for a short window after each use, and there are bugs like the recent sudo CVE-2021-3156. To be clear though, the risk does not depend on a sudo CVE. Even without sudo in the picture, every unpatched issue and every zero-day vulnerability is a point from which an attacker may escalate. Having sudo in the picture simply makes that privilege escalation more immediate. The question is more: how do you constrain and contain?
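To make that concrete, here’s a deliberately harmless sketch of how a script you piped to a plain, unprivileged bash could probe for root without ever prompting you:

#!/usr/bin/env bash
# Illustrative sketch only, not an exploit. 'sudo -n' never prompts for a
# password; it succeeds only if your credentials are still cached from a
# recent sudo invocation (or passwordless sudo is configured).
if sudo -n true 2>/dev/null; then
    # Cached credentials found: we are silently root-capable.
    sudo -n sh -c 'echo "running as uid $(id -u)"'
fi
# Otherwise an attacker falls back to probing for unpatched local
# privilege-escalation bugs.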
Can’t we just document it?
Sometimes the curl | sudo bash pattern is used with some level of recognition of the risks. The doc author will blithely add a parenthetical (“don’t do this in production”). But what is production? Nowadays we see more of a continuum. People talk of DevOps and GitOps. We have software supply chains. You and your desktop (where you’ve just allowed arbitrary remote code execution) are part of that supply chain, and perhaps now its weakest link. This is the opposite of constraining and containing risk; it just punts the risk to the end user.
What we need is a different level of trust in software distribution. Whether the software is a bash script, a native executable, an RPM/Deb, a container or a VM, we need to think about where it comes from and how we receive and validate it. Some options:
- Authenticated download of source code, then build from source in a hermetic sandbox: Most software won’t build in this scenario. You can ‘git clone’ or ‘curl’ the source, but you need to audit and provide the third-party dependencies your local build pulls in, which may differ from those the original developer(s) intended. Maybe you don’t need ‘sudo’ at all for an install into ~/bin, but some things assume otherwise and need to be debugged or fixed. Forget about time to endorphins.
- Authenticated binary download from the official project source: A dedicated build team at the project manages a trusted, controlled build environment and publishes signed artifacts (see the verification sketch after this list). This could work, but too few examples of it exist, and you need a track record of quality operations before you can trust community build and release teams.
- Authenticated binary download from a distribution: A dedicated build team curates releases for multiple components. This works, and we have many examples (e.g., Ubuntu/apt-get, Fedora/dnf, and the centralized update mechanisms from Apple, Microsoft and other OS vendors). But rolling updates and leading-edge distribution channels are not a common enough pattern, so there’s usually a large lag in time to endorphins.
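What might an authenticated binary download look like in practice? Here’s a minimal sketch, where the release URL, file names and signing key are hypothetical stand-ins for a real project’s:

#!/usr/bin/env bash
# Minimal sketch of an authenticated download; all names are hypothetical.
set -euo pipefail
base="https://releases.example.org/tool/v1.2.3"
curl -fsSLO "${base}/tool-linux-amd64.tar.gz"
curl -fsSLO "${base}/tool-linux-amd64.tar.gz.sha256"
curl -fsSLO "${base}/tool-linux-amd64.tar.gz.sig"

# Integrity: does the artifact match the published checksum?
sha256sum -c tool-linux-amd64.tar.gz.sha256

# Authenticity: was it signed by the project's release key?
# (Assumes you fetched and vetted that key out of band beforehand.)
gpg --verify tool-linux-amd64.tar.gz.sig tool-linux-amd64.tar.gz

# Only then unpack -- into ~/bin, with no sudo required.
mkdir -p ~/bin
tar -xzf tool-linux-amd64.tar.gz -C ~/bin

The signature check is only as strong as your trust in the key, which is exactly the point: the trust decision becomes explicit rather than buried in a URL.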
Notably, this list probably should not include framework options, because these are platforms where nearly anybody might upload roughly any content for subsequent redistribution.
The distinction is that the prior list specifically places trust in certain, hopefully trustworthy, humans: dedicated humans, actively curating content.
This idea of trusted curation is growing. I’ve spoken about it at KubeCon and worked in Kubernetes SIG Release with Stephen Augustus and the release engineering subproject on improving our project’s official artifacts. Nisha Kumar and Joshua Lock also spoke at KubeCon and have a series of VMware Open Source blog posts (parts 1, 2 and 3) on the curation process for containers and the associated software supply chain. And Google has recently published its ideas for bolstering confidence in, and the trustworthiness of, the content we share, though whether that approach can work, or is even sufficient, remains an open question. All of this represents progress, but it is still very much a work in progress.
In the meantime, think about risk when you tell your users to curl | sudo bash, and ask whether there might be a better way. Think about risk when somebody tells you to curl | sudo bash something from the internet, and demand a better way. Together we can break this loop and build the better way.
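As a user, even a small change to the ritual helps. One possible pattern (illustrative; the file name and install flag are hypothetical):

curl -fsSLo install.sh "${url}"     # 1. download to a file first
less install.sh                     # 2. actually read what you are about to run
bash install.sh --prefix="$HOME"    # 3. run it unprivileged, if the installer allows

It isn’t curation and it authenticates nothing, but it at least gives you a chance to look before you leap.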