For me, the best open source project to come out of 2018 has been Knative, a serverless platform built on top of Kubernetes. Not just for the platform itself, but for the entire development paradigm that it encourages as well. Event-driven development isn’t new, but Knative lays the groundwork to build an ecosystem around eventing.
If you’re not familiar with Knative, any documentation you read on it will break it down into three distinct categories:
- Build – How do I build and package my code?
- Serving – How do I serve requests to my code? How does it scale up and down?
- Eventing – How can my code be triggered by various events?
Now, this isn't meant to be a "Getting Started With Knative" post (more on that in the near future), but what I've been thinking about most lately is how developers can whittle down their code as they leverage more and more of what Knative has to offer. This has been a hot topic on Twitter as of late, especially around the time of KubeCon. A common question I've noticed has been, "If you're writing a Dockerfile, is it really a serverless platform?" Others, though, feel that packaging your code as a container may be the most logical solution because it's portable, it's comprehensive, and it has all of your dependencies. There's no shortage of strongly held opinions and people oh-so-eager to argue them.
Instead of adding fuel to this fire, let's simply take a look at what options Knative gives developers, and gradually work down the amount of code we're writing. We'll start with the most verbose example: a prebuilt container that we build ourselves. From there, we'll whittle our codebase down smaller and smaller: first removing the need to build our own container, then the need to write our own Dockerfile, and finally the need to write our own configuration. Most importantly, we'll look at the power of the Pivotal Function Service (PFS) and how it allows developers to focus on code rather than configuration.
All of the code we'll look at is in two git repos: knative-hello-world and pfs-hello-world.
Prebuilt Docker Container
The first scenario we'll look at is providing Knative a prebuilt container image, already uploaded to our container registry of choice. Most Hello World samples you'll see with Knative take the route of building and managing the container directly. It makes sense: it's easy to digest and doesn't introduce many new concepts, making it a great place to start. The concept is straightforward: you give Knative a container that exposes a port, and it will handle everything else. It doesn't care if your code is written in Go, or Ruby, or Java; it will just take incoming requests and send them to your app.
Let's start with a basic node.js hello world app that uses the Express web framework.
```javascript
const express = require("express");
const bodyParser = require("body-parser");

const app = express();
app.use(bodyParser.text({ type: "*/*" }));

app.post("/", function(req, res) {
  res.send("Hello, " + req.body + "!");
});

const port = process.env.PORT || 8080;
app.listen(port, function() {
  console.log("Server started on port", port);
});
```
Nice and straightforward. This code will set up a web server, listen on port 8080 (unless the PORT environment variable says otherwise), and respond to HTTP POST requests by saying hello. Of course, there's also the package.json file that defines a few things (how to start the app, dependencies, etc.), but that's a bit outside the scope of what we're looking at. The other half is the Dockerfile that describes how to package it all up into a container.
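For reference, here's roughly what that package.json might contain. This is a hypothetical sketch (the entry point name and versions are made up for illustration), not the exact file from the repo:

```json
{
  "name": "knative-hello-world",
  "scripts": {
    "start": "node index.js"
  },
  "dependencies": {
    "express": "^4.16.0",
    "body-parser": "^1.18.0"
  }
}
```

The start script is what an `npm start` entrypoint would ultimately run.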
```dockerfile
FROM node:10.15.1-stretch-slim
WORKDIR /usr/src/app
COPY . .
RUN npm install
ENV PORT 8080
EXPOSE $PORT
ENTRYPOINT ["npm", "start"]
```
Again, nothing surprising here. We base our image off of the official node.js image, copy our code to the container and install the dependencies, then tell it how to run our app. All that's left is to upload it to Docker Hub.
```shell
$ docker build . -t brianmmcclain/knative-hello-world:prebuilt
$ docker push brianmmcclain/knative-hello-world:prebuilt
```
All of this should look very familiar if you've ever run an application on something like Kubernetes. Toss your code in a container and let the scheduler deal with making sure it stays up. We can tell Knative about this container, plus a little bit of metadata, and it will handle everything from there. It will scale up the number of instances as requests grow, scale down to zero, route requests, wire up events: the whole nine yards. All we really need to tell Knative is what to call our app, what namespace to run it in, and where the container image lives.
```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: knative-hello-world-prebuilt
  namespace: default
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: docker.io/brianmmcclain/knative-hello-world:prebuilt
```

```shell
$ kubectl apply -f 01-prebuilt.yaml
```
A few moments later, we'll see a new pod spin up, ready to serve requests, and eventually scale back down to zero after a little while of not receiving any traffic. We can POST some data and see our response. First, let's get the ingress IP to our Kubernetes cluster and assign it to the $SERVICE_IP variable:
```shell
$ export SERVICE_IP=`kubectl get svc istio-ingressgateway -n istio-system -o jsonpath="{.status.loadBalancer.ingress[*].ip}"`
```
And then use the IP to send a request to our service, setting the Host header in our request:
```shell
$ curl -XPOST http://$SERVICE_IP -H "Host: knative-hello-world-prebuilt.default.example.com" -d "Prebuilt"

Hello, Prebuilt!
```
The Kaniko Container Builder
So that was all well and good, but we haven't even begun to touch the "Build" part of Knative. Literally, we didn't touch it; we built the container on our own. You can read all about builds and how they work in the Knative docs. To sum it up: Knative has a concept called "Build Templates," which I like to describe as sharable logic for going from code to container. Most of these Build Templates remove the need for us to build our own container or upload it to a container registry. The most basic of these templates is probably the Kaniko Build Template.
As the name might suggest, it's based off of Google's Kaniko, a tool for building container images inside a container, with no dependency on a running Docker daemon. You feed the Kaniko container image your Dockerfile and a place to upload the result, and it spits out a container image. Instead of pulling down our code, building our container locally, uploading it to Docker Hub and then pulling it back down into Knative, we can have Knative do this all for us with just a little bit more configuration.
Before we do this, though, we need to tell Knative how to authenticate against our container registry. To do this, we'll first need to create a secret in Kubernetes so that we can authenticate to Docker Hub, and then create a service account to use that secret and run our build. Let's start by creating the secret:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: dockerhub-account
  annotations:
    build.knative.dev/docker-0: https://index.docker.io/v1/
type: kubernetes.io/basic-auth
data:
  # 'echo -n "username" | base64'
  username: dXNlcm5hbWU=
  # 'echo -n "password" | base64'
  password: cGFzc3dvcmQ=
```
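Before moving on, it's worth pausing on those data values. A throwaway Node snippet (illustrative only) mirroring the echo commands in the comments makes their nature clear:

```javascript
// Base64 is a transport encoding, not a security measure.
// Equivalent to `echo -n "username" | base64` from the comments above.
const encoded = Buffer.from("username").toString("base64");
console.log(encoded); // dXNlcm5hbWU=

// Anyone who can read the Secret can decode it just as easily:
const decoded = Buffer.from(encoded, "base64").toString();
console.log(decoded); // username
```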
Our username and password are sent to Kubernetes as base64-encoded strings. (For the security-minded folks reading, this is a transport mechanism, not a security one. For more information on how Kubernetes stores secrets, make sure to check out the docs on encrypting secret data at rest.) Once applied, we'll then create a service account named build-bot and tell it to use this secret when pushing to Docker Hub:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-bot
secrets:
  - name: dockerhub-account
```
For more information on authentication, make sure to check out the Knative docs on how authentication works.
The nice thing about Build Templates is that anyone can create and share them with the community. We can tell Knative to install this Build Template by passing it—you guessed it—some YAML:
```shell
$ kubectl apply -f https://raw.githubusercontent.com/knative/build-templates/master/kaniko/kaniko.yaml
```
Then we need to add a little bit more to our app's YAML:
```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: knative-hello-world-kaniko
  namespace: default
spec:
  runLatest:
    configuration:
      build:
        serviceAccountName: build-bot
        source:
          git:
            url: https://github.com/BrianMMcClain/knative-hello-world.git
            revision: master
        template:
          name: kaniko
          arguments:
            - name: IMAGE
              value: docker.io/brianmmcclain/knative-hello-world:kaniko
      revisionTemplate:
        spec:
          container:
            image: docker.io/brianmmcclain/knative-hello-world:kaniko
```
While it's a bit hard to compare directly, we've actually only added one section, the build section, to our YAML. What we've added might seem like a lot, but it's actually not bad when you take the time to look at it piece by piece:
- serviceAccountName: The Knative auth docs walk through the process of setting up a service account. All this really amounts to is a Kubernetes secret that can authenticate to our container image registry, encapsulated in a service account.
- source: Where our code lives. For example, a git repository.
- template: Which Build Template to use. In our case, the Kaniko Build Template.
Let's send a request to the new version of our application to make sure everything is still in order:
```shell
$ curl -XPOST http://$SERVICE_IP -H "Host: knative-hello-world-kaniko.default.example.com" -d "Kaniko"

Hello, Kaniko!
```
So while this may be a bit more upfront configuration, the tradeoff is that now we don't have to build or push our own container image each time we update our code. Instead, Knative will handle these steps for us!
The Buildpack Build Template
So, the whole point of this blog is how we can write less code. And while we've removed an operational component of our deployments with the Kaniko Build Template, we're still maintaining a Dockerfile and a configuration file on top of our code. But what if we could ditch that Dockerfile?
If you come from a PaaS background, you're probably used to simply pushing up your code, watching some magic happen, and suddenly having a working application. You don't care how this is accomplished. All you know is that you don't have to write a Dockerfile to get your code into a container; it just works. In Cloud Foundry, this is done with buildpacks, a framework for providing the runtime and dependencies to your application.
We actually luck out twice here. Not only is there a Build Template to use buildpacks, there's also a buildpack for Node.js. Just like the Kaniko Build Template, we'll install the buildpack Build Template in Knative:
```shell
$ kubectl apply -f https://raw.githubusercontent.com/knative/build-templates/master/buildpack/buildpack.yaml
```
Now, let's take a look at what our YAML looks like using the Buildpack Build Template:
```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: knative-hello-world-buildpack
  namespace: default
spec:
  runLatest:
    configuration:
      build:
        serviceAccountName: build-bot
        source:
          git:
            url: https://github.com/BrianMMcClain/knative-hello-world.git
            revision: master
        template:
          name: buildpack
          arguments:
            - name: IMAGE
              value: docker.io/brianmmcclain/knative-hello-world:buildpack
      revisionTemplate:
        spec:
          container:
            image: docker.io/brianmmcclain/knative-hello-world:buildpack
```
This is very similar to when we used the Kaniko Build Template. In fact, let's diff the two:
```
< name: knative-hello-world-kaniko
---
> name: knative-hello-world-buildpack

< name: kaniko
---
> name: buildpack

< value: docker.io/brianmmcclain/knative-hello-world:kaniko
---
> value: docker.io/brianmmcclain/knative-hello-world:buildpack

< image: docker.io/brianmmcclain/knative-hello-world:kaniko
---
> image: docker.io/brianmmcclain/knative-hello-world:buildpack
```
So what's the difference? Well, for starters, we can completely ditch our Dockerfile. The Buildpack Build Template will analyze our code, see that it's a Node.js application, and build a container for us by downloading the Node.js runtime and our dependencies. While the Kaniko Build Template freed us up from managing the Docker container lifecycle, the Buildpack Build Template removes the need to manage the Dockerfile at all.
```shell
$ kubectl apply -f 03-buildpack.yaml
service.serving.knative.dev "knative-hello-world-buildpack" configured

$ curl -XPOST http://$SERVICE_IP -H "Host: knative-hello-world-buildpack.default.example.com" -d "Buildpacks"

Hello, Buildpacks!
```
Pivotal Function Service
Let's take stock of what remains of our codebase. We have our Node code that responds to POST requests, using the Express framework to set up a web server. The package.json file defines our dependencies. And while it's not exactly code, we're also maintaining the YAML that defines our Knative service. We can keep whittling, though.
Enter Pivotal Function Service (PFS), Pivotal's commercial serverless offering built on top of Knative. PFS aims to remove the need to manage anything other than your code. That includes the web server we've been managing ourselves in our codebase. With PFS, our code looks like this:
```javascript
module.exports = x => "Hello, " + x + "!";
```
That's it. No Dockerfile, no YAML. Just one line of code. Of course, like every good Node developer, we still have our package.json file, albeit without the dependency on Express. Once deployed, riff (the open source project underpinning PFS) will take this one line of code and wrap it up in its own managed base container image. It will package it together with the logic required to invoke our code, and serve it up like any other function running on Knative.
The PFS CLI makes it extremely easy to deploy our function. We'll give our function the name pfs-hello-world, give it the link to our GitHub repository where our code lives, and tell it to upload the resulting container image to our private container registry.
```shell
$ pfs function create pfs-hello-world --git-repo https://github.com/BrianMMcClain/pfs-hello-world.git --image $REGISTRY/$REGISTRY_USER/pfs-hello-world --verbose
```
A few moments later we'll see our function up and running, which we can send requests to like any other Knative function:
```shell
$ curl -XPOST http://$SERVICE_IP -H "Host: pfs-hello-world.default.example.com" -H "Content-Type: text/plain" -d "PFS"

Hello, PFS!
```
Or, even easier, use the PFS CLI to invoke our function:
```shell
$ pfs service invoke pfs-hello-world --text -- -d "PFS CLI"

Hello, PFS CLI!
```
There we have it! Upwards of 23 lines of YAML, 14 lines of code, and a 10-line Dockerfile, reduced down to one simple line of code.
Interested in kicking the tires on PFS? To request early access, just fill out this quick form!
What's Next?
More Build Templates. This is one of the most exciting features of Knative, because it has so much potential to open up a community of custom-built templates for all sorts of scenarios. Today, you can use templates for tools like Jib and BuildKit. There's already a pull request to update the Buildpack Build Template to support Cloud Native Buildpacks.
2018 was an exciting start, but I'm even more excited to see the Knative community grow in 2019. We can certainly expect more Build Templates and more event sources from the community. Not only that, but we can expect better integration with existing technology, such as Spring, which already has great support for functions.
If you’re looking to start developing with Knative, Bryan Friedman and I are hosting a great webinar on 2/21 to talk about developing serverless applications on Kubernetes with Knative. We’ll dive into Knative’s three components, how they work and how you as a developer can leverage them to make writing code even better.
If you happen to find yourself in Philadelphia April 2-4, join us at CF Summit! Bryan and I will be talking about building serverless applications on Knative, or just come say hi if you see us around!