Back in 2019 we published a review of the most relevant serverless frameworks available for Kubernetes. That article became one of the most visited in our blog in the past two years, so we decided to return to this subject and provide an update for our readers.
TL;DR: Serverless in Kubernetes in 2022 means, to a large extent, Knative.
What’s in a Word
The word “Serverless” is polarizing.
Robert Scoble, one of the first tech influencers, uttered it for the first time fourteen years ago, as James Potter reported recently. In those times, “Serverless” just meant “not having servers and being an AWS EC2 customer”. Because, yes, companies used to have their own physical servers back then. Amazing, isn’t it?
Fast forward to 2022, and the CNCF Serverless Landscape has grown to such an extent that it can be very hard to figure out what “serverless” truly means.
Even though for some it just represents the eternal return of 1970s-style batch computing, in the past five years the word “Serverless” has taken on a different and very specific meaning.
The winning definition centers on the complete abstraction of the infrastructure required to run individual pieces of software, at scale, on a cloud infrastructure provider.
Or, in less buzzword-y terms, just upload your code, and let the platform figure out how to run it for you: the famous “FaaS”, also known as Function as a Service.
The IaaS Front
The three major Infrastructure as a Service providers in the world offer their own, more-or-less-proprietary version of the Serverless paradigm: AWS Lambda, Azure Functions, and Google Cloud Run (which complemented the previous Google Cloud Functions service). These are three different approaches to the subject of FaaS, each with its advantages and caveats.
Some companies, like A Cloud Guru, have successfully embraced the Serverless architecture from the very start (in this case, based on AWS Lambda), creating cost-effective solutions with incredible scalability.
But one of the aforementioned caveats, and not a small one for that matter, is platform lock-in. Portability has always been a major concern for enterprise computing: if building apps on AWS Lambda is an interesting prospect, could we move them to a different provider later on?
Well, we now have an answer to that question, thanks to our good old friend: the container.
The Debate Is Over
Almost exactly three years ago, Anthony Skipper wrote:
We will say it again… packaging code into containers should be considered a FaaS anti-pattern!
Containers or not? This was still a big debate at the time of our original article in 2019.
Some frameworks like Fission and IaaS services such as AWS Lambda and Google Cloud Functions did not require developers to package their apps as containers; just upload your code and watch it run. On the other hand, OpenFaaS and Knative-based offerings did require containers. Who would win this fight?
The world of Serverless solutions in 2022 has decided that wrapping functions in containers is the way to go. Even AWS Lambda started offering this option in December 2020. This is a huge move, allowing enterprises to run their code in whichever infrastructure they would like to.
In retrospect, the market has chosen wisely. Containers are now a common standard, allowing the same code to run unchanged on anything from a Raspberry Pi to your laptop to an IBM mainframe. It was a natural choice, and in hindsight only a matter of time before it happened.
Even better, with increased industry experience, container images have gotten smaller and smaller, thanks to Alpine-based, scratch-based, and distroless images. Being lightweight allows containers to start and stop almost instantly, and makes scaling applications faster and easier than ever.
And this choice turned out to benefit one specific framework among all: Knative.
The Age of Knative
In the Kubernetes galaxy, Knative has slowly but steadily imposed itself as the core infrastructure for Kubernetes serverless workloads.
In 2019, our article compared six different mechanisms to run serverless payloads on Kubernetes.
Of those, Fn Project and Kubeless have simply been abandoned. Other frameworks suffered the same fate: Riff has disappeared, just like Gestalt and its parent company Galactic Fog. IronFunctions moved away from Kubernetes to its own PaaS product. Funktion has been sandboxed and abandoned; Leveros is abandoned too; and BlueNimble shows little activity.
On the other hand, new players have appeared in the serverless market: Rainbond, for example; or Nuclio, targeting the scientific computation market.
But many new contenders are based on Knative: apart from TriggerMesh, which we already mentioned in 2019, we now have Kyma, Knix, and Red Hat’s OpenShift 4 Serverless, all powered by Knative.
Interest in Knative is growing steadily these days: CERN uses it. IBM is talking about it. The Serverless Framework has a provider for it. Even Google Cloud Run is based on it! Which shouldn’t be surprising, knowing that Knative was originally created by Google, just like Kubernetes.
And now Knative has just been accepted as a CNCF incubating project!
Even though Knative is not exactly a FaaS per se, it deserves the top spot in our 2022 review of FaaS-on-K8s technologies: it is the platform upon which other serverless offerings are built, and it receives huge support from the major names in the cloud-native industry.
Getting Started with Knative
Want to see Knative in action? Getting started with Knative on your laptop now is easier than ever.
- Install kind.
- Run the following command on your terminal:
$ curl -sL install.konk.dev | bash
To work with Knative objects on your cluster, install the kn command-line tool. Once you have launched your new Knative-powered kind cluster, create a file called knative-service.yaml with the contents below:
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    metadata:
      name: hello-world
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          ports:
            - containerPort: 8080
          env:
            - name: TARGET
              value: "World"
And then just apply it:
$ kubectl apply -f knative-service.yaml
The kn service list command should now display your “hello” service, which should become available after a few seconds. Once it’s ready, you can invoke it simply with curl:
$ curl http://hello.default.127.0.0.1.sslip.io
(If you prefer to use Minikube, you can follow this tutorial instead.)
Thanks to Knative, developers can roll out new versions of their services (called “revisions” in Knative terminology) while the old ones are still running, and even distribute traffic among them. This can be very useful in A/B testing sessions, for example. Knative services can be triggered by a large array of events, with great flexibility.
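As a sketch of what traffic splitting looks like, a Knative Service spec can name its revisions and distribute traffic between them with a traffic block. The example below extends the “hello” service from earlier; the hello-world-v2 revision name and the 80/20 split are illustrative choices, not values from this article:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    metadata:
      name: hello-world-v2        # deploying this template creates a new revision
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "World v2"   # illustrative change between revisions
  traffic:
    - revisionName: hello-world   # the earlier revision keeps most traffic
      percent: 80
    - revisionName: hello-world-v2
      percent: 20                 # canary share for the new revision
```

Applying this with kubectl apply rolls out the new revision while the old one keeps serving, letting you shift the percentages gradually as an A/B test or canary progresses.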
A full introduction to Knative is outside of the scope of this review, so here are some resources we recommend to learn everything about Knative serving and eventing:
- The excellent and funny “Knative in Action” (2021) book by Jacques Chester, available for free courtesy of VMware.
- A free, full introduction to Knative (July 2021) by Sebastian Goasguen, the founder of TriggerMesh; a video and its related source code are provided as well.
- And to top it off, the “Knative Cookbook” (April 2020) by Burr Sutter and Kamesh Sampath, also available for free, courtesy of Red Hat.
Interested in Knative and Red Hat OpenShift Serverless? Get in touch with us and let us help you in your FaaS journey!