Kubernetes has become the default orchestration tool for large-scale containerised workloads, but not every containerised workload needs such a heavyweight cluster. For development, proof-of-concept, and testing workloads, k3s can provide a fully compliant Kubernetes environment without the need for a multi-node cluster.

Installing k3s

To install k3s on a systemd-based Linux distribution, simply run:

curl -sfL https://get.k3s.io | sh -

If piping scripts from curl straight into a shell makes you uncomfortable, you can download the script to a file and inspect it before running it.
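For example (the filename here is just an illustration):

```shell
# Download the installer to a file so it can be reviewed first.
curl -sfL https://get.k3s.io -o install-k3s.sh

# Read through the script before executing it.
less install-k3s.sh

# Once satisfied, run it (the script needs root to install the service).
sh install-k3s.sh
```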

K3s is automatically configured to start on boot, and the installation includes management tools such as kubectl and an uninstall script. Kubectl is available to use immediately via a kubeconfig file written to /etc/rancher/k3s/k3s.yaml.
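A quick sanity check after installation might look like this (the output will vary with your host):

```shell
# The bundled wrapper reads /etc/rancher/k3s/k3s.yaml automatically.
sudo k3s kubectl get nodes

# A standalone kubectl can use the same file via KUBECONFIG.
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get nodes
```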

Accessing kubectl outside the cluster

By taking a copy of the kubeconfig at /etc/rancher/k3s/k3s.yaml and replacing the loopback address with the IP address of the remote host, you can manage your k3s cluster remotely. This is useful if you are using k3s as a remote development cluster.
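A sketch of the copy-and-edit, assuming the k3s host is reachable at 192.168.1.50 as user "user" (both are placeholders); by default the kubeconfig's server entry points at 127.0.0.1:

```shell
# Fetch the kubeconfig from the k3s host (user and IP are placeholders).
scp user@192.168.1.50:/etc/rancher/k3s/k3s.yaml ~/.kube/k3s.yaml

# Repoint the server entry from the loopback address to the host's IP.
sed -i 's/127\.0\.0\.1/192.168.1.50/' ~/.kube/k3s.yaml

# Use the copied config for this shell session.
export KUBECONFIG=~/.kube/k3s.yaml
kubectl get nodes
```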

Helm 3

As Helm 3 no longer requires Tiller, k3s and Helm are best of friends: no additional configuration is required.
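For instance, installing a chart against k3s needs nothing beyond pointing Helm at the k3s kubeconfig (the chart and release name below are just examples):

```shell
# Helm 3 talks to the API server directly through the kubeconfig;
# there is no Tiller to deploy into the cluster first.
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

# Install a chart exactly as you would on any other cluster.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-nginx bitnami/nginx
```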

k3s as a PoC development environment

K3s is not a full cluster, and as your production environment likely is one, you must still test your application against a full Kubernetes implementation before release. In the proof-of-concept phase, however, k3s provides a complete Kubernetes API against which to check that your application works the way you intended.

k3s for testing Kubernetes code

As k3s implements the full Kubernetes API, it is an excellent tool for testing that the code deploying your Kubernetes objects does what you expect, without requiring a full cluster to be stood up. Because k3s is so lightweight, you can create a k3s environment far quicker than a full Kubernetes cluster, reducing the time taken to complete automated testing in a Continuous Delivery pipeline.
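A minimal CI step might look like this sketch (the deploy/ directory and the timeout are assumptions about your project):

```shell
# Stand up a throwaway k3s instance on the CI runner.
curl -sfL https://get.k3s.io | sh -
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

# Apply the manifests produced by your deployment code.
kubectl apply -f deploy/

# Fail the pipeline if the deployments never become available.
kubectl wait --for=condition=available --timeout=120s deployment --all

# Tear the instance down once the tests have run.
/usr/local/bin/k3s-uninstall.sh
```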