MicroK8s

Canonical Kubernetes
Channel        Revision  Published    Runs on
latest/edge    242       02 Dec 2024  Ubuntu 22.04, Ubuntu 20.04
latest/edge    241       02 Dec 2024  Ubuntu 22.04, Ubuntu 20.04
latest/edge    240       02 Dec 2024  Ubuntu 22.04, Ubuntu 20.04
latest/edge    239       02 Dec 2024  Ubuntu 22.04, Ubuntu 20.04
latest/edge    238       02 Dec 2024  Ubuntu 22.04, Ubuntu 20.04
latest/edge    237       02 Dec 2024  Ubuntu 22.04, Ubuntu 20.04
legacy/stable  124       17 Aug 2023  Ubuntu 22.04, Ubuntu 20.04
legacy/stable  121       17 Aug 2023  Ubuntu 22.04, Ubuntu 20.04
legacy/edge    124       10 Aug 2023  Ubuntu 22.04, Ubuntu 20.04
legacy/edge    125       10 Aug 2023  Ubuntu 22.04, Ubuntu 20.04
legacy/edge    123       10 Aug 2023  Ubuntu 22.04, Ubuntu 20.04
legacy/edge    122       10 Aug 2023  Ubuntu 22.04, Ubuntu 20.04
legacy/edge    121       10 Aug 2023  Ubuntu 22.04, Ubuntu 20.04
1.28/stable    213       20 Sep 2023  Ubuntu 22.04, Ubuntu 20.04
1.28/edge      218       19 Sep 2023  Ubuntu 22.04, Ubuntu 20.04
1.28/edge      217       19 Sep 2023  Ubuntu 22.04, Ubuntu 20.04
1.28/edge      216       19 Sep 2023  Ubuntu 22.04, Ubuntu 20.04
1.28/edge      215       19 Sep 2023  Ubuntu 22.04, Ubuntu 20.04
1.28/edge      213       19 Sep 2023  Ubuntu 22.04, Ubuntu 20.04
juju deploy microk8s --channel edge

Platforms: Ubuntu 22.04, Ubuntu 20.04

This how-to guide describes how to use the MicroK8s charm to deploy and manage MicroK8s clusters.

Requirements

  • An existing bootstrapped Juju controller on a cloud such as AWS, Azure, OpenStack, or LXD (see the sketch below if you still need to bootstrap one).
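
If you do not have a controller yet, a minimal sketch of bootstrapping one looks like this (the cloud and controller name below are only examples; substitute your own):

# list the clouds Juju knows about
juju clouds

# bootstrap a controller on one of them
juju bootstrap aws my-controller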

Deploy a single-node cluster

Create a new model called k8s-1, then deploy the microk8s charm. Make sure the specified constraints are sufficient for your workloads:

juju add-model k8s-1
juju deploy microk8s --constraints 'mem=4G root-disk=20G' --channel 1.28/stable

# expose port 16443 so the MicroK8s API server can be reached with a kubeconfig
juju expose microk8s

You can check the progress of the deployment with juju status. After some time, it should look like this:

Model  Controller  Cloud/Region   Version  SLA          Timestamp
k8s-1  zs          zerostack/KHY  3.1.5    unsupported  19:25:47+03:00

App       Version  Status  Scale  Charm     Channel      Rev  Exposed  Message
microk8s  1.28.1   active      1  microk8s  1.28/stable  213  no       node is ready

Unit         Workload  Agent  Machine  Public address  Ports      Message
microk8s/0*  active    idle   0        172.16.100.206  16443/tcp  node is ready

Machine  State    Address         Inst id                               Base          AZ    Message
0        started  172.16.100.206  ccef3b37-8696-4e8f-8d7f-fff3392748f6  ubuntu@22.04  nova  ACTIVE
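
Once the unit reports active, you can also fetch a kubeconfig from the leader and point a local kubectl at the exposed API server port (16443). A minimal sketch, assuming the unit's public address is reachable from your machine:

# write the cluster's kubeconfig to a local file
juju exec --unit microk8s/leader -- microk8s config > kubeconfig

# use it with a local kubectl
kubectl --kubeconfig ./kubeconfig get nodes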

You can also use juju exec to run kubectl commands:

juju exec --unit microk8s/leader -- microk8s kubectl get node

The kubectl output should look like this:

NAME                  STATUS   ROLES    AGE   VERSION
juju-131248-k8s-1-0   Ready    <none>   26m   v1.28.1

Scale to a 3-node cluster

To scale the cluster to 3 nodes, use juju add-unit:

# add two more units to microk8s
juju add-unit microk8s -n 2
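
While the new machines come up, you can keep watching the model, for example:

# refresh the status output every 5 seconds
juju status --watch 5s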

This will spin up two new machines, install MicroK8s on them, and form a 3-node cluster. You can follow the detailed progress with juju debug-log. Once the cluster has formed, the status should look like this:

Model  Controller  Cloud/Region   Version  SLA          Timestamp
k8s-1  zs          zerostack/KHY  3.1.5    unsupported  19:48:47+03:00

App       Version  Status  Scale  Charm     Channel      Rev  Exposed  Message
microk8s  1.28.1   active      3  microk8s  1.28/stable  213  no       node is ready

Unit         Workload  Agent  Machine  Public address  Ports      Message
microk8s/0*  active    idle   0        172.16.100.206  16443/tcp  node is ready
microk8s/1   active    idle   1        172.16.100.180  16443/tcp  node is ready
microk8s/2   active    idle   2        172.16.100.85   16443/tcp  node is ready

Machine  State    Address         Inst id                               Base          AZ    Message
0        started  172.16.100.206  ccef3b37-8696-4e8f-8d7f-fff3392748f6  ubuntu@22.04  nova  ACTIVE
1        started  172.16.100.180  db1384ed-6b95-408a-8d63-e74175314168  ubuntu@22.04  nova  ACTIVE
2        started  172.16.100.85   83aca2ff-bb52-4b05-9a94-375a3501865d  ubuntu@22.04  nova  ACTIVE

We can also verify that the MicroK8s cluster has formed and now has 3 nodes:

juju exec --unit microk8s/leader -- microk8s kubectl get node

now returns:

NAME                  STATUS   ROLES    AGE   VERSION
juju-131248-k8s-1-0   Ready    <none>   30m   v1.28.1
juju-131248-k8s-1-1   Ready    <none>   53s   v1.28.1
juju-131248-k8s-1-2   Ready    <none>   54s   v1.28.1
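
With three control-plane nodes, MicroK8s should also enable its high-availability datastore; a minimal check from the leader:

# look for 'high-availability: yes' in the output
juju exec --unit microk8s/leader -- microk8s status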

Add worker nodes

The 3 nodes act as control-plane nodes by default, but they can also run workloads. For larger environments, it is desirable to limit the control plane to an odd number of nodes (e.g. 3 or 5) and add further nodes as workers. Worker nodes do not run any of the control plane services.

To add worker nodes, deploy a second application (named microk8s-worker here) from the same charm and set role=worker:

# deploy 3 microk8s worker nodes
juju deploy microk8s microk8s-worker --channel 1.28/stable --config role=worker -n 3
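
You can confirm which role the new application was configured with:

# should print 'worker'
juju config microk8s-worker role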

When deploying with role=worker, the charm will stay in a waiting state until it is related to a control plane:

Model  Controller  Cloud/Region   Version  SLA          Timestamp
k8s-1  zs          zerostack/KHY  3.1.5    unsupported  19:59:47+03:00

App              Version  Status   Scale  Charm     Channel      Rev  Exposed  Message
microk8s         1.28.1   active       3  microk8s  1.28/stable  213  no       node is ready
microk8s-worker           waiting      3  microk8s  1.28/stable  213  no       waiting for control plane

Unit                Workload  Agent      Machine  Public address  Ports      Message
microk8s-worker/0   waiting   idle       3        172.16.100.74              waiting for control plane
microk8s-worker/1*  waiting   idle       4        172.16.100.110             waiting for control plane
microk8s-worker/2   waiting   idle       5        172.16.100.161             waiting for control plane
microk8s/0*         active    idle       0        172.16.100.206  16443/tcp  node is ready
microk8s/1          active    idle       1        172.16.100.180  16443/tcp  node is ready
microk8s/2          active    idle       2        172.16.100.85   16443/tcp  node is ready

Machine  State    Address         Inst id                               Base          AZ    Message
0        started  172.16.100.206  ccef3b37-8696-4e8f-8d7f-fff3392748f6  ubuntu@22.04  nova  ACTIVE
1        started  172.16.100.180  db1384ed-6b95-408a-8d63-e74175314168  ubuntu@22.04  nova  ACTIVE
2        started  172.16.100.85   83aca2ff-bb52-4b05-9a94-375a3501865d  ubuntu@22.04  nova  ACTIVE
3        started  172.16.100.74   d5c83472-1e25-4980-9f2f-b5599f5035aa  ubuntu@22.04  nova  ACTIVE
4        started  172.16.100.110  e3b7ee86-804b-4d31-b4b9-83b2b2e1f770  ubuntu@22.04  nova  ACTIVE
5        started  172.16.100.161  ff2a84f9-2dba-413d-b803-bda3d41de9c7  ubuntu@22.04  nova  ACTIVE

To proceed, integrate microk8s-worker with microk8s so that the worker nodes can join the cluster:

# 'microk8s' will act as the control plane for 'microk8s-worker'
juju integrate microk8s:workers microk8s-worker:control-plane
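
You can confirm the relation was created by including the relations section in the status output:

# the relations section should list microk8s:workers together with microk8s-worker:control-plane
juju status --relations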

After some time, the juju status output should look like this:

Model  Controller  Cloud/Region   Version  SLA          Timestamp
k8s-1  zs          zerostack/KHY  3.1.5    unsupported  20:01:00+03:00

App              Version  Status  Scale  Charm     Channel      Rev  Exposed  Message
microk8s         1.28.1   active      3  microk8s  1.28/stable  213  no       node is ready
microk8s-worker  1.28.1   active      3  microk8s  1.28/stable  213  no       node is ready

Unit                Workload  Agent  Machine  Public address  Ports      Message
microk8s-worker/0   active    idle   3        172.16.100.74              node is ready
microk8s-worker/1*  active    idle   4        172.16.100.110             node is ready
microk8s-worker/2   active    idle   5        172.16.100.161             node is ready
microk8s/0*         active    idle   0        172.16.100.206  16443/tcp  node is ready
microk8s/1          active    idle   1        172.16.100.180  16443/tcp  node is ready
microk8s/2          active    idle   2        172.16.100.85   16443/tcp  node is ready

Machine  State    Address         Inst id                               Base          AZ    Message
0        started  172.16.100.206  ccef3b37-8696-4e8f-8d7f-fff3392748f6  ubuntu@22.04  nova  ACTIVE
1        started  172.16.100.180  db1384ed-6b95-408a-8d63-e74175314168  ubuntu@22.04  nova  ACTIVE
2        started  172.16.100.85   83aca2ff-bb52-4b05-9a94-375a3501865d  ubuntu@22.04  nova  ACTIVE
3        started  172.16.100.74   d5c83472-1e25-4980-9f2f-b5599f5035aa  ubuntu@22.04  nova  ACTIVE
4        started  172.16.100.110  e3b7ee86-804b-4d31-b4b9-83b2b2e1f770  ubuntu@22.04  nova  ACTIVE
5        started  172.16.100.161  ff2a84f9-2dba-413d-b803-bda3d41de9c7  ubuntu@22.04  nova  ACTIVE

Verify with:

juju exec --unit microk8s/leader -- microk8s kubectl get node

The output should now list all six nodes:

NAME                  STATUS   ROLES    AGE   VERSION
juju-131248-k8s-1-1   Ready    <none>   13m   v1.28.1
juju-131248-k8s-1-2   Ready    <none>   13m   v1.28.1
juju-131248-k8s-1-3   Ready    <none>   58s   v1.28.1
juju-131248-k8s-1-0   Ready    <none>   42m   v1.28.1
juju-131248-k8s-1-5   Ready    <none>   56s   v1.28.1
juju-131248-k8s-1-4   Ready    <none>   56s   v1.28.1
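
To see where workloads land now that workers are present, you can schedule a few test pods and check their node assignments. A sketch; the deployment name and image are only examples:

# spread a few test pods across the cluster and see which nodes they run on
juju exec --unit microk8s/leader -- microk8s kubectl create deployment hello --image=nginx --replicas=3
juju exec --unit microk8s/leader -- microk8s kubectl get pods -o wide

# clean up the test deployment
juju exec --unit microk8s/leader -- microk8s kubectl delete deployment hello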
