MicroK8s - Canonical Kubernetes
This how-to guide describes how to deploy the MicroK8s charm and connect it with Ceph CSI to consume Ceph storage.
Requirements
- An existing bootstrapped Juju controller on a cloud such as AWS, Azure, or OpenStack (see the example below)
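If you do not have a controller yet, you can bootstrap one first. A minimal sketch, assuming AWS as the cloud and an arbitrary controller name:
juju bootstrap aws microk8s-demo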
Deploy MicroK8s
Create a new model called `k8s-1`, then deploy the `microk8s` charm. Ensure the specified constraints are enough to account for your workloads:
juju add-model k8s-1
juju deploy microk8s --constraints 'mem=4G root-disk=20G' --channel 1.28/stable
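You can watch the deployment with `juju status`. Once the unit is active, a quick sanity check is to wait for the MicroK8s node itself to report ready (assuming the unit is named `microk8s/0`, as in a fresh deployment):
juju exec --unit microk8s/0 -- microk8s status --wait-ready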
Deploy Ceph
Deploy a Charmed Ceph cluster. In this example, we will deploy a simple Ceph cluster with 1 mon and 6 OSDs:
juju deploy ceph-mon --channel quincy/stable --config monitor-count=1
juju deploy ceph-osd --channel quincy/stable -n 2 --storage osd-devices=3,5G
juju integrate ceph-mon ceph-osd
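Once the units settle, you can inspect the Ceph cluster health from the monitor unit. This assumes the `ceph` CLI and admin keyring are available there, as is the case for Charmed Ceph:
juju ssh ceph-mon/0 -- sudo ceph status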
Refer to the Ceph documentation for more details about configuring and managing a production-grade Ceph cluster.
Deploy Ceph CSI
The Ceph CSI charm deploys Ceph CSI on the MicroK8s cluster so that it can consume storage from our Ceph cluster.
Deploy the `ceph-csi` charm. Refer to Configure Ceph CSI for the available configuration options.
juju deploy ceph-csi --channel 1.28/stable --config namespace=kube-system --config provisioner-replicas=1
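You can list all of the charm's configuration options and their current values at any time with:
juju config ceph-csi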
Relate `ceph-csi` with `microk8s` over the `kubernetes-info` interface:
juju integrate ceph-csi:kubernetes-info microk8s
Also, relate `ceph-csi` with `ceph-mon` over the `ceph-client` interface:
juju integrate ceph-csi:ceph-client ceph-mon
Wait for everything to settle. After a while, the `juju status` output should look like this:
Model Controller Cloud/Region Version SLA Timestamp
microk8s zs zerostack/KHY 3.1.5 unsupported 11:01:26+03:00
App Version Status Scale Charm Channel Rev Exposed Message
ceph-csi v3.7.2,v0,v3... active 1 ceph-csi 1.28/stable 36 no Versions: cephfs=v3.7.2, config=v0, rbd=v3.7.2
ceph-mon 17.2.6 active 1 ceph-mon quincy/stable 183 no Unit is ready and clustered
ceph-osd 17.2.6 active 2 ceph-osd quincy/stable 564 no Unit is ready (3 OSD)
microk8s 1.28.1 active 1 microk8s 1.28/stable 213 no node is ready
Unit Workload Agent Machine Public address Ports Message
ceph-mon/0* active idle 1 172.16.100.11 Unit is ready and clustered
ceph-osd/0* active idle 2 172.16.100.124 Unit is ready (3 OSD)
ceph-osd/1 active idle 3 172.16.100.59 Unit is ready (3 OSD)
microk8s/0* active idle 0 172.16.100.217 16443/tcp node is ready
ceph-csi/0* active idle 172.16.100.217 Unit is ready
Machine State Address Inst id Base AZ Message
0 started 172.16.100.217 9ef8f282-d39e-46bd-aa49-be56d52e3f6a ubuntu@22.04 nova ACTIVE
1 started 172.16.100.11 4e747c87-ad81-4bae-b971-f4539a431818 ubuntu@22.04 nova ACTIVE
2 started 172.16.100.124 718cceb2-0dd4-4311-9193-f7102b504f84 ubuntu@22.04 nova ACTIVE
3 started 172.16.100.59 274f87b3-4443-47d0-90e9-b7e38ef142b1 ubuntu@22.04 nova ACTIVE
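At this point the Ceph CSI pods should be running in the namespace configured earlier (`kube-system`). You can verify this from the MicroK8s unit:
juju exec --unit microk8s/0 -- microk8s kubectl get pods -n kube-system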
(Optional) Deploy CephFS
CephFS support enables the `cephfs` storage class, which can be used to provision `ReadWriteMany` PVCs on the MicroK8s cluster.
First, deploy `ceph-fs` and relate it with `ceph-mon`. Refer to the Ceph documentation for instructions on adding CephFS to a production-grade deployment:
juju deploy ceph-fs --channel quincy/stable
juju integrate ceph-fs:ceph-mds ceph-mon
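Once the `ceph-fs` unit comes up, the new filesystem and its MDS should be visible in the Ceph cluster. One way to check, again using the `ceph` CLI on the monitor unit:
juju ssh ceph-mon/0 -- sudo ceph fs status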
Wait for the deployment to settle and check progress with `juju status`. The output should eventually look like this:
Model Controller Cloud/Region Version SLA Timestamp
microk8s zs zerostack/KHY 3.1.5 unsupported 11:04:18+03:00
App Version Status Scale Charm Channel Rev Exposed Message
ceph-csi v3.7.2,v0,v3... active 1 ceph-csi 1.28/stable 36 no Versions: cephfs=v3.7.2, config=v0, rbd=v3.7.2
ceph-fs 17.2.6 active 1 ceph-fs quincy/stable 60 no Unit is ready
ceph-mon 17.2.6 active 1 ceph-mon quincy/stable 183 no Unit is ready and clustered
ceph-osd 17.2.6 active 2 ceph-osd quincy/stable 564 no Unit is ready (3 OSD)
microk8s 1.28.1 active 1 microk8s 1.28/stable 213 no node is ready
Unit Workload Agent Machine Public address Ports Message
ceph-fs/0* active idle 4 172.16.100.134 Unit is ready
ceph-mon/0* active idle 1 172.16.100.11 Unit is ready and clustered
ceph-osd/0* active idle 2 172.16.100.124 Unit is ready (3 OSD)
ceph-osd/1 active idle 3 172.16.100.59 Unit is ready (3 OSD)
microk8s/0* active idle 0 172.16.100.217 16443/tcp node is ready
ceph-csi/1* active idle 172.16.100.217 Unit is ready
Machine State Address Inst id Base AZ Message
0 started 172.16.100.217 9ef8f282-d39e-46bd-aa49-be56d52e3f6a ubuntu@22.04 nova ACTIVE
1 started 172.16.100.11 4e747c87-ad81-4bae-b971-f4539a431818 ubuntu@22.04 nova ACTIVE
2 started 172.16.100.124 718cceb2-0dd4-4311-9193-f7102b504f84 ubuntu@22.04 nova ACTIVE
3 started 172.16.100.59 274f87b3-4443-47d0-90e9-b7e38ef142b1 ubuntu@22.04 nova ACTIVE
4 started 172.16.100.134 fe8a0e52-dff0-4432-bf6b-fac2cff4087f ubuntu@22.04 nova ACTIVE
After deployment is complete, enable CephFS support in the `ceph-csi` charm:
juju config ceph-csi cephfs-enable=true
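You can confirm the option took effect by reading it back:
juju config ceph-csi cephfs-enable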
Using Ceph
First, you can list the available storage classes in the MicroK8s cluster:
juju exec --unit microk8s/0 -- microk8s kubectl get storageclass
The output should look similar to this:
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
ceph-xfs (default) rbd.csi.ceph.com Delete Immediate true 70m
ceph-ext4 rbd.csi.ceph.com Delete Immediate true 70m
cephfs cephfs.csi.ceph.com Delete Immediate true 69m
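`ceph-xfs` is marked as the default storage class. If you prefer a different default, the standard Kubernetes annotation can be switched with `kubectl patch`; for example, to make `cephfs` the default instead (note that the charm may manage this annotation itself, in which case use the charm's configuration):
juju exec --unit microk8s/0 -- microk8s kubectl patch storageclass ceph-xfs -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
juju exec --unit microk8s/0 -- microk8s kubectl patch storageclass cephfs -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'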
As an example, let's create a simple pod with a PVC that uses the `ceph-ext4` storage class:
juju ssh microk8s/0 -- sudo microk8s kubectl create -f - <<EOF
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  storageClassName: ceph-ext4
  accessModes: [ReadWriteOnce]
  resources: { requests: { storage: 1Gi } }
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  volumes:
    - name: pvc
      persistentVolumeClaim:
        claimName: my-pvc
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        - name: pvc
          mountPath: /usr/share/nginx/html
EOF
Shortly afterwards, an RBD volume will be created and mounted into the pod. You can verify that everything is running with:
juju exec --unit microk8s/0 -- sudo microk8s kubectl get pod,pvc -o wide
The output should look similar to this:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/nginx 1/1 Running 0 43s 10.1.204.71 juju-9c4cd0-microk8s-0 <none> <none>
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE
persistentvolumeclaim/my-pvc Bound pvc-79aa802d-1864-4c28-9f97-5c34cf4cfcbc 1Gi RWO ceph-ext4 43s Filesystem
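If you enabled CephFS, the `cephfs` storage class works the same way and additionally supports `ReadWriteMany`. A minimal sketch of a shared claim (the claim name is just an example):
juju ssh microk8s/0 -- sudo microk8s kubectl create -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-shared-pvc
spec:
  storageClassName: cephfs
  accessModes: [ReadWriteMany]
  resources: { requests: { storage: 1Gi } }
EOF
A claim like this can be mounted by multiple pods at once, which is the main reason to use CephFS rather than the RBD-backed classes for shared data.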