Your first Kubernetes operator

1. Overview

2. Create the charm

Install charmcraft with sudo snap install charmcraft.

charmcraft init

Make a clean directory for your operator (mkdir hello-world) and use charmcraft init in that directory to set up the base file structure of a charm:

$ mkdir hello-world
$ cd hello-world
$ charmcraft init
All done.
There are some notes about things we think you should do.
These are marked with ‘TODO:’, as is customary. Namely:
  README.md: fill out the description
  README.md: explain how to use the charm
  metadata.yaml: fill out the charm's description
  metadata.yaml: fill out the charm's summary

Now you’ll have a file tree that includes all the key elements of your charmed operator:

$ tree
.
├── actions.yaml
├── config.yaml
├── LICENSE
├── metadata.yaml
├── README.md
├── requirements-dev.txt
├── requirements.txt
├── run_tests
├── src
│   └── charm.py
└── tests
    ├── __init__.py
    └── test_charm.py

2 directories, 11 files

That is a minimal charmed operator! It has code we can extend, with tests; requirements files for our Python dependencies; some YAML files that describe the charm (metadata.yaml for a name and description as well as key behavioural information, actions.yaml for day-2 operations like backup and restore that you want to offer, config.yaml for, unsurprisingly, any configuration options you want to offer on your operator); and other project files like the README and LICENSE.

The only things you really need to get right are src/, which holds the operator code, and metadata.yaml, which defines the behaviour of the operator on K8s.

This baby operator won’t do anything special, just log that it was installed properly.

The operator code in Python

#!/usr/bin/env python3

import logging

from ops.charm import CharmBase
from ops.main import main

logger = logging.getLogger(__name__)


class MyCharm(CharmBase):
    def __init__(self, *args):
        super().__init__(*args)
        self.framework.observe(self.on.install, self.on_install)

    def on_install(self, event):
        logger.info("Congratulations, the charm was properly installed!")


if __name__ == "__main__":
    main(MyCharm)

A charmed operator is pure Python.

We have a couple of imports: logging is from Python’s stdlib, and ops is the Python Operator Framework.

The charm itself is a straightforward Python class which inherits from CharmBase and which we pass to the framework’s main. This is all we need to start using the Operator Framework.

A charm class is a collection of event handling methods. The Operator Framework delivers events to your charm for things like install, remove, upgrade, configure, actions like backup/restore, and integration with other operators. Any events you want to handle become methods on the class.

In the minimal charm class, after calling the parent class’s __init__, we tell the framework to call our on_install method when the install event is triggered. That method just logs that everything succeeded.
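Under the hood, framework.observe is the classic observer pattern: handlers are registered against event names, and the framework calls them when the event fires. As a rough sketch in plain Python (this TinyFramework is a toy illustration, not the real ops API):

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


class TinyFramework:
    """A toy stand-in for the framework's event dispatch.

    Purely illustrative of the observer pattern; the real ops
    framework has a much richer event and lifecycle model.
    """

    def __init__(self):
        self._observers = {}

    def observe(self, event_name, handler):
        """Register a handler to be called whenever event_name is emitted."""
        self._observers.setdefault(event_name, []).append(handler)

    def emit(self, event_name):
        """Deliver the event to every registered handler, in order."""
        for handler in self._observers.get(event_name, []):
            handler(event_name)


framework = TinyFramework()
framework.observe("install", lambda event: logger.info("handled %s", event))
framework.emit("install")  # calls the handler registered above
```

The real framework works out which events your deployment needs, but the registration-then-dispatch shape is the same.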

Metadata to define behaviour

You don’t deploy your operator directly. Instead, you ask the operator lifecycle manager to deploy it. The metadata.yaml file tells the OLM about the operator, so that it can launch it correctly and manage it successfully.

Let’s edit that file and make it our own:

name: hello-world
summary: My first operator from the tutorial
description: |
  A very simple charm to demonstrate charmcraft and the
  Python operator framework.
series: [kubernetes]

The series element in that metadata.yaml is a list of platforms that the operator will work on. In this case, the operator works on Kubernetes. You can also make operators for traditional applications on Linux or Windows, but we won’t go into the detail of that here.

Python dependencies

You may have spotted the requirements.txt file in that tree. It’s where we declare any Python dependencies. For this tutorial we only need the Operator Framework, which is ops, and it should be there already thanks to charmcraft init.

$ cat requirements.txt
ops

You can of course add other PyPI packages; charmcraft will fetch them and bundle them into the charm for you as part of the build process.

3. Build the charm

Your operator is pure code. To share it with someone else, it helps to have a package for it, which we call a charm. A charm of an operator is like a deb of a binary: a package that can be shared, published and retrieved.

Let’s build our charm:

$ charmcraft build
Done, charm left in 'hello-world.charm'

A charm is really just a zip file of the code and all its dependencies, as well as the metadata. charmcraft build just fetches the dependencies, compiles any modules, makes sure you have all the right pieces of metadata, and zips it up for easy distribution.
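Because a charm really is just a zip archive, you can poke at it with ordinary tools. For instance, a small (hypothetical) helper using Python’s stdlib zipfile module:

```python
import zipfile


def charm_members(path):
    """Return the names of the files packed inside a charm.

    A charm is a plain zip archive, so the stdlib zipfile module
    can open it directly; path may be a filename or a file object.
    """
    with zipfile.ZipFile(path) as zf:
        return zf.namelist()


# e.g. charm_members("hello-world.charm") would list metadata.yaml,
# the src/ code, the generated dispatch and hooks, and the bundled venv/.
```

The charm_members name is just for illustration; unzip -l, as below, gives you the same view from the shell.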

Take a look inside. You’ll recognise your files, and then a virtualenv with the Python dependencies clearly mapped out:

$ unzip -l hello-world.charm
Archive:  hello-world.charm
  Length      Date    Time    Name
---------  ---------- -----   ----
        4  2020-11-12 20:48   requirements.txt
      429  2020-11-12 20:48   actions.yaml
      379  2020-11-12 20:48   README.md
      302  2020-11-12 20:48   run_tests
    35147  2020-11-12 20:48   LICENSE
       93  2020-11-12 22:06   dispatch
      222  2020-11-12 20:48   metadata.yaml
      305  2020-11-12 20:48   config.yaml
       27  2020-11-12 20:48   requirements-dev.txt
       99  2020-11-12 20:48   .flake8
       93  2020-11-12 22:06   hooks/install
       93  2020-11-12 22:06   hooks/upgrade-charm
       93  2020-11-12 22:06   hooks/start
     1112  2020-11-12 20:48   src/charm.py
     1170  2020-11-12 20:48   tests/test_charm.py
        0  2020-11-12 20:48   tests/__init__.py
       92  2020-11-12 22:06   venv/ops-1.0.1.dist-info/WHEEL
        4  2020-11-12 22:06   venv/ops-1.0.1.dist-info/top_level.txt
    11358  2020-11-12 22:06   venv/ops-1.0.1.dist-info/LICENSE.txt
     1718  2020-11-12 22:06   venv/ops-1.0.1.dist-info/RECORD
        4  2020-11-12 22:06   venv/ops-1.0.1.dist-info/INSTALLER
     5531  2020-11-12 22:06   venv/ops-1.0.1.dist-info/METADATA
      103  2020-11-12 22:06   venv/PyYAML-5.3.1.dist-info/WHEEL
     1101  2020-11-12 22:06   venv/PyYAML-5.3.1.dist-info/LICENSE
       11  2020-11-12 22:06   venv/PyYAML-5.3.1.dist-info/top_level.txt
     2456  2020-11-12 22:06   venv/PyYAML-5.3.1.dist-info/RECORD
        4  2020-11-12 22:06   venv/PyYAML-5.3.1.dist-info/INSTALLER
     1758  2020-11-12 22:06   venv/PyYAML-5.3.1.dist-info/METADATA
      ...                     (the ops and yaml module sources and bytecode)
---------                     -------
   812625                     84 files

4. Deploy the charm

You will first deploy the operator lifecycle manager. If you already have Juju running on a K8s cluster, you can skip ahead to Make a new model below.

Get access to a Kubernetes cluster

There are so many ways to get a Kubernetes cluster that we are not going to list them here. For a local K8s on your workstation we recommend MicroK8s, but really, any conformant cluster should work.

You know you are good to go when you can type:

kubectl config get-clusters

You should see a cluster, or list of clusters, under NAME.

Launch the Juju OLM

You’ll need the operator lifecycle manager running on your K8s cluster.

$ sudo snap install juju --classic
$ juju add-k8s mycluster --cluster-name=my_cluster_name

Now you should have your cluster in the output of juju list-clouds.


Start the operator lifecycle manager on your K8s cluster:

$ juju bootstrap mycluster

You should now have a namespace on K8s for this OLM controller:

kubectl get namespaces
NAME                            STATUS   AGE
kube-system                     Active   10d
kube-public                     Active   10d
kube-node-lease                 Active   10d
default                         Active   10d
controller-XYZ                  Active   2m

Make a new model

Operators are deployed in groups, called models. Think of a model as a canvas where you can paint the software you want to deploy and integrate. Let’s add a new model:

$ juju add-model hello-world
Added 'hello-world' model on <XYZ> with credential 'XXX' for user 'XXX'

On Kubernetes, each model gets its own namespace in the cluster. So you should see a hello-world namespace in your Kubernetes:

kubectl get namespaces
NAME                            STATUS   AGE
kube-system                     Active   10d
kube-public                     Active   10d
kube-node-lease                 Active   10d
default                         Active   10d
controller-XYZ                  Active   5m
hello-world                     Active   79s

And of course the new model in Juju:

$ juju models
Controller: XXX

Model         Cloud/Region  Type        Status      Units  Access  Last connection
controller    XXX           kubernetes  available   -       admin  just now
hello-world*  XXX           kubernetes  available   -       admin  never connected

And deploy your operator!

Switch to a different terminal and execute the following to see the logging:

$ juju debug-log
controller-0: 22:37:02 INFO juju.worker.apicaller [48d45f] "controller-0" successfully connected to "localhost:17070"
controller-0: 22:37:02 INFO juju.worker.logforwarder config change - log forwarding not enabled
controller-0: 22:37:02 INFO juju.worker.logger logger worker started
controller-0: 22:37:02 INFO juju.worker.pruner.statushistory status history config: max age: 336h0m0s, max collection size 5120M for hello-world (48d45fd3-bc24-467e-8f88-2378dda24211)
controller-0: 22:37:02 INFO juju.worker.pruner.action status history config: max age: 336h0m0s, max collection size 5120M for hello-world (48d45fd3-bc24-467e-8f88-2378dda24211)

Switch back to the original terminal, and let’s deploy our charm:

juju deploy ./hello-world.charm

You can watch the evolving status of the deployment with:

$ watch -n 1 juju status --color

Wait a few moments for the Juju OLM to do its magic, and at some point a line very similar to the following will appear in the juju debug-log terminal:

unit-hello-world-1: 13:26:10 INFO unit.hello-world/1.juju-log Congratulations, the charm was properly installed!

So: you have successfully created a minimal charmed operator, packaged it as a charm, deployed the operator lifecycle manager (OLM), asked it to create a new model on the K8s cluster, and had the OLM deploy your operator into that model.

To tear this all down:

$ juju destroy-model hello-world

You should now see only a controller model in your set of juju models:

$ juju models
Controller: XYZ

Model       Cloud/Region   Type        Status      Units  Access  Last connection
controller  cluster        kubernetes  available   -       admin  just now

Note the controller name at the top of that listing. Now you can tear down that OLM as well, with:

$ juju destroy-controller XYZ

That’s all!