What is a Kubernetes Operator?

An ‘operator’ in Kubernetes is a trusted container that drives other containers, simplifying the work of administration. Instead of handcrafting all the YAML for every operation on the workload, you let the operator generate the necessary Kubernetes commands to cover the application lifecycle. Because the operator is software, it can encode best practices and it never gets tired: it is available 24/7 to react to problems, and it can do the tedious work of getting all the low-level details right.

In Kubernetes, there are a *lot* of low-level details. Operators are a way to manage the complexity of application management on Kubernetes by having software do the work.

In this community we extend the idea of operators beyond Kubernetes, to deploy and run our legacy workloads on VMware, bare metal or OpenStack. That legacy estate is huge, and having operators do the work is exciting: it reduces cost and increases the quality of operations. Even though the term ‘operator’ started in Kubernetes, we use it more generally for ‘software that drives software’.

What is the Open Operator Collection?

The Open Operator Collection is an open-source initiative to provide a large number of interoperable, easily integrated operators for common workloads.

A diverse community of specialists in applications, operations and security contributes high-quality operators for these applications to the collection. The collection ensures a consistent experience across these operators, and uses a common Python Operator Framework for seamless integration between operators from different maintainers.

Our primary focus is Kubernetes operators because the complexity of daily operations in a sophisticated multi-cloud K8s environment lends itself to automation. However, we also accept and maintain operators for many workloads on traditional virtualization, public cloud and bare metal compute.

What is an Operator Lifecycle Manager?

An operator lifecycle manager provides a central view of operators in a deployment, the configuration, scale and status of each of them, and the integration lines between them. An operator lifecycle manager keeps track of potential updates and upgrades for each operator and coordinates the flow of events and messages between operators.

The lifecycle manager for the Open Operator Collection works across Kubernetes and traditional machine environments, and can integrate Kubernetes models and operators with machine-based models and operators transparently.

Administrators control operators through the operator lifecycle manager which handles role-based access controls, audit, logging, leadership election, message distribution, event serialization, operator status, updates, upgrades, integration and configuration.

What is devsecops?

Devsecops is the merging of product development, security, and operations. Practitioners encourage the formation of multi-disciplinary teams that consider operational and security aspects of enterprise software during product development. Instead of audits after the fact, or high-level recommendations about security, devsecops teams deliver automated solutions that cover all these elements together.

The Open Operator Collection welcomes experts in applications, operations, and security to encode the most resilient, most efficient and most secure practices for application lifecycle and integration. Making security part of the design process and the operator implementation, and treating the entire lifecycle as code, are the key aspects of devsecops.

Anybody using our operators is automatically working in a secure fashion, because the operators encapsulate knowledge from the wider security community, and they deliver the best operational practices by default.

What is a charm?

A charm is a software package that bundles an operator together with metadata that supports the integration of many operators in a coherent aggregated system.

An operator packaged as a charm means that it is configured, operated and integrated in a standard way regardless of the vendor or the application. Charms enable multi-vendor operator collections with standardised behaviours, reducing the learning curve associated with each operator and creating richer application ecosystems.

How do operators handle integration?

Operators in the collection declare endpoints that represent potential forms of integration. For example, a MySQL operator can say that it can provide a MySQL database, and that it can stream its logs with the rsyslog protocol.

Each endpoint has a type and a direction: it can be ‘inbound’ or ‘outbound’. Two endpoints can only be integrated if they have the same type and opposite directions.
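This matching rule can be sketched in plain Python. The endpoint names, types and directions below are hypothetical, and real operators declare endpoints in their package metadata rather than in code like this; the sketch only illustrates the compatibility check.

```python
# Illustrative model of endpoint matching; all names here are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Endpoint:
    name: str       # e.g. "database" -- purely a label
    type: str       # e.g. "mysql" -- governs the handshake
    direction: str  # "outbound" or "inbound"

def can_integrate(a: Endpoint, b: Endpoint) -> bool:
    """Two endpoints integrate only if the types match and the directions differ."""
    return a.type == b.type and a.direction != b.direction

db_provider = Endpoint(name="database", type="mysql", direction="outbound")
db_consumer = Endpoint(name="backend", type="mysql", direction="inbound")
log_sink = Endpoint(name="logs", type="rsyslog", direction="inbound")

print(can_integrate(db_provider, db_consumer))  # True: same type, opposite directions
print(can_integrate(db_provider, log_sink))     # False: the types differ
```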

Don’t confuse the name of the endpoint with the type of the endpoint. Just because an endpoint is named ‘mysql’ doesn’t mean that it has the type ‘mysql’. The type determines the nature of the handshaking that happens during integration. This allows an operator to have multiple endpoints that integrate the same way (‘the same type’) but for different purposes.

For example, consider an app that offers two REST endpoints, both of which expect the caller to present the right username, password and certificates, but which serve different purposes. It would declare two endpoints with the same type, so relating to either endpoint triggers the same handshake to establish the path, username, password and keys; the purpose of the integration depends on which endpoint you choose.
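The distinction between name and type can be made concrete with a small sketch. The endpoint names, the ‘rest’ type and the handshake fields below are all hypothetical; the point is only that the handshake is keyed by type, while the name conveys the purpose.

```python
# Hypothetical app exposing two endpoints of the same type but with
# different names (and therefore different purposes).
ENDPOINTS = {
    "admin-api":  "rest",   # management traffic
    "public-api": "rest",   # end-user traffic
}

def handshake_fields(endpoint_type: str) -> list:
    """The fields exchanged during integration depend only on the type."""
    fields = {"rest": ["path", "username", "password", "certificate"]}
    return fields[endpoint_type]

# Both endpoints handshake identically, because they share a type:
print(handshake_fields(ENDPOINTS["admin-api"]) == handshake_fields(ENDPOINTS["public-api"]))  # True
```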

When two endpoints are integrated, or related, the operators configure their workloads appropriately for that integration. Here is the simplest example of a relation between the endpoints on two operators:
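A minimal sketch of such a relation, in plain Python rather than through a real lifecycle manager, and with hypothetical application and endpoint names: each operator reacts to the relation by configuring its workload for the peer.

```python
# Toy model of relating one endpoint on each of two operators.
class Operator:
    def __init__(self, name, endpoints):
        self.name = name
        self.endpoints = endpoints  # endpoint name -> (type, direction)
        self.config = {}

    def on_relation(self, endpoint, remote):
        # React to the integration by configuring the workload.
        self.config[endpoint] = f"connected to {remote}"

def relate(op_a, ep_a, op_b, ep_b):
    type_a, dir_a = op_a.endpoints[ep_a]
    type_b, dir_b = op_b.endpoints[ep_b]
    assert type_a == type_b and dir_a != dir_b, "endpoints are not compatible"
    op_a.on_relation(ep_a, op_b.name)
    op_b.on_relation(ep_b, op_a.name)

mysql = Operator("mysql", {"db": ("mysql", "outbound")})
wordpress = Operator("wordpress", {"db": ("mysql", "inbound")})
relate(mysql, "db", wordpress, "db")
print(wordpress.config["db"])  # connected to mysql
```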

And of course, by repeating the process with different endpoints on different operators you can construct a rich application graph, or topology, of multiple operators, each of which drives its own workload and is aware of its integrations in the graph.

Operators learn about the application graph at deployment time, and are expected to handle changes in the application graph dynamically. So you can deploy a topology of applications, and then evolve that topology by adding new applications and integrating them whenever you like.
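The evolving topology can be pictured as a graph whose edges are relations; this toy sketch, with hypothetical application names, shows a new application being deployed and integrated after the initial deployment.

```python
# Toy application graph: applications are nodes, relations are edges.
applications = {"wordpress", "mysql"}
relations = {("wordpress", "mysql")}

# Evolve the topology later: deploy a new application and integrate it.
applications.add("rsyslog")
relations.add(("wordpress", "rsyslog"))
relations.add(("mysql", "rsyslog"))

def neighbours(app):
    """What an operator sees of the graph: the applications it is related to."""
    return ({b for a, b in relations if a == app}
            | {a for a, b in relations if b == app})

print(sorted(neighbours("rsyslog")))  # ['mysql', 'wordpress']
```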