mongodb-k8s

Charmed Operator for MongoDB

Channel       Revision   Published      Runs on
6/stable      61         15 Nov 2024    Ubuntu 22.04
6/candidate   61         15 Nov 2024    Ubuntu 22.04
6/beta        61         15 Nov 2024    Ubuntu 22.04
6/edge        62         12 Feb 2025    Ubuntu 22.04
5/edge        39         14 Dec 2023    Ubuntu 22.04
juju deploy mongodb-k8s --channel 6/edge

How to deploy MongoDB K8s

This is a guide on how to deploy Charmed MongoDB K8s as a replica set or a sharded cluster.

Create a MongoDB replica set

You can create one or multiple replicas at once when deploying MongoDB K8s.

Deploy a single replica

To deploy a single unit of MongoDB as a replica set, run:

juju deploy mongodb-k8s --trust

The --trust flag is necessary to grant the application access to the Kubernetes cluster.
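
You can follow the deployment progress with juju status, for example:

juju status --watch 1s

Wait until the unit reports an active status.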

Deploy multiple replicas

To deploy MongoDB with multiple replicas, specify the number of desired replicas with the -n option:

juju deploy mongodb-k8s -n <number_of_replicas> --trust
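
For example, to deploy three replicas:

juju deploy mongodb-k8s -n 3 --trust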

Create a MongoDB sharded cluster

To create a sharded cluster, we must first deploy each cluster component separately with a manually defined role, then integrate them.

Deploy cluster components with a single replica

Deploying a shard application will assign it one replica by default.

To deploy a sharded cluster with two shards, run:

juju deploy mongodb-k8s --config role="config-server" <config_server_name> --trust
juju deploy mongodb-k8s --config role="shard" <shard_name_0> --trust
juju deploy mongodb-k8s --config role="shard" <shard_name_1> --trust

This will deploy the latest stable release. To specify a different version, use the --channel flag, e.g. --channel=6/beta.
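
For example, with the hypothetical application names config-server, shard0, and shard1 (the --trust flag grants Kubernetes access, as above):

juju deploy mongodb-k8s --config role="config-server" config-server --trust
juju deploy mongodb-k8s --config role="shard" shard0 --trust
juju deploy mongodb-k8s --config role="shard" shard1 --trust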

Deploy cluster components with multiple replicas

To set the number of replicas for a shard or config server during deployment, use the -n flag.

To deploy a shard and config-server with 3 replicas each, run:

juju deploy mongodb-k8s --config role="shard" <shard_name> -n 3 --trust
juju deploy mongodb-k8s --config role="config-server" <config_server_name> -n 3 --trust

This will deploy the latest stable release. To specify a different version, use the --channel flag, e.g. --channel=6/beta.

Note that you can also change the number of replicas for a shard or config server after deployment. See Scale replicas and shards > How to add shards to a cluster.
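
For example, on Kubernetes you can change the replica count of an existing shard application (hypothetical name shard0) to five units with juju scale-application:

juju scale-application shard0 5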

Integrate cluster components

To create a cluster, integrate the shard applications with the config-server application.

For the case of two shards and one config server, you would run:

juju integrate <config_server_name>:config-server <shard_name_0>:sharding
juju integrate <config_server_name>:config-server <shard_name_1>:sharding
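
Continuing the hypothetical example above (applications named config-server, shard0, and shard1):

juju integrate config-server:config-server shard0:sharding
juju integrate config-server:config-server shard1:sharding

You can then check the resulting relations with juju status --relations.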

Deploy with Terraform

Make sure you have Terraform 1.8+ installed on your machine. You can install Terraform or OpenTofu via a snap.
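
For example, to install OpenTofu from its snap (snap name and confinement assumed; check snap info opentofu first):

sudo snap install opentofu --classic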

These Terraform modules use the Terraform Juju provider; see the Juju provider documentation for details. For more information about Terraform itself, refer to the official docs.

Charmed MongoDB K8s provides a base module that bundles the base resources of the Charmed MongoDB K8s solution, as well as two product modules that bundle the resources and integrations for replica set and sharded cluster deployments. The product modules also integrate the applications needed for backups (s3-integrator), client connections (data-integrator), and encryption (self-signed-certificates), plus a mongos router in the case of a sharded cluster.

Deploy a replica set with Terraform

Clone the repository and go to the replica set product Terraform module:

git clone https://github.com/canonical/mongodb-k8s-operator.git
cd mongodb-k8s-operator/terraform/modules/replica_set

Then deploy Charmed MongoDB K8s using the standard Terraform commands:

terraform init
terraform plan -out myfile
terraform apply "myfile"

Then you can watch and wait for your replica set to become available.
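
One way to do this is with juju wait-for (Juju 3.x), assuming the application keeps the default name mongodb-k8s:

juju wait-for application mongodb-k8s --query='status=="active"'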

Deploy a sharded cluster with terraform

Clone the repository and go to the sharded cluster product Terraform module:

git clone https://github.com/canonical/mongodb-k8s-operator.git
cd mongodb-k8s-operator/terraform/modules/sharded_cluster

Then deploy Charmed MongoDB K8s using the standard Terraform commands:

terraform init
terraform plan -out myfile
terraform apply "myfile"

Then you can watch and wait for your sharded cluster to become available.