Charmed Operator for MongoDB
Charmed MongoDB K8s Tutorials > Deploy a replica set > 4. Scale your replicas
Scale your replicas
A replica set in MongoDB is a group of processes that copy stored data in order to make a database highly available. Replication provides redundancy, which means the application can provide self-healing capabilities in case one replica fails.
Disclaimer: This tutorial hosts all replicas on the same machine. This should never be done in a production environment.
To enable high availability in a production environment, replicas should be hosted on different servers to maintain isolation.
Summary
- Add replicas to your MongoDB cluster
- Remove replicas from your MongoDB cluster
Add replicas
You can add two replicas to your deployed MongoDB application with:
juju scale-application mongodb-k8s 3
The number is 3 because the scale-application command takes the final number of units rather than the number of units you want to add. Since we already had one replica, the final count after adding two replicas is three.
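The final-count-versus-delta distinction can be sketched in plain shell. The unit counts below mirror this tutorial's state and are illustrative; adjust them for your own deployment:

```shell
# scale-application takes the FINAL number of units, not a delta.
CURRENT_UNITS=1   # units already deployed
UNITS_TO_ADD=2    # replicas we want to add
FINAL_UNITS=$((CURRENT_UNITS + UNITS_TO_ADD))

# Print the command rather than running it, so this snippet is safe
# to try outside a Juju environment.
echo "juju scale-application mongodb-k8s $FINAL_UNITS"
```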
It usually takes several minutes for the replicas to be added to the replica set. You'll know that all three replicas are ready when juju status --watch 1s reports:
Model Controller Cloud/Region Version SLA Timestamp
t6 overlord microk8s/localhost 3.1.6 unsupported 12:49:05Z
App Version Status Scale Charm Channel Rev Address Exposed Message
mongodb-k8s active 3 mongodb-k8s 6/beta 37 10.152.183.161 no Primary
Unit Workload Agent Address Ports Message
mongodb-k8s/0* active idle 10.1.138.17 Primary
mongodb-k8s/1 active idle 10.1.138.22
mongodb-k8s/2 active idle 10.1.138.26
To verify the replica set configuration, you can connect to MongoDB with the mongosh command in the pod. Since your replica set has two additional hosts, you will need to update the hosts in your URI. You can retrieve these host addresses with:
export HOST_IP_1="mongodb-k8s-1.mongodb-k8s-endpoints"
export HOST_IP_2="mongodb-k8s-2.mongodb-k8s-endpoints"
Then recreate the URI using your new hosts, reusing the username, password, database name, and replica set name that you used when you first connected to MongoDB:
export URI=mongodb://$DB_USERNAME:$DB_PASSWORD@$HOST_IP:27017,$HOST_IP_1:27017,$HOST_IP_2:27017/$DB_NAME?replicaSet=$REPL_SET_NAME
Now view and save the output of the URI:
echo $URI
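For reference, here is a self-contained sketch of the final URI shape. The username, password, and database name below are placeholders standing in for the real values you obtained in the earlier steps of this tutorial:

```shell
# Placeholder values -- your real credentials come from earlier steps.
DB_USERNAME="operator"
DB_PASSWORD="secret"
DB_NAME="admin"
REPL_SET_NAME="mongodb-k8s"
HOST_IP="mongodb-k8s-0.mongodb-k8s-endpoints"
HOST_IP_1="mongodb-k8s-1.mongodb-k8s-endpoints"
HOST_IP_2="mongodb-k8s-2.mongodb-k8s-endpoints"

# All three hosts are listed, each with its port, and the replica set
# name is passed as a query parameter.
URI="mongodb://$DB_USERNAME:$DB_PASSWORD@$HOST_IP:27017,$HOST_IP_1:27017,$HOST_IP_2:27017/$DB_NAME?replicaSet=$REPL_SET_NAME"
echo "$URI"
```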
Like earlier, we access mongosh by ssh'ing into one of the Charmed MongoDB K8s hosts:
juju ssh --container=mongod mongodb-k8s/0
While ssh'd into the mongodb-k8s/0 unit, we can access mongosh using the new URI that we saved above:
mongosh <saved URI>
Now type rs.status() and you should see your replica set configuration. It should look something like this:
{
set: 'mongodb-k8s',
date: ISODate("2023-11-09T12:47:12.176Z"),
myState: 1,
term: Long("1"),
syncSourceHost: '',
syncSourceId: -1,
heartbeatIntervalMillis: Long("2000"),
majorityVoteCount: 2,
writeMajorityCount: 2,
votingMembersCount: 3,
writableVotingMembersCount: 3,
optimes: {
lastCommittedOpTime: { ts: Timestamp({ t: 1699534030, i: 2 }), t: Long("1") },
lastCommittedWallTime: ISODate("2023-11-09T12:47:10.212Z"),
readConcernMajorityOpTime: { ts: Timestamp({ t: 1699534030, i: 2 }), t: Long("1") },
appliedOpTime: { ts: Timestamp({ t: 1699534030, i: 2 }), t: Long("1") },
durableOpTime: { ts: Timestamp({ t: 1699534030, i: 2 }), t: Long("1") },
lastAppliedWallTime: ISODate("2023-11-09T12:47:10.212Z"),
lastDurableWallTime: ISODate("2023-11-09T12:47:10.212Z")
},
...
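If you only want a quick health summary rather than the full document, you can filter the rs.status() output. The snippet below is a sketch that greps a hand-made JSON stand-in so it can be tried anywhere; against a live cluster, you could produce real JSON inside mongosh with EJSON.stringify(rs.status()) (EJSON handles extended types like ISODate):

```shell
# Hand-made stand-in for EJSON-serialized rs.status() output.
status='{"set":"mongodb-k8s","members":[
  {"name":"mongodb-k8s-0","stateStr":"PRIMARY"},
  {"name":"mongodb-k8s-1","stateStr":"SECONDARY"},
  {"name":"mongodb-k8s-2","stateStr":"SECONDARY"}]}'

# A healthy 3-node replica set has three members and exactly one PRIMARY.
echo "members:   $(echo "$status" | grep -c '"name"')"
echo "primaries: $(echo "$status" | grep -c '"stateStr":"PRIMARY"')"
```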
Return to original shell
Leave the MongoDB shell by typing exit. You will be back in the container of Charmed MongoDB K8s (mongodb-k8s/0). Exit this container by typing exit again.
You are now at the original shell, where you can interact with Juju and MicroK8s.
Remove replicas
Removing a unit from the application scales the replicas down. Before scaling down, list all the units with juju status. You will see three units: mongodb-k8s/0, mongodb-k8s/1, and mongodb-k8s/2. Each of these units hosts a MongoDB replica.
To remove one unit, define a new final number of units for the scale-application
command:
juju scale-application mongodb-k8s 2
You'll know that the replica was successfully removed when juju status --watch 1s reports:
Model Controller Cloud/Region Version SLA Timestamp
t6 overlord microk8s/localhost 3.1.6 unsupported 12:49:05Z
App Version Status Scale Charm Channel Rev Address Exposed Message
mongodb-k8s active 2 mongodb-k8s 6/beta 37 10.152.183.161 no Primary
Unit Workload Agent Address Ports Message
mongodb-k8s/0* active idle 10.1.138.17 Primary
mongodb-k8s/1 active idle 10.1.138.22
You can check the pods in Kubernetes to confirm that one was removed with the command kubectl get pods --namespace=tutorial:
NAME READY STATUS RESTARTS AGE
modeloperator-69977f8c44-zh7pc 1/1 Running 0 52m
mongodb-k8s-0 2/2 Running 0 49m
mongodb-k8s-1 2/2 Running 0 19m
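As a quick sanity check, you can count the remaining database pods. To keep the snippet runnable without a cluster, it greps a copy of the output above instead of calling kubectl directly; on a live cluster you would pipe kubectl get pods --namespace=tutorial into the same grep:

```shell
# Simulated 'kubectl get pods' output (copied from above).
pods='mongodb-k8s-0   2/2   Running   0   49m
mongodb-k8s-1   2/2   Running   0   19m'

# Count pods whose name starts with mongodb-k8s-; two means the
# scale-down to 2 units worked.
echo "$pods" | grep -c '^mongodb-k8s-'
```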
You can also confirm that the replica was removed by connecting with an updated URI that excludes the removed host.
Next step: 5. Manage passwords