Add and remove shards
Shards spread data evenly across the cluster, allowing a database to scale horizontally over multiple machines. This part of the tutorial will teach you how to scale your cluster by adding and removing shards.
Summary
- Add shards to your MongoDB cluster
- Remove shards from your MongoDB cluster
Disclaimer: This tutorial hosts all shards on the same machine. This should never be done in a production environment.
To enable high availability in a production environment, replicas should be hosted on different servers to maintain isolation.
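For illustration only, on a real cloud you could give each shard several units at deploy time so that its replicas land on separate machines (and, on clouds that support availability zones, pin them further with a zones constraint). A minimal sketch; the application name shard-prod is hypothetical:
juju deploy mongodb --config role="shard" -n 3 shard-prod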
Add shards
Sharded clusters cannot be scaled via Juju the way replica sets can, so before adding a shard to the cluster, we first need to create it. To create a new shard, deploy a new Charmed MongoDB application with the config option role set to shard:
juju deploy mongodb --config role="shard" shard2
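Before moving on, you can double-check that the option was applied; juju config prints the current value of a single key, so the following should output shard:
juju config shard2 role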
Use juju status --watch 1s --relations to wait until shard2 is blocked due to a missing relation.
Model     Controller  Cloud/Region         Version  SLA          Timestamp
tutorial  overlord    localhost/localhost  3.1.7    unsupported  15:15:53+01:00

App            Version  Status   Scale  Charm    Channel  Rev  Exposed  Message
config-server           active       1  mongodb  6/beta   149  no       Primary
shard0                  active       1  mongodb  6/beta   149  no       Primary
shard1                  active       1  mongodb  6/beta   149  no       Primary
shard2                  blocked      1  mongodb  6/beta   149  no       missing relation to config server

Unit              Workload  Agent      Machine  Public address  Ports            Message
config-server/0*  active    idle       6        10.17.247.150   27017-27018/tcp  Primary
shard0/0*         active    idle       7        10.17.247.50    27017/tcp        Primary
shard1/0*         active    idle       8        10.17.247.214   27017/tcp        Primary
shard2/0*         blocked   executing  9        10.17.247.144                    missing relation to config server

Machine  State    Address        Inst id        Base          AZ  Message
6        started  10.17.247.150  juju-3acea1-6  ubuntu@22.04      Running
7        started  10.17.247.50   juju-3acea1-7  ubuntu@22.04      Running
8        started  10.17.247.214  juju-3acea1-8  ubuntu@22.04      Running
9        started  10.17.247.144  juju-3acea1-9  ubuntu@22.04      Running

Integration provider          Requirer                      Interface      Type     Message
config-server:config-server   shard0:sharding               shards         regular
config-server:config-server   shard1:sharding               shards         regular
config-server:database-peers  config-server:database-peers  mongodb-peers  peer
shard0:database-peers         shard0:database-peers         mongodb-peers  peer
shard1:database-peers         shard1:database-peers         mongodb-peers  peer
shard2:database-peers         shard2:database-peers         mongodb-peers  peer
Now that our shard is deployed, it is ready to be added to the cluster. To add it, integrate the shard with the config-server:
juju integrate config-server:config-server shard2:sharding
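A hedged aside: juju integrate was introduced in Juju 3.0. If you are following along on Juju 2.9, the equivalent command is juju relate (an alias of add-relation):
juju relate config-server:config-server shard2:sharding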
Use juju status --watch 1s --relations to watch the cluster until the new shard is active.
Model     Controller  Cloud/Region         Version  SLA          Timestamp
tutorial  overlord    localhost/localhost  3.1.7    unsupported  15:29:18+01:00

App            Version  Status  Scale  Charm    Channel  Rev  Exposed  Message
config-server           active      1  mongodb  6/beta   149  no       Primary
shard0                  active      1  mongodb  6/beta   149  no       Primary
shard1                  active      1  mongodb  6/beta   149  no       Primary
shard2                  active      1  mongodb  6/beta   149  no       Primary

Unit              Workload  Agent  Machine  Public address  Ports            Message
config-server/0*  active    idle   6        10.17.247.150   27017-27018/tcp  Primary
shard0/0*         active    idle   7        10.17.247.50    27017/tcp        Primary
shard1/0*         active    idle   8        10.17.247.214   27017/tcp        Primary
shard2/0*         active    idle   9        10.17.247.144   27017/tcp        Primary

Machine  State    Address        Inst id        Base          AZ  Message
6        started  10.17.247.150  juju-3acea1-6  ubuntu@22.04      Running
7        started  10.17.247.50   juju-3acea1-7  ubuntu@22.04      Running
8        started  10.17.247.214  juju-3acea1-8  ubuntu@22.04      Running
9        started  10.17.247.144  juju-3acea1-9  ubuntu@22.04      Running

Integration provider          Requirer                      Interface      Type     Message
config-server:config-server   shard0:sharding               shards         regular
config-server:config-server   shard1:sharding               shards         regular
config-server:config-server   shard2:sharding               shards         regular
config-server:database-peers  config-server:database-peers  mongodb-peers  peer
shard0:database-peers         shard0:database-peers         mongodb-peers  peer
shard1:database-peers         shard1:database-peers         mongodb-peers  peer
shard2:database-peers         shard2:database-peers         mongodb-peers  peer
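If you would rather not watch the status output interactively, newer Juju releases also provide juju wait-for, which blocks until a query is satisfied. A minimal sketch, assuming Juju 3.x:
juju wait-for application shard2 --query='status=="active"'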
To verify that the shard is present in the cluster, we can check the cluster configuration through the mongos router inside config-server and list the shards, as we did in 4. Access a sharded cluster | Connect via MongoDB URI.
To summarize:
- echo $URI to get your URI
- juju ssh config-server/0 to enter the config-server
- charmed-mongodb.mongosh <your-URI> to enter the MongoDB shell
- sh.status() to list the shards
This will confirm that the shard shard2 is in the cluster configuration:
shards
[
  {
    _id: 'shard0',
    host: 'shard0/10.17.247.50:27017',
    state: 1,
    topologyTime: Timestamp({ t: 1708523939, i: 7 })
  },
  {
    _id: 'shard1',
    host: 'shard1/10.17.247.214:27017',
    state: 1,
    topologyTime: Timestamp({ t: 1708523945, i: 5 })
  },
  {
    _id: 'shard2',
    host: 'shard2/10.17.247.144:27017',
    state: 1,
    topologyTime: Timestamp({ t: 1708525386, i: 4 })
  }
]
...
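Alternatively, the shard list can be queried directly from the same mongosh session; listShards is a standard admin command on a mongos router:
db.adminCommand({ listShards: 1 })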
When you’re ready to leave the MongoDB shell, just type exit. You will be back in the host of Charmed MongoDB (config-server/0). Exit this host by once again typing exit. Now you will be in your original shell, where you can interact with Juju and LXD.
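For later checks, the whole verification can be collapsed into a single command run from your host shell. A sketch, assuming the $URI variable from the previous part of the tutorial is still set:
juju ssh config-server/0 "charmed-mongodb.mongosh '$URI' --eval 'sh.status()'"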
Remove shards
To remove a shard from Charmed MongoDB, we remove the integration between the shard and the config-server. This signals to the cluster that it is time to drain the shard and remove it from the configuration. First, break the integration:
juju remove-relation config-server:config-server shard2:sharding
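If you are curious what the drain looks like on the MongoDB side, you can reuse the mongosh session from the verification step above; while data is being moved off the shard, its document in the config database usually carries a draining flag (a hedged sketch, the exact fields vary between MongoDB versions):
db.getSiblingDB('config').shards.find({ _id: 'shard2' })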
Watch juju status --watch 1s --relations until shard2 has finished draining from the cluster and is no longer listed as a requirer of the config-server integration provider.
Model     Controller  Cloud/Region         Version  SLA          Timestamp
tutorial  overlord    localhost/localhost  3.1.7    unsupported  15:52:10+01:00

App            Version  Status  Scale  Charm    Channel  Rev  Exposed  Message
config-server           active      1  mongodb  6/beta   149  no       Primary
shard0                  active      1  mongodb  6/beta   149  no       Primary
shard1                  active      1  mongodb  6/beta   149  no       Primary
shard2                  active      1  mongodb  6/beta   149  no       Primary

Unit              Workload  Agent  Machine  Public address  Ports            Message
config-server/0*  active    idle   6        10.17.247.150   27017-27018/tcp  Primary
shard0/0*         active    idle   7        10.17.247.50    27017/tcp        Primary
shard1/0*         active    idle   8        10.17.247.214   27017/tcp        Primary
shard2/0*         active    idle   9        10.17.247.144   27017/tcp        Primary

Machine  State    Address        Inst id        Base          AZ  Message
6        started  10.17.247.150  juju-3acea1-6  ubuntu@22.04      Running
7        started  10.17.247.50   juju-3acea1-7  ubuntu@22.04      Running
8        started  10.17.247.214  juju-3acea1-8  ubuntu@22.04      Running
9        started  10.17.247.144  juju-3acea1-9  ubuntu@22.04      Running

Integration provider          Requirer                      Interface      Type     Message
config-server:config-server   shard0:sharding               shards         regular
config-server:config-server   shard1:sharding               shards         regular
config-server:database-peers  config-server:database-peers  mongodb-peers  peer
shard0:database-peers         shard0:database-peers         mongodb-peers  peer
shard1:database-peers         shard1:database-peers         mongodb-peers  peer
shard2:database-peers         shard2:database-peers         mongodb-peers  peer
Now that the shard has been drained and removed from the configuration, it is safe to remove it altogether. To do so, run:
juju remove-application shard2
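By default Juju detaches rather than deletes any storage attached to the removed units; if you want that storage destroyed as well, remove-application accepts a flag for it (a hedged aside, check juju help remove-application for your version):
juju remove-application shard2 --destroy-storage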
Watch the cluster with juju status --watch 1s --relations until shard2 is no longer listed anywhere.
Next Step: 5. Manage passwords