James Page Ceph Osd Udev Fixes
- By James Page
- Cloud
| Channel | Revision | Published | Runs on |
|---|---|---|---|
| latest/stable | 1 | 19 Mar 2021 | |
juju deploy james-page-ceph-osd-udev-fixes
-
add-disk
Add disk(s) to Ceph.
- Params
  - bucket (string): The name of the bucket in Ceph to add these devices to.
  - osd-devices (string): The devices to format and set up as OSD volumes.
- Required: osd-devices
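As a sketch, an action like this is invoked through the Juju CLI. The example below uses the Juju 2.x `run-action` syntax (Juju 3.x renamed this to `juju run`); the unit name `ceph-osd/0` and device path `/dev/vdb` are assumptions for illustration:

```shell
# Format /dev/vdb and bring it up as an OSD on one unit;
# unit name and device path are illustrative.
juju run-action ceph-osd/0 add-disk osd-devices=/dev/vdb --wait
```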
-
blacklist-add-disk
Add disk(s) to the blacklist. Blacklisted disks will not be initialized for use with Ceph, even if they are listed in the application-level osd-devices configuration option. The current blacklist can be viewed with the list-disks action. NOTE: this action and the blacklist have no effect on already-initialized disks.
- Params
  - osd-devices (string): A space-separated list of devices to add to the blacklist. Each element should be an absolute path to a device node or a filesystem directory (the latter is supported for Ceph >= 0.56.6). Example: '/dev/vdb /var/tmp/test-osd'
- Required: osd-devices
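A minimal usage sketch with the Juju CLI (Juju 2.x syntax; the unit name and paths are assumptions taken from the example above):

```shell
# Prevent these devices from ever being initialized by the charm;
# quoting keeps the space-separated list as one parameter value.
juju run-action ceph-osd/0 blacklist-add-disk \
    osd-devices='/dev/vdb /var/tmp/test-osd' --wait
```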
-
blacklist-remove-disk
Remove disk(s) from the blacklist.
- Params
  - osd-devices (string): A space-separated list of devices to remove from the blacklist. Each element should be an existing entry in the unit's blacklist; use the list-disks action to view the current entries. Example: '/dev/vdb /var/tmp/test-osd'
- Required: osd-devices
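The inverse of the previous action, again sketched with assumed unit and device names (Juju 2.x syntax):

```shell
# Remove an entry from the unit's blacklist so the device
# can be initialized again; values are illustrative.
juju run-action ceph-osd/0 blacklist-remove-disk osd-devices=/dev/vdb --wait
```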
-
list-disks
List the unmounted disks on the specified unit.
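This action takes no parameters, so invoking it is a one-liner (Juju 2.x syntax; unit name assumed):

```shell
# Show which disks on the unit are unmounted and not yet in use,
# including any blacklisted entries.
juju run-action ceph-osd/0 list-disks --wait
```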
-
pause
CAUTION - Sets the local OSD units in the charm to 'out' but does not stop the OSDs. Unless the cluster is set to noout (see below), this removes them from the Ceph cluster and forces Ceph to migrate the placement groups (PGs) to other OSDs in the cluster. See
http://docs.ceph.com/docs/master/rados/operations/add-or-rm-osds/#removing-the-osd : "Do not let your cluster reach its full ratio when removing an OSD. Removing OSDs could cause the cluster to reach or exceed its full ratio." Also note that for small clusters you may encounter a corner case where some PGs remain stuck in the active+remapped state; refer to the link above for how to resolve this.
The pause-health action (on a ceph-mon unit) can be used before pausing a ceph-osd unit to stop the cluster from rebalancing data off this ceph-osd unit. pause-health sets 'noout' on the cluster so that it will not try to rebalance the data across the remaining units.
It is up to the user of the charm to determine whether pause-health should be used, as this depends on whether the OSD is being paused for maintenance or is being removed from the cluster completely.
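For the maintenance case described above, a typical sequence might look like this (Juju 2.x syntax; the ceph-mon and ceph-osd unit names are assumptions):

```shell
# 1. Set 'noout' on the cluster first so data is not rebalanced away.
juju run-action ceph-mon/0 pause-health --wait
# 2. Mark the OSDs on the target unit 'out' for maintenance.
juju run-action ceph-osd/1 pause --wait
```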
-
replace-osd
Replace a failed OSD with a fresh disk.
- Params
  - osd-number (integer): The OSD number to operate on, e.g. 99. Hint: you can get this information from `ceph osd tree`.
  - replacement-device (string): The replacement device to use, e.g. /dev/sdb.
- Required: osd-number, replacement-device
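Putting the hint and the action together, a sketch of the workflow (Juju 2.x syntax; the unit name, OSD number, and device are illustrative values from the examples above):

```shell
# Identify the failed OSD's number from the CRUSH tree on a unit.
juju ssh ceph-osd/0 -- sudo ceph osd tree
# Replace that OSD with a fresh disk; both parameter values are required.
juju run-action ceph-osd/0 replace-osd osd-number=99 \
    replacement-device=/dev/sdb --wait
```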
-
resume
Set the local OSD units in the charm to 'in'. Note that the pause action does NOT stop the OSD processes.
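A sketch of resuming after maintenance, reversing the pause sequence (Juju 2.x syntax; unit names are assumptions, and the `resume-health` action on ceph-mon is assumed to be available to clear 'noout'):

```shell
# Mark the unit's OSDs 'in' again, then let the cluster rebalance.
juju run-action ceph-osd/1 resume --wait
juju run-action ceph-mon/0 resume-health --wait
```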