Ceph Mon
Channel | Revision | Published
--- | --- | ---
latest/edge | 237 | 15 Aug 2024
latest/edge | 236 | 15 Aug 2024
latest/edge | 235 | 15 Aug 2024
latest/edge | 234 | 15 Aug 2024
latest/edge | 219 | 14 Jun 2024
latest/edge | 210 | 29 Apr 2024
latest/edge | 179 | 24 Jul 2023
quincy/stable | 215 | 16 May 2024
quincy/candidate | 239 | 03 Oct 2024
squid/candidate | 237 | 15 Aug 2024
squid/candidate | 236 | 15 Aug 2024
squid/candidate | 235 | 15 Aug 2024
squid/candidate | 234 | 15 Aug 2024
squid/candidate | 219 | 14 Jun 2024
squid/candidate | 207 | 22 Apr 2024
reef/stable | 229 | 13 Aug 2024
reef/stable | 210 | 26 Jun 2024
reef/candidate | 238 | 05 Sep 2024
reef/candidate | 210 | 29 Apr 2024
pacific/stable | 217 | 03 Jun 2024
octopus/stable | 177 | 15 Jul 2023
octopus/stable | 89 | 23 Jan 2023
nautilus/edge | 73 | 04 Mar 2022
nautilus/edge | 90 | 25 Feb 2022
mimic/edge | 73 | 04 Mar 2022
mimic/edge | 90 | 25 Feb 2022
luminous/edge | 73 | 04 Mar 2022
luminous/edge | 87 | 24 Feb 2022
juju deploy ceph-mon --channel quincy/stable
Overview
Ceph is a unified, distributed storage system designed for excellent performance, reliability, and scalability.
The ceph-mon charm deploys Ceph monitor nodes, allowing one to create a monitor cluster. It is used in conjunction with the ceph-osd charm. Together, these charms can scale out the amount of storage available in a Ceph cluster.
Important: This documentation supports version 3.x of the Juju client. See the OpenStack Charm guide if you are using the 2.9.x client.
Usage
Configuration
This section covers common and/or important configuration options. See file config.yaml for the full list of options, along with their descriptions and default values. See the Juju documentation for details on configuring applications.
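For example, option values can be inspected and changed at any time with the juju config command (the option names shown are taken from this charm):
juju config ceph-mon                   # show all options and their current values
juju config ceph-mon monitor-count     # show a single option
juju config ceph-mon monitor-count=3   # set an option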
customize-failure-domain
The customize-failure-domain option determines how a Ceph CRUSH map is configured.
A value of ‘false’ (the default) will lead to a map that will replicate data across hosts (implemented as Ceph bucket type ‘host’). With a value of ‘true’ all MAAS-defined zones will be used to generate a map that will replicate data across Ceph availability zones (implemented as bucket type ‘rack’).
This option is also supported by the ceph-osd charm. Its value must be the same for both charms.
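For example, to use zone-based replication, the option can be set on both charms (a minimal sketch; the value must be kept consistent across both applications):
juju config ceph-mon customize-failure-domain=true
juju config ceph-osd customize-failure-domain=true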
monitor-count
The monitor-count option gives the number of ceph-mon units in the monitor sub-cluster (where one ceph-mon unit represents one MON). The default value is ‘3’ and is generally a good choice, but it is good practice to set this explicitly to avoid a possible race condition during the formation of the sub-cluster. To establish quorum and enable partition tolerance, an odd number of ceph-mon units is required.
Important: A monitor count of less than three is not recommended for production environments. Test environments can use a single ceph-mon unit by setting this option to ‘1’.
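For example, a throwaway test environment could be deployed with a single MON by setting the option at deploy time (not suitable for production):
juju deploy -n 1 --config monitor-count=1 ceph-mon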
expected-osd-count
The expected-osd-count option states the number of OSDs expected to be deployed in the cluster. This value can influence the number of placement groups (PGs) to use per pool. The PG calculation is based either on the actual number of OSDs or this option’s value, whichever is greater. The default value is ‘0’, which tells the charm to only consider the actual number of OSDs. If the actual number of OSDs is less than three then this option must explicitly state that number. Only once a sufficient (or prescribed) number of OSDs has been attained will the charm be able to create Ceph pools.
Note: The inability to create a pool due to an insufficient number of OSDs will cause any consuming application (characterised by a relation involving the ceph-mon:client endpoint) to remain in the ‘waiting’ state.
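For example, a small cluster that will only ever have three OSDs could state this explicitly, either at deploy time or afterwards:
# at deploy time
juju deploy -n 3 --config expected-osd-count=3 ceph-mon
# or on an existing application
juju config ceph-mon expected-osd-count=3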
source
The source option specifies the software source. A common value is an OpenStack UCA release (e.g. ‘cloud:xenial-queens’ or ‘cloud:bionic-ussuri’). See Ceph and the UCA. The underlying host’s existing apt sources will be used if this option is not specified (this behaviour can be explicitly chosen by using the value of ‘distro’).
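For example (the UCA pocket shown is illustrative; choose one matching the Ubuntu series in use):
# install Ceph from a UCA pocket
juju config ceph-mon source=cloud:bionic-ussuri
# or stay with the host's existing apt sources
juju config ceph-mon source=distro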
Deployment
A cloud with three MON nodes is a typical design whereas three OSDs are considered the minimum. For example, to deploy a Ceph cluster consisting of three OSDs (one per ceph-osd unit) and three MONs:
juju deploy -n 3 --config ceph-osd.yaml ceph-osd
juju deploy -n 3 --to lxd:0,lxd:1,lxd:2 ceph-mon
juju integrate ceph-osd:mon ceph-mon:osd
Here, a containerised MON is running alongside each storage node. We’ve assumed that the machines spawned in the first command are assigned IDs of 0, 1, and 2.
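The ceph-osd.yaml file referenced above is an ordinary Juju configuration file whose contents are site-specific. A minimal sketch might look like this (the block device path is an assumption; adjust it and the source value to the environment):
ceph-osd:
  osd-devices: /dev/vdb
  source: distro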
By default, the monitor cluster will not be complete until three ceph-mon units have been deployed. This is to ensure that a quorum is achieved prior to the addition of storage devices.
See the Ceph documentation for notes on monitor cluster deployment strategies.
Note: Refer to the Install OpenStack page in the OpenStack Charms Deployment Guide for instructions on installing a monitor cluster for use with OpenStack.
Network spaces
This charm supports the use of Juju network spaces (Juju v.2.0). This feature optionally allows specific types of the application’s network traffic to be bound to subnets that the underlying hardware is connected to.
Note: Spaces must be configured in the backing cloud prior to deployment.
The ceph-mon charm exposes the following Ceph traffic types (bindings):
- ‘public’ (front-side)
- ‘cluster’ (back-side)
For example, providing that spaces ‘data-space’ and ‘cluster-space’ exist, the deploy command above could look like this:
juju deploy -n 3 --config ceph-mon.yaml ceph-mon \
--bind "public=data-space cluster=cluster-space"
Alternatively, configuration can be provided as part of a bundle:
  ceph-mon:
    charm: cs:ceph-mon
    num_units: 1
    bindings:
      public: data-space
      cluster: cluster-space
Refer to the Ceph Network Reference to learn about the implications of segregating Ceph network traffic.
Note: Existing ceph-mon units configured with the ceph-public-network or ceph-cluster-network options will continue to honour them. Furthermore, these options override any space bindings, if set.
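For example, a deployment that sets these networks directly (rather than via space bindings) might use CIDR values like the following (addresses are illustrative):
juju config ceph-mon ceph-public-network=10.10.0.0/24 ceph-cluster-network=10.20.0.0/24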
Monitoring
The charm supports Ceph metric monitoring with Prometheus. Add relations to the prometheus application in this way:
juju deploy prometheus2
juju integrate ceph-mon prometheus2
Note: Prometheus support is available starting with Ceph Luminous (xenial-queens UCA pocket).
Alternatively, integration with the COS Lite observability stack is available via the metrics-endpoint relation.
Relating to prometheus-k8s via the metrics-endpoint interface (as found in the COS Lite bundle) will send metrics to Prometheus, and alerting rules will be configured for Prometheus as well. The alerting rules are supplied as the charm resource alert-rules; the default rules are taken from the upstream Ceph rules. The defaults can be replaced with customized rules by attaching a new resource:
juju attach-resource ceph-mon alert-rules=./my-prom-alerts.yaml.rules
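As a sketch of the COS Lite route, assuming the COS model exposes a Prometheus scrape offer (the controller, model, and offer names below are placeholders for whatever the COS deployment actually provides):
juju consume k8s:admin/cos.prometheus-scrape prometheus-scrape
juju integrate ceph-mon:metrics-endpoint prometheus-scrape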
Actions
This section lists Juju actions supported by the charm.
Actions allow specific operations to be performed on a per-unit basis. To display action descriptions run juju actions ceph-mon. If the charm is not deployed then see file actions.yaml. An example invocation is given below the list.
change-osd-weight
copy-pool
create-cache-tier
create-crush-rule
create-erasure-profile
create-pool
crushmap-update
delete-erasure-profile
delete-pool
get-erasure-profile
get-health
list-erasure-profiles
list-inconsistent-objs
list-pools
pause-health
pool-get
pool-set
pool-statistics
purge-osd
remove-cache-tier
remove-pool-snapshot
rename-pool
resume-health
security-checklist
set-noout
set-pool-max-bytes
show-disk-free
snapshot-pool
unset-noout
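For example, to run one of the listed actions on the leader unit with a Juju 3.x client (a 2.9.x client would use juju run-action --wait instead):
juju run ceph-mon/leader get-health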
Presenting the list of Ceph pools with details
The following example returns the list of pools with details: id, name, size, and min_size.
The jq utility has been used to parse the action output in JSON format.
juju run ceph-mon/leader list-pools detail=true \
   --format json | jq '.[].results.pools | fromjson | .[]
   | {pool:.pool, name:.pool_name, size:.size, min_size:.min_size}'
Sample output:
{
"pool": 1,
"name": "test",
"size": 3,
"min_size": 2
}
{
"pool": 2,
"name": "test2",
"size": 3,
"min_size": 2
}
Bugs
Please report bugs on Launchpad.
For general charm questions refer to the OpenStack Charm Guide.