Ceph Mon
- By OpenStack Charmers
- Cloud
| Channel | Revision | Published |
|---|---|---|
| latest/edge | 237 | 15 Aug 2024 |
| latest/edge | 236 | 15 Aug 2024 |
| latest/edge | 235 | 15 Aug 2024 |
| latest/edge | 234 | 15 Aug 2024 |
| latest/edge | 219 | 14 Jun 2024 |
| latest/edge | 210 | 29 Apr 2024 |
| latest/edge | 179 | 24 Jul 2023 |
| quincy/stable | 215 | 16 May 2024 |
| quincy/candidate | 239 | 03 Oct 2024 |
| squid/candidate | 237 | 15 Aug 2024 |
| squid/candidate | 236 | 15 Aug 2024 |
| squid/candidate | 235 | 15 Aug 2024 |
| squid/candidate | 234 | 15 Aug 2024 |
| squid/candidate | 219 | 14 Jun 2024 |
| squid/candidate | 207 | 22 Apr 2024 |
| reef/stable | 229 | 13 Aug 2024 |
| reef/stable | 210 | 26 Jun 2024 |
| reef/candidate | 238 | 05 Sep 2024 |
| reef/candidate | 210 | 29 Apr 2024 |
| pacific/stable | 217 | 03 Jun 2024 |
| octopus/stable | 177 | 15 Jul 2023 |
| octopus/stable | 89 | 23 Jan 2023 |
| nautilus/edge | 73 | 04 Mar 2022 |
| nautilus/edge | 90 | 25 Feb 2022 |
| mimic/edge | 73 | 04 Mar 2022 |
| mimic/edge | 90 | 25 Feb 2022 |
| luminous/edge | 73 | 04 Mar 2022 |
| luminous/edge | 87 | 24 Feb 2022 |
```shell
juju deploy ceph-mon --channel quincy/stable
```
- change-osd-weight: Set the CRUSH weight of an OSD to the new value supplied.
  - Params:
    - osd (integer): ID of the OSD to operate on, e.g. for osd.53, supply 53.
    - weight (number): The new weight of the OSD; must be a decimal number, e.g. 1.04.
  - Required: osd, weight
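Invocation can be sketched with Juju's action syntax (Juju 3.x `juju run` shown; the unit name and values here are illustrative assumptions, not taken from this page):

```shell
# Set the CRUSH weight of osd.53 to 1.04 (example values)
juju run ceph-mon/leader change-osd-weight osd=53 weight=1.04
```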
- copy-pool: Copy the contents of a pool to a new pool.
  - Params:
    - source (string): Pool to copy data from.
    - target (string): Pool to copy data to.
  - Required: source, target
- create-cache-tier: Create a new cache tier.
  - Params:
    - backer-pool (string): The name of the pool that will back the cache tier, also known as the cold pool.
    - cache-mode (string): The mode of the caching tier. Please refer to the Ceph docs for more information.
    - cache-pool (string): The name of the pool that will be the cache pool, also known as the hot pool.
  - Required: backer-pool, cache-pool
- create-crush-rule: Create a new replicated CRUSH rule to use on a pool.
  - Params:
    - device-class (string): CRUSH device class to use for the new rule.
    - failure-domain (string): Setting failure-domain=host creates a CRUSH rule that ensures no two chunks are stored on the same host.
    - name (string): The name of the rule.
  - Required: name
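A sketch of a typical call (the rule name and device class below are illustrative assumptions):

```shell
# Create a replicated rule restricted to SSD-class devices,
# separating replicas across hosts
juju run ceph-mon/leader create-crush-rule name=ssd-rule device-class=ssd failure-domain=host
```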
- create-erasure-profile: Create a new erasure code profile to use on a pool.
  - Params:
    - coding-chunks (integer): The number of coding chunks, i.e. the number of additional chunks computed by the encoding functions. If there are 2 coding chunks, 2 OSDs can be out without losing data.
    - crush-locality (string): LRC plugin - the type of CRUSH bucket in which each set of chunks defined by locality-chunks will be stored.
    - data-chunks (integer): The number of data chunks, i.e. the number of chunks into which the original object is divided. For instance, if K = 2, a 10KB object is divided into K objects of 5KB each.
    - device-class (string): CRUSH device class to use for the erasure profile.
    - durability-estimator (integer): SHEC plugin - the number of parity chunks, each of which includes each data chunk in its calculation range. The number is used as a durability estimator. For instance, if c=2, 2 OSDs can be down without losing data.
    - failure-domain (string): Setting failure-domain=host creates a CRUSH rule that ensures no two chunks are stored on the same host.
    - helper-chunks (integer): CLAY plugin - the number of OSDs to request data from during recovery of a single chunk.
    - locality-chunks (integer): LRC plugin - group the coding and data chunks into sets of size locality. For instance, for k=4 and m=2, when locality=3, two groups of three are created. Each set can be recovered without reading chunks from another set.
    - name (string): The name of the profile.
    - plugin (string): The erasure plugin to use for this profile. See http://docs.ceph.com/docs/master/rados/operations/erasure-code-profile/ for more details.
    - scalar-mds (string): CLAY plugin - specifies the plugin that is used as a building block in the layered construction.
  - Required: name
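For example, a k=4, m=2 profile can be sketched as follows (the profile name, plugin choice and values are illustrative assumptions):

```shell
# 4 data chunks + 2 coding chunks: any 2 OSDs can fail without data loss
juju run ceph-mon/leader create-erasure-profile name=ec42 plugin=jerasure \
    data-chunks=4 coding-chunks=2 failure-domain=host
```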
- create-pool: Create a pool.
  - Params:
    - allow-ec-overwrites (boolean): Permit overwrites for erasure coded pool types.
    - app-name (string): App name to set on the newly created pool.
    - erasure-profile-name (string): The name of the erasure coding profile to use for this pool. Note this profile must exist before calling create-pool.
    - name (string): The name of the pool.
    - percent-data (integer): The percentage of data expected to be contained in the pool for the specific OSD set. The default assumes 10% of the data is for this pool, which is a relatively low percentage but allows pg_num to be increased later.
    - pool-type (string): The pool type, which may be either replicated (to recover from lost OSDs by keeping multiple copies of objects) or erasure (to get a kind of generalized RAID5 capability).
    - profile-name (string): The CRUSH profile to use for this pool. The ruleset must exist first.
    - replicas (integer): For a replicated pool, the number of replicas to store of each object.
  - Required: name
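A minimal replicated-pool sketch (pool name and values are illustrative assumptions):

```shell
# A 3-replica pool expected to hold roughly 20% of cluster data
juju run ceph-mon/leader create-pool name=mypool pool-type=replicated \
    replicas=3 percent-data=20 app-name=rbd
```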
- crushmap-update: Apply a JSON crushmap definition. This discards the existing Ceph crushmap and applies the new definition. WARNING: this action is extremely dangerous if misused and can very easily break your cluster in unexpected ways. Use with extreme caution.
  - Params:
    - map (string): The JSON crushmap blob.
  - Required: map
- delete-erasure-profile: Delete an erasure code profile.
  - Params:
    - name (string): The name of the profile.
  - Required: name
- delete-pool: Delete the named pool.
  - Params:
    - name (string): The name of the pool.
  - Required: name
- delete-user: Delete a user.
  - Params:
    - username (string): User ID to delete.
  - Required: username
- get-erasure-profile: Display an erasure code profile.
  - Params:
    - name (string): The name of the profile.
  - Required: name
- get-health: Output the current cluster health as reported by `ceph health`.
- get-or-create-user: Get or create a user and its capabilities.
  - Params:
    - mon-caps (string): Monitor capabilities, which include r, w, x access settings or profile {name}.
    - osd-caps (string): OSD capabilities, which include r, w, x, class-read, class-write access settings or profile {name}.
    - username (string): User ID to get or create.
  - Required: username
- get-quorum-status: Return lists of the known mons and online mons, to determine if there is quorum.
  - Params:
    - format (string): Specify the output format (text|json).
- list-crush-rules: List Ceph CRUSH rules.
  - Params:
    - format (string): The output format: json, yaml or text (default).
- list-entities: Return a list of entities recognized by the Ceph cluster.
  - Params:
    - format (string): The output format: json, yaml or text (default).
- list-erasure-profiles: List the names of all erasure code profiles.
- list-inconsistent-objs: List the names of the inconsistent objects per PG.
  - Params:
    - format (string): The output format: json, yaml or text (default).
- list-pools: List your cluster's pools.
  - Params:
    - format (string): Specify the output format (text|text-full|json). The text-full and json formats provide the same level of detail.
- pause-health: Pause Ceph health operations across the entire Ceph cluster.
- pg-repair: Repair inconsistent placement groups, if it is safe to do so.
- pool-get: Get a value for the pool.
  - Params:
    - key (string): Any valid Ceph key from http://docs.ceph.com/docs/master/rados/operations/pools/#get-pool-values
    - name (string): The pool to get this variable from.
  - Required: key, name
- pool-set: Set a value for the pool.
  - Params:
    - key (string): Any valid Ceph key from http://docs.ceph.com/docs/master/rados/operations/pools/#set-pool-values
    - name (string): The pool to set this variable on.
    - value: The value to set.
  - Required: key, value, name
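For instance, pool replication can be adjusted through the `size` key (pool name and value are illustrative assumptions):

```shell
# Raise the replica count ("size") of a pool to 3
juju run ceph-mon/leader pool-set name=mypool key=size value=3
```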
- pool-statistics: Show a pool's utilization statistics.
- purge-osd: Remove an OSD from the cluster map, remove its authentication key, and remove the OSD from the OSD map. The OSD must have zero weight before running this action, to avoid excessive I/O on the cluster.
  - Params:
    - i-really-mean-it (boolean): This must be toggled to enable actually performing this action.
    - osd (integer): ID of the OSD to remove, e.g. for osd.53, supply 53.
  - Required: osd, i-really-mean-it
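Since the OSD must carry zero weight first, a two-step sketch (the OSD id is an illustrative assumption):

```shell
# Zero the OSD's CRUSH weight, then purge it
juju run ceph-mon/leader change-osd-weight osd=53 weight=0
juju run ceph-mon/leader purge-osd osd=53 i-really-mean-it=true
```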
- remove-cache-tier: Remove an existing cache tier.
  - Params:
    - backer-pool (string): The name of the pool that backs the cache tier, also known as the cold pool.
    - cache-pool (string): The name of the pool that is the cache pool, also known as the hot pool.
  - Required: backer-pool, cache-pool
- remove-pool-snapshot: Remove a pool snapshot.
  - Params:
    - name (string): The name of the pool.
    - snapshot-name (string): The name of the snapshot.
  - Required: snapshot-name, name
- rename-pool: Rename a pool.
  - Params:
    - name (string): The name of the pool.
    - new-name (string): The new name of the pool.
  - Required: name, new-name
- reset-osd-count-report: Update the report of OSDs present in the OSD tree. Used for monitoring.
- resume-health: Resume Ceph health operations across the entire Ceph cluster.
- rotate-key: Rotate the key of an entity in the Ceph cluster.
  - Params:
    - entity (string): The entity for which to rotate the key.
  - Required: entity
- security-checklist: Validate the running configuration against the OpenStack security guide checklist.
- set-noout: Set the Ceph noout flag across the cluster.
- set-pool-max-bytes: Set a pool quota for the maximum number of bytes.
  - Params:
    - max (integer): The maximum number of bytes to allow in the pool.
    - name (string): The name of the pool.
  - Required: name, max
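A quota sketch (pool name and size are illustrative assumptions):

```shell
# Cap a pool at 10 GiB (10 * 1024^3 = 10737418240 bytes)
juju run ceph-mon/leader set-pool-max-bytes name=mypool max=10737418240
```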
- show-disk-free: Show disk utilization by host and OSD.
  - Params:
    - format (string): Output format: json, json-pretty, xml, xml-pretty or plain; defaults to plain.
- snapshot-pool: Snapshot a pool.
  - Params:
    - name (string): The name of the pool.
    - snapshot-name (string): The name of the snapshot.
  - Required: snapshot-name, name
- unset-noout: Unset the Ceph noout flag across the cluster.