ceph-fs
Channel | Revision | Published
---|---|---
latest/edge | 94 | 28 Nov 2024
latest/edge | 93 | 20 Nov 2024
latest/edge | 92 | 20 Nov 2024
latest/edge | 90 | 20 Nov 2024
latest/edge | 91 | 20 Nov 2024
latest/edge | 63 | 08 May 2024
latest/edge | 56 | 21 Mar 2023
quincy/stable | 79 | 30 Jul 2024
quincy/stable | 60 | 30 Aug 2023
squid/candidate | 94 | 28 Nov 2024
squid/candidate | 93 | 20 Nov 2024
squid/candidate | 92 | 20 Nov 2024
squid/candidate | 90 | 20 Nov 2024
squid/candidate | 91 | 20 Nov 2024
squid/candidate | 63 | 08 May 2024
reef/stable | 80 | 13 Aug 2024
reef/stable | 62 | 26 Jun 2024
reef/candidate | 80 | 09 Aug 2024
reef/candidate | 62 | 29 Apr 2024
pacific/stable | 47 | 05 Aug 2022
octopus/stable | 45 | 23 Jan 2023
nautilus/edge | 36 | 04 Mar 2022
nautilus/edge | 46 | 25 Feb 2022
mimic/edge | 36 | 04 Mar 2022
mimic/edge | 46 | 25 Feb 2022
luminous/edge | 36 | 04 Mar 2022
luminous/edge | 43 | 24 Feb 2022
juju deploy ceph-fs --channel quincy/stable
Overview
Ceph is a unified, distributed storage system designed for excellent performance, reliability, and scalability.
The ceph-fs charm deploys the metadata server daemon (MDS) for the Ceph distributed file system (CephFS). The deployment is done within the context of an existing Ceph cluster.
Important: This documentation supports version 3.x of the Juju client. See the OpenStack Charm Guide if you are using the 2.9.x client.
Usage
Configuration
This section covers common and/or important configuration options. See file config.yaml for the full list of options, along with their descriptions and default values. A YAML file (e.g. ceph-fs.yaml) is often used to store configuration options. See the Juju documentation for details on configuring applications.
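For example, a minimal ceph-fs.yaml could look as follows (the option values are illustrative only, not recommendations):
ceph-fs:
  pool-type: replicated
  ceph-osd-replication-count: 3
The file is then supplied at deploy time:
juju deploy --config ceph-fs.yaml ceph-fs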
pool-type
The pool-type option dictates the storage pool type. See section ‘Ceph pool type’ for more information.
source
The source option sets the software source. A common value is an OpenStack UCA release (e.g. ‘cloud:xenial-queens’ or ‘cloud:bionic-ussuri’). See Ceph and the UCA. The underlying host’s existing apt sources will be used if this option is not specified (this behaviour can be explicitly chosen by using the value of ‘distro’).
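For example, an existing deployment can be pointed at a UCA release with a command such as (the release shown is taken from the examples above and is purely illustrative):
juju config ceph-fs source=cloud:bionic-ussuri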
Ceph pool type
Ceph storage pools can be configured to ensure data resiliency either through replication or by erasure coding. This charm supports both types via the pool-type configuration option, which can take on the values of ‘replicated’ and ‘erasure-coded’. The default value is ‘replicated’.
For this charm, the pool type will be associated with CephFS volumes.
Note: Erasure-coded pools are supported starting with Ceph Luminous.
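As a sketch, the pool type can be selected at deploy time (the value shown is only an example):
juju deploy --config pool-type=erasure-coded ceph-fs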
Replicated pools
Replicated pools use a simple replication strategy in which each written object is copied, in full, to multiple OSDs within the cluster.
The ceph-osd-replication-count option sets the replica count for any object stored within the ‘ceph-fs-data’ cephfs pool. Increasing this value increases data resilience at the cost of consuming more real storage in the Ceph cluster. The default value is ‘3’.
Important: The ceph-osd-replication-count option must be set prior to adding the relation to the ceph-mon application. Otherwise, the pool’s configuration will need to be set by interfacing with the cluster directly.
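For example, the replica count could be set at deploy time, before the relation to ceph-mon is added (the value of 5 is illustrative only):
juju deploy --config ceph-osd-replication-count=5 ceph-fs
juju integrate ceph-fs:ceph-mds ceph-mon:mds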
Erasure coded pools
Erasure coded pools use a technique that allows for the same resiliency as replicated pools, yet reduces the amount of space required. Written data is split into data chunks and error correction chunks, which are both distributed throughout the cluster.
Note: Erasure coded pools require more memory and CPU cycles than replicated pools do.
When using erasure coded pools for CephFS file systems two pools will be created: a replicated pool (for storing MDS metadata) and an erasure coded pool (for storing the data written into a CephFS volume). The ceph-osd-replication-count configuration option only applies to the metadata (replicated) pool.
Erasure coded pools can be configured via options whose names begin with the ec- prefix.
Important: It is strongly recommended to tailor the ec-profile-k and ec-profile-m options to the needs of the given environment. These latter options have default values of ‘1’ and ‘2’ respectively, which result in the same space requirements as those of a replicated pool.
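As a sketch, a deployment using an erasure coded data pool with a 4+2 profile might look as follows (the k and m values are examples only and should be tuned to the environment):
juju deploy --config pool-type=erasure-coded --config ec-profile-k=4 --config ec-profile-m=2 ceph-fs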
See Ceph Erasure Coding in the OpenStack Charms Deployment Guide for more information.
Ceph BlueStore compression
This charm supports BlueStore inline compression for its associated Ceph storage pool(s). The feature is enabled by assigning a compression mode via the bluestore-compression-mode configuration option. The default behaviour is to disable compression.
The efficiency of compression depends heavily on what type of data is stored in the pool and the charm provides a set of configuration options to fine tune the compression behaviour.
Note: BlueStore compression is supported starting with Ceph Mimic.
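For example, compression could be enabled on a running deployment with a command such as (the ‘aggressive’ mode is one of BlueStore’s compression modes and is shown purely as an illustration):
juju config ceph-fs bluestore-compression-mode=aggressive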
Deployment
To deploy a single MDS node within an existing Ceph cluster:
juju deploy ceph-fs
juju integrate ceph-fs:ceph-mds ceph-mon:mds
High availability
Highly available CephFS is achieved by deploying multiple MDS servers (i.e. multiple ceph-fs units).
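For example, two additional MDS units could be added to an existing deployment with:
juju add-unit -n 2 ceph-fs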
Actions
This section lists Juju actions supported by the charm. Actions allow specific operations to be performed on a per-unit basis. To display action descriptions run juju actions ceph-fs. If the charm is not deployed then see file actions.yaml.
get-quota
remove-quota
set-quota
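For example, with the 3.x Juju client an action's parameters can be inspected before running it (the action name is one of those listed above; its parameters are described in actions.yaml):
juju show-action ceph-fs set-quota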
Bugs
Please report bugs on Launchpad.
For general charm questions refer to the OpenStack Charm Guide.