Microceph

  • By OpenStack Charmers
Channel         Revision  Published     Runs on
latest/edge     68        12 Jul 2024   Ubuntu 22.04
reef/stable     50        25 Jun 2024   Ubuntu 22.04
reef/candidate  59        25 Jun 2024   Ubuntu 22.04
reef/beta       67        10 Jul 2024   Ubuntu 22.04
reef/edge       67        04 Jul 2024   Ubuntu 22.04
Overview

The MicroCeph charm enables users to deploy, scale, and operate a Ceph cluster using the MicroCeph snap package and Juju.

Usage

Configuration

To display all configuration options and their current values, run juju config microceph. If the application is not yet deployed, see the charm’s Configure tab on Charmhub. The Juju documentation also provides general guidance on configuring applications.
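As a sketch, assuming a deployed microceph application, a single option can be read or changed like this (snap-channel is one of the charm’s options, described under Upgrades below):

```shell
# List all options with their current values.
juju config microceph

# Read a single option.
juju config microceph snap-channel

# Set an option (here: pin MicroCeph to the reef/stable snap channel).
juju config microceph snap-channel=reef/stable
```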

Deployment

A minimal Ceph cluster is typically deployed across three nodes.

juju deploy -n 3 microceph --channel latest/edge --to 0,1,2

Add the disks on each node:

juju run microceph/0 add-osd <DISK PATH>,<DISK PATH>
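For example, assuming each node has spare disks at the hypothetical paths /dev/sdb and /dev/sdc, OSDs could be added on every unit in turn:

```shell
# Hypothetical disk paths; substitute the actual devices present on each node.
juju run microceph/0 add-osd /dev/sdb,/dev/sdc
juju run microceph/1 add-osd /dev/sdb,/dev/sdc
juju run microceph/2 add-osd /dev/sdb,/dev/sdc
```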

Juju storage

This charm supports Juju storage. Disks based on such storage can either be enrolled as standalone OSDs or be provisioned as simple block storage devices.

A standalone OSD in this context is a regular OSD with an on-device WAL/DB area, whereas a block storage device requires manual configuration by the administrator (e.g. by employing the add-osd action to integrate additional OSDs into the cluster).

These two provisioning types are identified by the osd-standalone and manual storage names when using the juju add-storage command.

Examples:

To add four OSDs on unit microceph/1 of 10 GiB each using the cinder storage provider:

juju add-storage microceph/1 osd-standalone='cinder,10G,4'

To provision five block devices on unit microceph/6 of 8 GiB each using the ebs storage provider:

juju add-storage microceph/6 manual='ebs,8G,5'

To add two OSDs on unit microceph/3 of 16 GiB each using the loop storage provider (local loopback block device):

juju add-storage microceph/3 osd-standalone='loop,16G,2'
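After adding storage, the resulting volumes and their attachment state can be inspected with Juju’s storage commands; a sketch (storage instance IDs such as osd-standalone/0 come from the listing):

```shell
# List all storage instances in the model and their attachment state.
juju storage

# Show details for one storage instance, by the ID reported above.
juju show-storage osd-standalone/0
```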

Network spaces

This charm supports the use of Juju network spaces. This feature optionally allows specific types of the application’s network traffic to be bound to subnets that the underlying hardware is connected to.

Note: Spaces must be configured in the backing cloud prior to deployment.

The microceph charm exposes the following traffic types (bindings):

  1. public: cluster public services bind to this space for consumption by Ceph clients.
  2. cluster: cluster-internal services bind to this space for OSD replication, heartbeats, and other internal traffic.
  3. admin: MicroCeph daemons bind to this space for cluster orchestration.

For example, provided that the spaces ‘data-space’ and ‘cluster-space’ exist, the deploy command above could look like this:

juju deploy ch:microceph --storage osd-standalone="loop,4G,3" --bind "public=data-space cluster=cluster-space"

Alternatively, configuration can be provided as part of a bundle:

    microceph:
      charm: ch:microceph
      num_units: 1
      bindings:
        public: data-space
        cluster: cluster-space
        admin: alpha

Refer to the Ceph Network Reference to learn about the implications of segregating Ceph network traffic.

Actions

This section lists Juju actions supported by the charm. Actions allow specific operations to be performed on a per-unit basis. To display action descriptions run juju actions microceph.

  • list-disks
  • add-osd
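A typical workflow, sketched below, uses list-disks to discover unenrolled disks on a unit and then enrols one with add-osd (the disk path shown is hypothetical):

```shell
# Discover the disks available on the unit.
juju run microceph/0 list-disks

# Enrol a disk reported above as an OSD (hypothetical path).
juju run microceph/0 add-osd /dev/sdb
```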

Upgrades

MicroCeph is installed via a snap, so upon installation the snap-channel configuration option determines the MicroCeph version to be deployed. This option accepts a channel specification in the form track/risk-level (e.g. quincy/stable or reef/edge).

Upgrades are initiated by changing the snap-channel option; the charm will then attempt to upgrade MicroCeph to the specified channel. To minimise downtime, the upgrade is performed as a rolling upgrade: nodes are upgraded sequentially, so only the services running on a single node are offline at any given time.

Caution: During an upgrade, all MicroCeph services on a node will nonetheless be restarted. Depending on cluster topology, this still might result in a service interruption.
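A rolling upgrade to a newer channel might look like the following sketch (reef/stable is one of the charm’s published channels; timing will vary with cluster size):

```shell
# Trigger a rolling upgrade by pointing the snap at the new channel.
juju config microceph snap-channel=reef/stable

# Watch the units upgrade one at a time.
juju status microceph --watch 5s
```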

The following safety mechanisms will block an upgrade from proceeding:

  • Any running non-MicroCeph service programs (e.g. the ceph admin tool) will block the upgrade. You will need to manually stop them.
  • In accordance with upstream Ceph constraints, downgrades of a major version are not supported and will block the upgrade. For example, upgrading from the quincy to the reef track is supported, while downgrading from reef to quincy is not. This does not apply to risk levels (e.g. an upgrade from stable to candidate is supported, as well as a downgrade from candidate to stable).
  • Upgrades are only performed on healthy clusters. Upgrading a cluster that does not have a Ceph health state of HEALTH_OK will block the upgrade.
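Given the health requirement above, it is worth confirming cluster state before changing snap-channel; a sketch, assuming the MicroCeph snap’s microceph.ceph wrapper is available on the unit:

```shell
# Check cluster health from a unit; expect HEALTH_OK before upgrading.
juju ssh microceph/0 -- sudo microceph.ceph status
```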

Integrations

The MicroCeph charm is expected to be integrated with the OpenStack control plane via cross-model relations. For example, to integrate the glance application in a k8s model with microceph, run the following commands:

juju offer microceph:ceph
juju integrate -m k8s glance:ceph admin/controller.microceph

Integrating with ceph-radosgw

With a ceph-radosgw unit in a Juju model, it can be related to charm-microceph:

juju integrate ceph-radosgw charm-microceph

Keep in mind, however, that at least one OSD is required for storage. In other words, the add-osd action must have been run at least once on a charm-microceph unit. If no storage is available when the integration is made, the ceph-radosgw unit will remain blocked until an OSD is added to charm-microceph.

Bugs

Please report bugs on GitHub. For general charm questions, refer to the OpenStack Charm Guide.
