ceph-osd

  • OpenStack Charmers
  • Cloud
Channel           Revision  Published    Runs on
latest/edge       626       17 Nov 2024  Ubuntu 24.04, 23.10, 23.04, 22.10, 22.04, 20.04, 14.04, 12.04
latest/edge       623       17 Nov 2024  Ubuntu 24.04, 23.10, 23.04, 22.10, 22.04, 20.04, 14.04, 12.04
latest/edge       625       17 Nov 2024  Ubuntu 24.04, 23.10, 23.04, 22.10, 22.04, 20.04, 14.04, 12.04
latest/edge       624       17 Nov 2024  Ubuntu 24.04, 23.10, 23.04, 22.10, 22.04, 20.04, 14.04, 12.04
latest/edge       590       12 Jun 2024  Ubuntu 24.04, 23.10, 23.04, 22.10, 22.04, 20.04, 14.04, 12.04
latest/edge       584       12 Apr 2024  Ubuntu 24.04, 23.10, 23.04, 22.10, 22.04, 20.04, 14.04, 12.04
latest/edge       562       31 Jul 2023  Ubuntu 24.04, 23.10, 23.04, 22.10, 22.04, 20.04, 14.04, 12.04
latest/edge       260       17 Dec 2020  Ubuntu 24.04, 23.10, 23.04, 22.10, 22.04, 20.04, 14.04, 12.04
latest/edge       32        17 Dec 2020  Ubuntu 24.04, 23.10, 23.04, 22.10, 22.04, 20.04, 14.04, 12.04
quincy/stable     622       22 Oct 2024  Ubuntu 23.04, 22.10, 22.04, 20.04
quincy/candidate  622       15 Oct 2024  Ubuntu 23.04, 22.10, 22.04, 20.04
squid/candidate   626       17 Nov 2024  Ubuntu 24.04, 23.10, 23.04, 22.04, 20.04
squid/candidate   623       17 Nov 2024  Ubuntu 24.04, 23.10, 23.04, 22.04, 20.04
squid/candidate   625       17 Nov 2024  Ubuntu 24.04, 23.10, 23.04, 22.04, 20.04
squid/candidate   624       17 Nov 2024  Ubuntu 24.04, 23.10, 23.04, 22.04, 20.04
squid/candidate   590       12 Jun 2024  Ubuntu 24.04, 23.10, 23.04, 22.04, 20.04
squid/candidate   584       22 Apr 2024  Ubuntu 24.04, 23.10, 23.04, 22.04, 20.04
reef/stable       616       13 Aug 2024  Ubuntu 23.10, 23.04, 22.04, 20.04
reef/stable       577       01 Dec 2023  Ubuntu 23.10, 23.04, 22.04, 20.04
reef/candidate    616       12 Aug 2024  Ubuntu 23.10, 23.04, 22.04, 20.04
reef/candidate    584       12 Apr 2024  Ubuntu 23.10, 23.04, 22.04, 20.04
pacific/stable    615       12 Aug 2024  Ubuntu 20.04
octopus/stable    617       12 Aug 2024  Ubuntu 20.04, 18.04
octopus/stable    526       23 Jan 2023  Ubuntu 20.04, 18.04
nautilus/edge     513       04 Mar 2022  Ubuntu 21.10, 21.04, 20.10, 20.04, 18.04, 16.04
nautilus/edge     527       25 Feb 2022  Ubuntu 21.10, 21.04, 20.10, 20.04, 18.04, 16.04
mimic/edge        513       04 Mar 2022  Ubuntu 21.10, 21.04, 20.10, 20.04, 18.04, 16.04
mimic/edge        527       25 Feb 2022  Ubuntu 21.10, 21.04, 20.10, 20.04, 18.04, 16.04
luminous/edge     513       04 Mar 2022  Ubuntu 21.10, 21.04, 20.10, 20.04, 18.04, 16.04
luminous/edge     524       24 Feb 2022  Ubuntu 21.10, 21.04, 20.10, 20.04, 18.04, 16.04
juju deploy ceph-osd --channel quincy/stable

Platform:

Ubuntu 24.04, 23.10, 23.04, 22.10, 22.04, 21.10, 21.04, 20.10, 20.04, and 4 older releases

Configuration options

  • aa-profile-mode | string

    Default: disable

    Enable an AppArmor profile. Valid settings: 'complain', 'enforce' or 'disable'.

    NOTE: changing the value of this option is disruptive to a running Ceph cluster, as all ceph-osd processes must be restarted as part of changing the AppArmor profile enforcement mode. Always test in pre-production before enabling AppArmor on a live cluster.

    NOTE: the AppArmor 'enforce' profile is supported only if the osd-device name starts with "/dev".
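
    For example, assuming the application is deployed under the name ceph-osd (as in the deploy command above), the profile could be switched to complain mode with juju config; remember that this restarts every ceph-osd process on the affected units:

    # hypothetical example: 'complain' logs AppArmor denials without blocking them
    juju config ceph-osd aa-profile-mode=complain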

  • autotune | boolean

    Enabling this option will attempt to tune your network card sysctls and hard drive settings. This changes hard drive read ahead settings and max_sectors_kb. For the network card this will detect the link speed and make appropriate sysctl changes. WARNING: This option is DEPRECATED and will be removed in the next release. Exercise caution when enabling this feature; examine and confirm sysctl values are appropriate for your environment. See http://pad.lv/1798794 for a full discussion.

  • availability_zone | string

    Custom availability zone to provide to Ceph for the OSD placement

  • bdev-enable-discard | string

    Default: auto

    Enables async discard on devices. This option will enable/disable both bdev-enable-discard and bdev-async-discard options in ceph configuration at the same time. The default value "auto" will try to autodetect and should work in most cases. If you need to force a behaviour you can set it to "enable" or "disable". Only applies for Ceph Mimic or later.

  • bluestore | boolean

    Default: True

    Enable the BlueStore storage backend for OSD devices. Only supported with Ceph >= 12.2.0. Setting to 'False' will use FileStore as the storage format.

  • bluestore-block-db-size | int

    Size (in bytes) of a partition, file or LV to use for BlueStore metadata or RocksDB SSTs, provided on a per backend device basis.

    Example: a 128 GB device and 8 data devices provided in "osd-devices" gives 128 GB / 8 = 16 GB = 16000000000 bytes per device.

    A default value is not set, as it is calculated by ceph-disk (before Luminous) or by the charm itself when ceph-volume is used (Luminous and above).
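
    A minimal sketch of the sizing example above (the application name ceph-osd and the 128 GB / 8-device split are assumptions):

    # hypothetical sizing: one 128 GB DB device shared by 8 data devices = 16000000000 bytes each
    juju config ceph-osd bluestore-block-db-size=16000000000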

  • bluestore-block-wal-size | int

    Size (in bytes) of a partition, file or LV to use for the BlueStore WAL (RocksDB WAL), provided on a per backend device basis.

    Example: a 128 GB device and 8 data devices provided in "osd-devices" gives 128 GB / 8 = 16 GB = 16000000000 bytes per device.

    A default value is not set, as it is calculated by ceph-disk (before Luminous) or by the charm itself when ceph-volume is used (Luminous and above).

  • bluestore-compression-algorithm | string

    Default: lz4

    The default compressor to use (if any) if the per-pool property compression_algorithm is not set.

    NOTE: The recommended approach is to adjust this configuration option on the charm responsible for creating the specific pool you are interested in tuning. Changing the configuration option on the ceph-osd charm will affect ALL pools on the OSDs managed by the named application of the ceph-osd charm in the Juju model.

  • bluestore-compression-max-blob-size | int

    Chunks larger than this are broken into smaller blobs (of at most this size) before being compressed. The per-pool property compression_max_blob_size overrides this setting.

    NOTE: The recommended approach is to adjust this configuration option on the charm responsible for creating the specific pool you are interested in tuning. Changing the configuration option on the ceph-osd charm will affect ALL pools on the OSDs managed by the named application of the ceph-osd charm in the Juju model.

  • bluestore-compression-max-blob-size-hdd | int

    Default value of the BlueStore compression max blob size for rotational media. The per-pool property compression-max-blob-size-hdd overrides this setting.

    NOTE: The recommended approach is to adjust this configuration option on the charm responsible for creating the specific pool you are interested in tuning. Changing the configuration option on the ceph-osd charm will affect ALL pools on the OSDs managed by the named application of the ceph-osd charm in the Juju model.

  • bluestore-compression-max-blob-size-ssd | int

    Default value of the BlueStore compression max blob size for solid state media. The per-pool property compression-max-blob-size-ssd overrides this setting.

    NOTE: The recommended approach is to adjust this configuration option on the charm responsible for creating the specific pool you are interested in tuning. Changing the configuration option on the ceph-osd charm will affect ALL pools on the OSDs managed by the named application of the ceph-osd charm in the Juju model.

  • bluestore-compression-min-blob-size | int

    Chunks smaller than this are never compressed. The per-pool property compression_min_blob_size overrides this setting.

    NOTE: The recommended approach is to adjust this configuration option on the charm responsible for creating the specific pool you are interested in tuning. Changing the configuration option on the ceph-osd charm will affect ALL pools on the OSDs managed by the named application of the ceph-osd charm in the Juju model.

  • bluestore-compression-min-blob-size-hdd | int

    Default value of the BlueStore compression min blob size for rotational media. The per-pool property compression-min-blob-size-hdd overrides this setting.

    NOTE: The recommended approach is to adjust this configuration option on the charm responsible for creating the specific pool you are interested in tuning. Changing the configuration option on the ceph-osd charm will affect ALL pools on the OSDs managed by the named application of the ceph-osd charm in the Juju model.

  • bluestore-compression-min-blob-size-ssd | int

    Default value of the BlueStore compression min blob size for solid state media. The per-pool property compression-min-blob-size-ssd overrides this setting.

    NOTE: The recommended approach is to adjust this configuration option on the charm responsible for creating the specific pool you are interested in tuning. Changing the configuration option on the ceph-osd charm will affect ALL pools on the OSDs managed by the named application of the ceph-osd charm in the Juju model.

  • bluestore-compression-mode | string

    The default policy for using compression if the per-pool property compression_mode is not set. 'none' means never use compression. 'passive' means use compression when clients hint that data is compressible. 'aggressive' means use compression unless clients hint that data is not compressible. 'force' means use compression under all circumstances, even if the clients hint that the data is not compressible.

    NOTE: The recommended approach is to adjust this configuration option on the charm responsible for creating the specific pool you are interested in tuning. Changing the configuration option on the ceph-osd charm will affect ALL pools on the OSDs managed by the named application of the ceph-osd charm in the Juju model.
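
    As an illustrative sketch only (per the NOTE above, pool-level tuning on the pool-creating charm is the recommended route), a cluster-wide passive policy could look like this, assuming the application name ceph-osd:

    # affects ALL pools on the OSDs managed by this application
    juju config ceph-osd bluestore-compression-mode=passive bluestore-compression-algorithm=lz4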

  • bluestore-compression-required-ratio | float

    The ratio of the size of the data chunk after compression relative to the original size must be at least this small in order to store the compressed version. The per-pool property compression-required-ratio overrides this setting.

    NOTE: The recommended approach is to adjust this configuration option on the charm responsible for creating the specific pool you are interested in tuning. Changing the configuration option on the ceph-osd charm will affect ALL pools on the OSDs managed by the named application of the ceph-osd charm in the Juju model.

  • bluestore-db | string

    Path to a BlueStore DB block device or file. If set to a separate physical device that is faster than the data devices, all of the filesystem metadata (RocksDB) is stored there, and the Write-Ahead Log (WAL) is placed there as well, unless a further separate bluestore-wal device is configured (which is only needed if it is faster again than the bluestore-db device). This block device is used as an LVM PV, and space is then allocated for each data device as needed based on the bluestore-block-db-size setting.

  • bluestore-wal | string

    Path to a BlueStore WAL block device or file. Should only be set if using a separate physical device that is faster than the DB device (such as an NVDIMM or faster SSD). Otherwise BlueStore automatically maintains the WAL inside of the DB device. This block device is used as an LVM PV and then space is allocated for each block device as needed based on the bluestore-block-wal-size setting.
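
    A sketch of placing the DB, and optionally the WAL, on faster devices; the device paths below are purely illustrative and must exist on the OSD hosts:

    # hypothetical devices: a fast NVMe for RocksDB, an even faster device for the WAL
    juju config ceph-osd bluestore-db=/dev/nvme0n1
    juju config ceph-osd bluestore-wal=/dev/pmem0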

  • ceph-cluster-network | string

    The IP address and netmask of the cluster (back-side) network (e.g., 192.168.0.0/24). If multiple networks are to be used, a space-delimited list of a.b.c.d/x can be provided.

  • ceph-public-network | string

    The IP address and netmask of the public (front-side) network (e.g., 192.168.0.0/24). If multiple networks are to be used, a space-delimited list of a.b.c.d/x can be provided.
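
    For example, separating client and replication traffic with illustrative subnets (the application name ceph-osd and the CIDRs are assumptions):

    # example subnets only; substitute the CIDRs of your own networks
    juju config ceph-osd ceph-public-network=10.10.0.0/24 ceph-cluster-network=10.20.0.0/24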

  • config-flags | string

    User-provided Ceph configuration. Supports a string representation of a python dictionary where each top-level key represents a section in the ceph.conf template. You may only use sections supported in the template.

    WARNING: this is not the recommended way to configure the underlying services that this charm installs and is used at the user's own risk. This option is mainly provided as a stop-gap for users who either want to test the effect of modifying some config or who have found a critical bug in the way the charm has configured their services and need it fixed immediately. We ask that whenever this is used, the user consider opening a bug on this charm at http://bugs.launchpad.net/charms providing an explanation of why the config was needed, so that we may consider it for inclusion as a natively supported config in the charm.
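
    A hedged sketch of the expected dictionary format; the section name and the ceph.conf key below are illustrative and must be supported by the charm's template:

    # illustrative only: adds a setting under the [osd] section of ceph.conf
    juju config ceph-osd config-flags="{'osd': {'osd max write size': 256}}"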

  • crush-initial-weight | float

    The initial CRUSH weight for OSDs newly added to the CRUSH map. Use this option only if you wish to set the weight for newly added OSDs in order to gradually increase the weight over time. Be very aware that setting this overrides the default setting, which can lead to imbalance in the cluster, especially if there are OSDs of different sizes in use. By default, the initial CRUSH weight for a newly added OSD is set to its volume size in TB. Leave this option unset to use the default provided by Ceph itself. This option only affects NEW OSDs, not existing ones.
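
    For example, to have new OSDs join with zero weight so that data is not rebalanced onto them until their weight is raised manually (application name assumed):

    # new OSDs enter the CRUSH map with weight 0; existing OSDs are unaffected
    juju config ceph-osd crush-initial-weight=0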

  • customize-failure-domain | boolean

    Setting this to true will tell Ceph to replicate across Juju availability zones instead of specifically by host.

  • ephemeral-unmount | string

    Cloud instances provide ephemeral storage which is normally mounted on /mnt.

    Setting this option to the path of the ephemeral mountpoint will force an unmount of the corresponding device so that it can be used as an OSD storage device. This is useful for testing purposes (cloud deployment is not a typical use case).

  • harden | string

    Apply system hardening. Supports a space-delimited list of modules to run. Supported modules currently include os, ssh, apache and mysql.

  • ignore-device-errors | boolean

    By default, the charm will raise errors if a whitelisted device is found, but for some reason the charm is unable to initialize the device for use by Ceph.

    Setting this option to 'True' will result in the charm classifying such problems as warnings only and will not result in a hook error.

  • key | string

    Key ID to import to the apt keyring to support use with arbitrary source configuration from outside of Launchpad archives or PPAs. Accepted formats are a GPG key in ASCII armor format (including BEGIN and END markers) or a key ID.

  • loglevel | int

    Default: 1

    OSD debug level. Max is 20.

  • max-sectors-kb | int

    Default: 1048576

    This parameter will adjust every block device in your server to allow greater IO operation sizes. If you have a RAID card with cache on it consider tuning this much higher than the 1MB default. 1MB is a safe default for spinning HDDs that don't have much cache.

  • nagios_context | string

    Default: juju

    Used by the nrpe-external-master subordinate charm. A string that will be prepended to the instance name to set the hostname in Nagios, so the hostname would be something like juju-myservice-0. If you're running multiple environments with the same services in them, this allows you to differentiate between them.

  • nagios_servicegroups | string

    A comma-separated list of nagios servicegroups. If left empty, the nagios_context will be used as the servicegroup

  • osd-devices | string

    The devices to format and set up as OSD volumes. These are the range of devices that will be checked for and used across all service units, in addition to any volumes attached via the --storage flag during deployment.

    For Ceph < 14.2.0 (Nautilus) these can also be directories instead of devices. If the value does not start with "/dev" then it will be interpreted as a directory. NOTE: if the value does not start with "/dev" then the AppArmor "enforce" profile is not supported.
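
    For example, at deploy time or later via juju config; the device paths are placeholders for whatever disks actually exist on the units:

    # placeholder device paths
    juju deploy -n 3 ceph-osd --channel quincy/stable --config osd-devices='/dev/sdb /dev/sdc'
    juju config ceph-osd osd-devices='/dev/sdb /dev/sdc /dev/sdd'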

  • osd-encrypt | boolean

    By default, the charm will not encrypt Ceph OSD devices; however, by setting osd-encrypt to True, Ceph's dmcrypt support will be used to encrypt OSD devices.

    Specifying this option on a running Ceph OSD node will have no effect until new disks are added, at which point new disks will be encrypted.

  • osd-encrypt-keymanager | string

    Default: ceph

    Keymanager to use for storage of dm-crypt keys used for OSD devices; by default 'ceph' itself will be used for storage of keys, making use of the key/value storage provided by the ceph-mon cluster.

    Alternatively, 'vault' may be used for storage of dm-crypt keys. Both approaches ensure that keys are never written to the local filesystem. Using 'vault' also requires a relation to the vault charm.
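
    A sketch of enabling encryption with Vault as the key manager; it assumes a vault application already exists in the model, and per the osd-encrypt note only newly added disks will be encrypted:

    juju config ceph-osd osd-encrypt=true osd-encrypt-keymanager=vault
    # relate to the existing vault application (older Juju releases use 'juju add-relation')
    juju integrate ceph-osd vault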

  • osd-format | string

    Default: xfs

    Format of filesystem to use for OSD devices. Supported formats include:

    • xfs (default with Ceph >= 0.48.3)
    • ext4 (the only option with Ceph < 0.48.3)
    • btrfs (experimental and not recommended)

    Only supported with Ceph >= 0.48.3. Used with the FileStore storage backend. Always applies prior to Ceph 12.2.0; otherwise, only applies when the "bluestore" option is False.

  • osd-journal | string

    The device to use as a shared journal drive for all OSDs on a node. By default a journal partition will be created on each OSD volume device for use by that OSD. The default behaviour is also the fallback for the case where the specified journal device does not exist on a node. Only supported with Ceph >= 0.48.3.

  • osd-journal-size | int

    Default: 1024

    Ceph OSD journal size. The journal size should be at least twice the product of the expected drive speed multiplied by the filestore max sync interval. However, the most common practice is to partition the journal drive (often an SSD) and mount it such that Ceph uses the entire partition for the journal. Only supported with Ceph >= 0.48.3.

  • osd-max-backfills | int

    The maximum number of backfills allowed to or from a single OSD.

    Setting this option on a running Ceph OSD node will not affect running OSD devices, but will add the setting to ceph.conf for the next restart.

  • osd-recovery-max-active | int

    The number of active recovery requests per OSD at one time. More requests will accelerate recovery, but the requests place an increased load on the cluster.

    Setting this option on a running Ceph OSD node will not affect running OSD devices, but will add the setting to ceph.conf for the next restart.
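
    As an illustrative example of throttling recovery impact (the values are assumptions, and per the note above they are only written to ceph.conf for the next restart):

    # example values only
    juju config ceph-osd osd-max-backfills=1 osd-recovery-max-active=1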

  • prefer-ipv6 | boolean

    If True enables IPv6 support. The charm will expect network interfaces to be configured with an IPv6 address. If set to False (default) IPv4 is expected.

    NOTE: these charms do not currently support IPv6 privacy extension. In order for this charm to function correctly, the privacy extension must be disabled and a non-temporary address must be configured/available on your network interface.

  • source | string

    Default: yoga

    Optional configuration to support use of additional sources such as the following (see the example below):

    • ppa:myteam/ppa
    • cloud:bionic-ussuri
    • cloud:xenial-proposed/queens
    • http://my.archive.com/ubuntu main

    The last option should be used in conjunction with the key configuration option.
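
    For example, selecting an Ubuntu Cloud Archive pocket, or a third-party archive together with its signing key; the pocket, archive URL and key below are placeholders:

    juju config ceph-osd source=cloud:focal-yoga
    # a third-party archive should be paired with the 'key' option
    juju config ceph-osd source='http://my.archive.com/ubuntu main' key='<ascii-armored-GPG-key-or-keyid>'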

  • sysctl | string

    Default: { kernel.pid_max: 2097152, vm.max_map_count: 524288, kernel.threads-max: 2097152 }

    YAML-formatted associative array of sysctl key/value pairs to be set persistently. By default we set pid_max, max_map_count and threads-max to a high value to avoid problems with large numbers (>20) of OSDs recovering. Very large clusters should set those values even higher (e.g. the max for kernel.pid_max is 4194303).
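
    For example, raising kernel.pid_max to its maximum on a very large cluster while restating the other defaults (a sketch only, assuming the application name ceph-osd):

    # sketch only: restates the default mapping with a higher pid_max
    juju config ceph-osd sysctl="{ kernel.pid_max: 4194303, vm.max_map_count: 524288, kernel.threads-max: 2097152 }"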

  • tune-osd-memory-target | string

    Set to tune the value of osd_memory_target.

    If unset or set to an empty string, the charm will not update the value for ceph. This means that a new deployment with this value unset will default to ceph's default (4GB). And if a value was set, but then later unset, ceph will remain configured with the last set value. This is to allow for manually configuring this value in ceph without interference from the charm.

    If set to "{n}%" (where n is an integer), the value will be set as follows:

    total ram * (n/100) / number of osds on the host

    If set to "{n}GB" (n is an integer), osd_memory_target will be set per OSD directly.

    Take care when choosing a value so that it both provides enough memory for Ceph and leaves enough memory for the system and other workloads to function. For common cases, it is recommended to stay within the bounds of 4GB < value < 90% of system memory. If these bounds are broken, a warning will be emitted by the charm, but the value will still be set.
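
    As a worked example of the two forms, assume a host with 256 GB of RAM and 8 OSDs (both numbers are hypothetical): "20%" yields 256 GB * 0.20 / 8 = 6.4 GB per OSD, which sits inside the recommended bounds.

    # hypothetical host: 256 GB RAM, 8 OSDs -> 256 * 0.20 / 8 = 6.4 GB per OSD
    juju config ceph-osd tune-osd-memory-target=20%
    # or set an absolute per-OSD value directly
    juju config ceph-osd tune-osd-memory-target=6GB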

  • use-direct-io | boolean

    Default: True

    Configure use of direct IO for OSD journals.

  • use-syslog | boolean

    If set to True, supporting services will log to syslog.