Bcache Tuning

  • Canonical BootStack Charmers
  • Storage
Channel           Revision  Published    Runs on
latest/stable     21        01 Nov 2023  Ubuntu 22.04, 20.04, 18.10, 18.04, 16.04
latest/stable     18        28 Jul 2023  Ubuntu 22.04, 20.04, 18.10, 18.04, 16.04
latest/stable     10        28 Apr 2022  Ubuntu 22.04, 20.04, 18.10, 18.04, 16.04
latest/stable     8         14 Oct 2021  Ubuntu 22.04, 20.04, 18.10, 18.04, 16.04
latest/candidate  21        18 Oct 2023  Ubuntu 22.04, 20.04, 18.10, 18.04, 16.04
latest/candidate  18        05 Jul 2023  Ubuntu 22.04, 20.04, 18.10, 18.04, 16.04
latest/candidate  10        21 Apr 2022  Ubuntu 22.04, 20.04, 18.10, 18.04, 16.04
latest/candidate  8         14 Oct 2021  Ubuntu 22.04, 20.04, 18.10, 18.04, 16.04
latest/edge       21        18 Oct 2023  Ubuntu 22.04, 20.04, 18.04, 16.04
latest/edge       19        14 Aug 2023  Ubuntu 22.04, 20.04, 18.04, 16.04
latest/edge       10        17 Mar 2022  Ubuntu 22.04, 20.04, 18.04, 16.04
juju deploy bcache-tuning
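bcache-tuning is a subordinate charm, so deploying it places no units on its own; it has to be related to a principal application running on the machines that hold the bcache devices. A minimal sketch, assuming ceph-osd as the principal:

juju add-relation bcache-tuning ceph-osd   # 'juju integrate' on Juju 3.x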

Platform: Ubuntu 22.04, 20.04, 18.10, 18.04, 16.04

Configurations

  • cache_mode | string

    Default: unmanaged

    One of writethrough, writeback, writearound, none, or unmanaged. The unmanaged setting instructs the charm that the cache mode is managed externally, e.g. via MAAS or manually.
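
    For example, to switch the cache mode through the charm and confirm it on a unit (a sketch assuming a device named bcache0; when the sysfs file is read, the active mode is shown in brackets):

        juju config bcache-tuning cache_mode=writeback
        juju ssh bcache-tuning/0 -- cat /sys/block/bcache0/bcache/cache_mode
        # writethrough [writeback] writearound none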

  • congested_read_threshold_us | int

    Even with a cache, traffic still goes to the spindle on cache misses. In the real world, SSDs don't always keep up with disks - particularly with slower SSDs, many disks being cached by one SSD, or mostly sequential IO - so you want to avoid being bottlenecked by the SSD and having it slow everything down. To avoid that, bcache tracks latency to the cache device and gradually throttles traffic if the latency exceeds a threshold (it does this by cranking down the sequential bypass). You can disable this if you need to by setting the thresholds to 0; a combined example follows congested_write_threshold_us below. Note that the kernel default is 2000 us (2 milliseconds) for reads.

  • congested_write_threshold_us | int

    Even with a cache, traffic still goes to the spindle on cache misses. In the real world, SSDs don't always keep up with disks - particularly with slower SSDs, many disks being cached by one SSD, or mostly sequential IO - so you want to avoid being bottlenecked by the SSD and having it slow everything down. To avoid that, bcache tracks latency to the cache device and gradually throttles traffic if the latency exceeds a threshold (it does this by cranking down the sequential bypass). You can disable this if you need to by setting the thresholds to 0, as shown below. Note that the kernel default is 20000 us (20 milliseconds) for writes.
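
    A sketch of disabling congestion tracking entirely by zeroing both thresholds; these settings correspond to the congested_read_threshold_us and congested_write_threshold_us files under /sys/fs/bcache/<cache-set-uuid>/:

        juju config bcache-tuning congested_read_threshold_us=0 congested_write_threshold_us=0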

  • debug | boolean

    Enable debug logging.

  • nagios_context | string

    Default: juju

    Used by the nrpe subordinate charms. A string that will be prepended to the instance name to set the host name in Nagios; for instance, the host name would be something like juju-myservice-0. If you're running multiple environments with the same services in them, this allows you to differentiate between them.
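
    For example, assuming two models that both run the same service, distinct contexts keep the Nagios host names apart:

        juju config bcache-tuning nagios_context=production   # host names like production-myservice-0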

  • nagios_servicegroups | string

    A comma-separated list of Nagios servicegroups. If left empty, the nagios_context will be used as the servicegroup.

  • readahead | int

    Size of readahead that should be performed. Defaults to 0. If set to e.g. 1M, it will round cache miss reads up to that size, but without overlapping existing cache entries.
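
    A sketch of enabling a 1 MiB readahead; since the option is an integer, the value is assumed here to be in bytes. Note that recent kernels have removed bcache's self-defined readahead, so on those kernels this has no effect:

        juju config bcache-tuning readahead=1048576   # assumed to be bytes (1 MiB)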

  • sequential_cutoff | int

    A sequential IO will bypass the cache once it passes this threshold; the most recent 128 IOs are tracked, so sequential IO can be detected even when it isn't all done at once.
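
    The upstream bcache default is 4.0M. A sketch lowering the cutoff so that more IO is cached, then reading the live value from a unit (assumes a bcache0 device; the configured value is assumed to be in bytes):

        juju config bcache-tuning sequential_cutoff=1048576   # 1 MiB
        juju ssh bcache-tuning/0 -- cat /sys/block/bcache0/bcache/sequential_cutoff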

  • writeback_percent | int

    Default: 10

    If nonzero, bcache tries to keep around this percentage of the cache dirty by throttling background writeback and using a PD controller to smoothly adjust the rate.
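
    For example, a sketch raising the dirty target for a write-heavy workload, then checking how much dirty data the cache currently holds via the standard dirty_data statistic (assumes a bcache0 device):

        juju config bcache-tuning writeback_percent=40
        juju ssh bcache-tuning/0 -- cat /sys/block/bcache0/bcache/dirty_data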

  • writeback_rate_fp_term_factor | int

    Default: 1

    This is a tuning option that increases the value of writeback_rate_fp_term_{low|mid|high} by this factor. It is useful for optimising the writeback rate when one cache device is shared by multiple backing devices, especially when those backing devices have the same size and similar workloads, e.g. on an OSD node. It is suggested to set this value to the number of backing devices.
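
    Following that suggestion, a sketch for a cache device shared by three OSD backing devices; the factor scales the writeback_rate_fp_term_{low|mid|high} sysfs tunables on kernels that expose them:

        juju config bcache-tuning writeback_rate_fp_term_factor=3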