Apache Kafka

Channel        Revision   Published     Runs on
3/stable       185        23 Oct 2024   Ubuntu 22.04
3/candidate    188        13 Nov 2024   Ubuntu 22.04
3/beta         188        13 Nov 2024   Ubuntu 22.04
3/edge         189        13 Nov 2024   Ubuntu 22.04
juju deploy kafka --channel 3/stable

Platform: Ubuntu 22.04

Configurations

  • certificate_extra_sans | string

    Config option to add extra SANs to the ones used when requesting server certificates. The extra SANs are specified as comma-separated names to be added when requesting signed certificates. Use "{unit}" as a placeholder to be filled with the unit number, e.g. "worker-{unit}" will be translated to "worker-0" for unit 0 and "worker-1" for unit 1 when requesting the certificate.
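As a sketch, assuming the application was deployed under the name `kafka` (as in the deploy command above) and that `worker-{unit}.example.com` is a hypothetical hostname pattern for your environment, the option can be set with `juju config`:

```shell
# Add per-unit SANs to requested certificates; "{unit}" expands to the unit
# number, so unit 0 gets worker-0 and worker-0.example.com as extra SANs.
juju config kafka certificate_extra_sans='worker-{unit},worker-{unit}.example.com'
```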

  • compression_type | string

    Default: producer

    Specify the final compression type for a given topic. This configuration accepts the standard compression codecs ('gzip', 'snappy', 'lz4', 'zstd'). It additionally accepts 'uncompressed' which is equivalent to no compression; and 'producer' which means retain the original compression codec set by the producer.
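For example, to have brokers recompress topic data with zstd regardless of the codec chosen by producers (again assuming the application name `kafka`):

```shell
# Recompress all topic data with zstd on the broker side
juju config kafka compression_type=zstd

# Revert to retaining whatever codec the producer used (the default)
juju config kafka compression_type=producer
```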

  • log_cleaner_delete_retention_ms | string

    Default: 86400000

    How long delete (tombstone) records are retained.

  • log_cleaner_min_compaction_lag_ms | string

    Default: 0

    The minimum time a message will remain uncompacted in the log. Only applicable for logs that are being compacted.

  • log_cleanup_policy | string

    Default: delete

    The default cleanup policy for segments beyond the retention window. A comma-separated list of valid policies: 'delete' and 'compact'.

  • log_flush_interval_messages | string

    Default: 9223372036854775807

    The number of messages accumulated on a log partition before messages are flushed to disk.

  • log_flush_interval_ms | string

    Default: 9223372036854775807

    The maximum time in ms that a message in any topic is kept in memory before being flushed to disk.

  • log_flush_offset_checkpoint_interval_ms | int

    Default: 60000

    The frequency with which we update the persistent record of the last flush, which acts as the log recovery point.

  • log_level | string

    Default: INFO

    Level of logging for the different components operated by the charm. Possible values: ERROR, WARNING, INFO, DEBUG

  • log_message_timestamp_type | string

    Default: CreateTime

    Define whether the timestamp in the message is message create time or log append time. The value should be either 'CreateTime' or 'LogAppendTime'.

  • log_retention_bytes | string

    Default: -1

    The maximum size of the log before it is deleted.

  • log_retention_ms | string

    Default: -1

    The number of milliseconds to keep a log file before deleting it.
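The time- and size-based retention limits are commonly tuned together; a log is eligible for cleanup when either limit is reached. A hedged sketch, assuming the application name `kafka` and illustrative limits:

```shell
# Keep log data for 7 days (604800000 ms) or until a partition log reaches
# 10 GiB (10737418240 bytes), whichever limit is hit first
juju config kafka log_retention_ms=604800000 log_retention_bytes=10737418240
```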

  • log_segment_bytes | int

    Default: 1073741824

    The maximum size of a single log file.

  • message_max_bytes | int

    Default: 1048588

    The largest record batch size allowed by Kafka (after compression if compression is enabled). If this is increased and there are consumers older than 0.10.2, the consumers' fetch size must also be increased so that they can fetch record batches this large. In the latest message format version, records are always grouped into batches for efficiency. In previous message format versions, uncompressed records are not grouped into batches and this limit only applies to a single record in that case. This can be set per topic with the topic level max.message.bytes config.

  • offsets_topic_num_partitions | int

    Default: 50

    The number of partitions for the offset commit topic (should not change after deployment).

  • profile | string

    Default: production

    Profile representing the scope of deployment, and used to enable high-level customisation of sysconfigs, resource checks/allocation, warning levels, etc. Allowed values are: 'production', 'staging' and 'testing'.
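For example, to run a lightweight development deployment (values taken from the allowed list above; `kafka` is the application name used in the deploy command):

```shell
# Relax resource checks/allocation for a non-production environment
juju config kafka profile=testing
```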

  • replication_quota_window_num | int

    Default: 11

    The number of samples to retain in memory for replication quotas.

  • ssl_cipher_suites | string

    A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported.

  • ssl_principal_mapping_rules | string

    Default: DEFAULT

    A list of rules for mapping the distinguished name from the client certificate to a short name. Each rule starts with 'RULE:' and contains an expression of the form 'RULE:pattern/replacement/[LU]'. A valid set of rules could look something like this: 'RULE:^.[Cc][Nn]=([a-zA-Z0-9.-_@]).*$/$1/L,DEFAULT'
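A sketch of setting such a rule (the rule shown is illustrative, not a recommendation): it captures the CN component of the distinguished name, lowercases it via the trailing /L flag, and falls back to DEFAULT when the pattern does not match:

```shell
# Map e.g. "CN=Alice,OU=Clients" to the short name "alice"
juju config kafka ssl_principal_mapping_rules='RULE:^[Cc][Nn]=([a-zA-Z0-9._-]+).*$/$1/L,DEFAULT'
```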

  • transaction_state_log_num_partitions | int

    Default: 50

    The number of partitions for the transaction topic (should not change after deployment).

  • unclean_leader_election_enable | boolean

    Indicates whether to enable replicas not in the ISR set to be elected as leader as a last resort, even though doing so may result in data loss.

  • zookeeper_ssl_cipher_suites | string

    Specifies the enabled cipher suites to be used in ZooKeeper TLS negotiation (csv). Overrides any explicit value set via the zookeeper.ssl.ciphersuites system property (note the single word "ciphersuites"). The default value of null means the list of enabled cipher suites is determined by the Java runtime being used.