OVN Chassis

  • By OpenStack Charmers
  • Cloud
`juju deploy ovn-chassis`
Note: Juju 2.9 or later is required to run this command.
| Channel | Version | Revision | Published | Base |
|---|---|---|---|---|
| latest/stable | 17 | 17 | Yesterday | 21.04, 20.10, 20.04, 18.04 |
| latest/candidate | 18 | 18 | 14 Oct 2021 | 21.10, 21.04, 20.10, 20.04, 18.04 |
  • bridge-interface-mappings | string

    A space-delimited list of key-value pairs that map a network interface MAC address or name to a local OVS bridge to which it should be connected.

    Note: MAC addresses of physical interfaces that belong to a bond will be resolved to the bond name, and the bond will be added to the OVS bridge.

    Bridges referenced here must also be mentioned in the `ovn-bridge-mappings` configuration option. If a match is found, the bridge will be created if it does not already exist, the matched interface will be added to it, and the mapping found in `ovn-bridge-mappings` will be added to the local OVSDB under the `external_ids:ovn-bridge-mappings` key in the Open_vSwitch table.

    An example value mapping two network interfaces (one by MAC address, one by name) to two OVS bridges:

        br-internet:00:00:5e:00:00:42 br-provider:enp3s0f0

    Note: OVN provides distributed East/West and highly available North/South routing by default. You do not need to add provider networks for external Layer 3 connectivity to all chassis. Doing so creates a scaling problem at the physical network layer that must be resolved with globally shared Layer 2 (does not scale) or tunneling at the top-of-rack switch layer (adds complexity), and it is generally not a recommended configuration. Instead, add provider networks for external Layer 3 connectivity to the individual chassis located near the datacenter border gateways by adding the MAC addresses of the physical interfaces of those units.
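    Tying this together with `ovn-bridge-mappings`, a deploy-time bundle overlay might look like the following. This is a minimal sketch: the bridge names, physical network names, MAC address, and interface name are illustrative, reusing the example values above.

    ```yaml
    # Hypothetical bundle overlay; all values are illustrative.
    applications:
      ovn-chassis:
        options:
          # Physical network name -> local OVS bridge.
          ovn-bridge-mappings: physnet1:br-internet physnet2:br-provider
          # Bridge -> interface, matched per unit by MAC address or name.
          bridge-interface-mappings: >-
            br-internet:00:00:5e:00:00:42 br-provider:enp3s0f0
    ```

    An overlay like this would be applied at deploy time with `juju deploy ./bundle.yaml --overlay ./overlay.yaml`; the same options can also be set post-deploy with `juju config ovn-chassis`.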

  • debug | boolean

    Enable debug logging

  • disable-mlockall | boolean

    Disable Open vSwitch's use of mlockall().

    When mlockall() is enabled, all of ovs-vswitchd's process memory is locked into physical RAM and prevented from paging. This avoids network interruptions but can lead to memory exhaustion in memory-constrained environments.

    By default, the charm disables mlockall() if it is running in a container; otherwise it leaves mlockall() enabled.

    Changing this config option will restart openvswitch-switch, resulting in an expected data plane outage while the service restarts.

  • dpdk-bond-config | string

    Default: :balance-tcp:active:fast

    Space-delimited list of `bond:mode:lacp:lacp-time` entries, where the fields mean:

    * bond - the bond name. If not specified, the configuration applies to all bonds.
    * mode - the bond mode of operation. Possible values are:
      - active-backup - no load balancing is offered in this mode; only one of the member ports is active/used at a time.
      - balance-slb - a static load-balancing mode. Traffic is load balanced between member ports based on the source MAC and VLAN.
      - balance-tcp - the preferred bonding mode. It offers traffic load balancing based on 5-tuple header fields. LACP must be enabled at both endpoints to use this mode. The aggregate link will fall back to the default mode (active-backup) in the event of LACP negotiation failure.
    * lacp - active, passive, or off.
    * lacp-time - fast or slow; the LACP negotiation time interval (30 ms or 1 second).
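    As an example, the option could pin one bond to balance-slb with passive LACP while keeping the default for all other bonds. This is a sketch: `dpdk-bond0` is an assumed bond name, and an entry with an empty bond field (matching the default value's leading `:`) applies to all bonds.

    ```yaml
    # Hypothetical bundle overlay; bond name is illustrative.
    applications:
      ovn-chassis:
        options:
          # bond:mode:lacp:lacp-time; the second entry has an empty bond
          # name and therefore applies to all remaining bonds.
          dpdk-bond-config: "dpdk-bond0:balance-slb:passive:slow :balance-tcp:active:fast"
    ```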

  • dpdk-bond-mappings | string

    Space-delimited list of bond:port mappings. The DPDK-assigned ports will be added to their corresponding bond, which in turn will be put into the bridge as specified in the `data-port` configuration option.

    This option is supported only when `enable-dpdk` is true.
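    A sketch combining DPDK enablement with a bond mapping. The bond name is illustrative, and the ports are assumed here to be identified by the MAC addresses of the DPDK-capable interfaces; verify the expected port identifier format for your deployment.

    ```yaml
    # Hypothetical bundle overlay; bond name and MAC addresses are
    # illustrative assumptions.
    applications:
      ovn-chassis:
        options:
          enable-dpdk: true
          # bond:port mappings; both ports are placed into dpdk-bond0.
          dpdk-bond-mappings: >-
            dpdk-bond0:00:53:00:00:00:11 dpdk-bond0:00:53:00:00:00:12
    ```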

  • dpdk-driver | string

    Kernel userspace device driver to use for DPDK devices. Valid values include:

    * vfio-pci
    * uio_pci_generic

    Only used when DPDK is enabled.

  • dpdk-socket-cores | int

    Default: 1

    Number of cores to allocate to non-datapath DPDK threads per NUMA socket in deployed systems.

    Only used when DPDK is enabled.

  • dpdk-socket-memory | int

    Default: 1024

    Amount of hugepage memory in MB to allocate per NUMA socket in deployed systems.

    Only used when DPDK is enabled.

    NOTE: Please check that the value set here is large enough to accommodate the MTU size being used. For more information please refer to https://docs.openvswitch.org/en/latest/topics/dpdk/memory/#shared-memory-calculations

  • enable-auto-restarts | boolean

    Default: True

    Allow the charm and packages to restart services automatically when required.

  • enable-dpdk | boolean

    Enable DPDK fast userspace networking. This requires DPDK-supported network interface drivers and must be used in conjunction with the `data-port` configuration option to configure each bridge with an appropriate DPDK-enabled network device.

  • enable-hardware-offload | boolean

    NOTE: Support for hardware offload in conjunction with OVN is an experimental feature.

    Enable support for hardware offload of flows from Open vSwitch to supported network adapters. This feature has only been tested on Mellanox ConnectX-5 adapters.

    Enabling this option will make use of the `sriov-numvfs` option to configure the VF functions of the physical network adapters detected on each unit.

    This option must not be enabled together with either `enable-sriov` or `enable-dpdk`.

    NOTE: Changing this value will not perform hardware-specific adaptation. A manual restart of the hardware-specific adaptation service or a reboot of the system is required to apply the configuration.

  • enable-sriov | boolean

    Enable the SR-IOV NIC agent on deployed units; use with `sriov-device-mappings` to map SR-IOV devices to underlying provider networks. Enabling this option allows instances to be plugged directly into SR-IOV VF devices connected to underlying provider networks, alongside the default Open vSwitch networking options.

  • nagios_context | string

    Default: juju

    A string that will be prepended to the instance name to set the host name in Nagios, e.g. juju-myservice-0. If you are running multiple environments with the same services in them, this allows you to differentiate between them.

  • nagios_servicegroups | string

    Comma-separated list of Nagios servicegroups for the service checks.

  • networking-tools-source | string

    Default: ppa:openstack-charmers/networking-tools

    Package archive source to use for utilities associated with configuring SR-IOV VFs and switchdev mode in Mellanox network adapters.

    This PPA can be mirrored for offline deployments.

  • new-units-paused | boolean

    Start new units of the application as paused.

    When set to 'true', newly deployed units of the application will install the charm and any packages required on the system, but keep any services from actually starting.

    To start the services, the operator must run the `resume` action on each unit.

    This is useful with OpenStack for controlled unit-by-unit migration of deployments from the legacy Neutron ML2 OVS topology to the OVN topology. Both topologies make use of Open vSwitch and the 'br-int' integration bridge on the hypervisor, and during a migration the operator may want to shut down and clean up after the ML2 OVS components before `ovn-controller` takes over and reprograms the bridge flow rules.
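    For such a migration, the option would typically be set at deploy time. A minimal sketch (the application name follows this charm; unit numbers are illustrative):

    ```yaml
    # Hypothetical bundle overlay: new ovn-chassis units come up paused.
    applications:
      ovn-chassis:
        options:
          new-units-paused: true
    ```

    After cleaning up the legacy ML2 OVS components on a given hypervisor, the operator would then start the services one unit at a time, e.g. `juju run-action ovn-chassis/0 resume --wait`.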

  • openstack-metadata-workers | int

    Default: 2

    When the charm is related to OpenStack through the `nova-compute` relation endpoint, the Neutron OVN Metadata service will be activated on the host.

    Use this configuration option to control the number of workers the Neutron OVN Metadata service should run.

    Each worker will establish a connection to the OVN Southbound database. The events a worker responds to are relatively rare, for example the first time a hypervisor hosts an instance in a given subnet, so the volume should be low. Setting this number too high may put an unnecessary load on the OVN Southbound database server.

  • ovn-bridge-mappings | string

    A space-delimited list of key-value pairs that map a physical network name to a local OVS bridge that provides connectivity to that network. The physical network name can be referenced when the administrator programs the OVN logical flows, either by talking directly to the Northbound database or by interfacing with a Cloud Management System (CMS).

    Each charm unit will evaluate each key-value pair and determine if the configuration is relevant for the host it is running on, based on matches found in the `bridge-interface-mappings` configuration option. If a match is found, the bridge will be created if it does not already exist, the matched interface will be added to it, and the mapping will be added to the local OVSDB under the `external_ids:ovn-bridge-mappings` key in the Open_vSwitch table.

    An example value mapping two physical network names to two OVS bridges:

        physnet1:br-internet physnet2:br-provider

    NOTE: Values in this configuration option will only have effect for units that have an interface referenced in the `bridge-interface-mappings` configuration option.
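    A sketch of the per-unit matching behaviour described above, with assumed names and MAC addresses: every unit sees the same application-wide configuration, but only the units whose NIC MAC addresses appear in `bridge-interface-mappings` will program the mapping into their local OVSDB (`external_ids:ovn-bridge-mappings` in the Open_vSwitch table).

    ```yaml
    # Hypothetical bundle overlay; physnet, bridge, and MAC values are
    # illustrative assumptions.
    applications:
      ovn-chassis:
        options:
          ovn-bridge-mappings: physnet1:br-provider
          # MAC addresses of the border-gateway units' provider NICs only,
          # so gateways are confined to those chassis.
          bridge-interface-mappings: >-
            br-provider:00:53:00:00:00:21 br-provider:00:53:00:00:00:22
    ```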

  • prefer-chassis-as-gw | boolean

    Prefer units of this application in CMS (Cloud Management System) scheduling of HA chassis groups (aka gateways) over units of other OVN chassis applications present in the deployment.

    By default, the CMS will schedule HA chassis groups across all chassis with bridge and bridge-interface mappings configured.

    This configuration option allows you to influence where gateways are scheduled when all units have equal bridge and bridge-interface mapping configuration.

    NOTE: If none of the OVN chassis applications in the deployment have this option enabled, the CMS will fall back to scheduling gateways on chassis with bridge and bridge-interface mappings configured.

    NOTE: It is also possible to enable this option on several OVN chassis applications at the same time, e.g. on 2 out of 3.

  • sriov-device-mappings | string

    Space-delimited list of SR-IOV device mappings with format:

        <provider>:<interface>

    Multiple mappings can be provided, delimited by spaces.

  • sriov-numvfs | string

    Default: auto

    Number of VFs to configure each PF with; by default, each SR-IOV PF will be configured with the maximum number of VFs it can support. If `sriov-device-mappings` is set, only the devices in the mapping are configured. Either use a single integer to apply the same VF configuration to all detected SR-IOV devices, or use a per-device configuration in the following format:

        <device>:<numvfs>

    Multiple devices can be configured by providing multiple values delimited by spaces.

    NOTE: Changing this value will have no effect on runtime configuration. A manual restart of the `sriov-netplan-shim` service or a reboot of the system is required to apply the configuration.
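    Putting the SR-IOV options together, a sketch with assumed provider network and interface names:

    ```yaml
    # Hypothetical bundle overlay; physnet and interface names are
    # illustrative assumptions.
    applications:
      ovn-chassis:
        options:
          enable-sriov: true
          # <provider>:<interface> pairs.
          sriov-device-mappings: physnet2:enp3s0f0
          # Per-device <device>:<numvfs>; a bare integer (or the default
          # 'auto') would apply to all detected SR-IOV devices instead.
          sriov-numvfs: "enp3s0f0:16"
    ```

    As noted above, changing `sriov-numvfs` on a running unit takes effect only after restarting the `sriov-netplan-shim` service or rebooting the system.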