By default, Istio is deployed with a single replica of the Gateway workload pod.

This guide shows how to deploy multiple replicas of the pod and spread them across different nodes by configuring High Availability (HA) for Istio Gateway. The configuration uses Kubernetes inter-pod anti-affinity.

Configuring HA makes your Istio deployment more resilient by removing a single point of failure: if one Istio Gateway workload pod goes down, the remaining replicas keep serving traffic, so the rest of Kubeflow can still be accessed.

Requirements

The Istio HA configuration is only available in version 1.22/* or above. To upgrade to a higher version, see the Upgrading Istio instructions.
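
One way to check which version you are running is the Channel column of the juju status output, filtered to the istio-ingressgateway application:

juju status istio-ingressgateway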

Configure High Availability

You can enable Istio HA by setting the replicas configuration value of the istio-ingressgateway charm as follows:


juju config istio-ingressgateway replicas=<desired number of replicas>
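
For example, to run three replicas (an illustrative value, to be adjusted to your cluster size):

juju config istio-ingressgateway replicas=3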

The number of replicas must be less than or equal to the number of available nodes in your cluster: the anti-affinity rule places each replica on a distinct node, so any additional pods will remain in Pending status.
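
You can list the nodes available in your cluster with:

kubectl get nodes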

Verify High Availability

Once the istio-ingressgateway charm is configured, you can verify it is running with HA by listing the pods that carry the app=istio-ingressgateway label:


kubectl get po -n kubeflow -l app=istio-ingressgateway -o wide

For example, if your cluster consists of two or more nodes and you set the replicas config to 2, you should see two running pods:


NAME                                             READY   STATUS    RESTARTS   AGE   IP             NODE
istio-ingressgateway-workload-86d4dd6dff-84g6l   1/1     Running   0          6m    10.1.58.136    node1
istio-ingressgateway-workload-86d4dd6dff-j9fhv   1/1     Running   0          4m    10.1.179.133   node2

The pods are always scheduled on different nodes because inter-pod anti-affinity is set in the Istio Gateway deployment.
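
If you want to inspect the anti-affinity rule itself, you can print the affinity section of the workload deployment. The deployment name below is inferred from the pod names in the example above, so verify it matches your cluster:

kubectl get deployment istio-ingressgateway-workload -n kubeflow -o jsonpath='{.spec.template.spec.affinity.podAntiAffinity}'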