GCP-Integrator

Channel            Revision   Published     Runs on
latest/stable      73         04 Sep 2024   Ubuntu 22.04, 20.04, 18.04, 16.04
latest/stable      35         30 Sep 2022   Ubuntu 22.04, 20.04, 18.04, 16.04
latest/stable      11         05 May 2022   Ubuntu 22.04, 20.04, 18.04, 16.04
latest/candidate   74         13 Dec 2024   Ubuntu 22.04, 20.04, 18.04, 16.04
latest/candidate   35         28 Sep 2022   Ubuntu 22.04, 20.04, 18.04, 16.04
latest/candidate   4          11 Mar 2022   Ubuntu 22.04, 20.04, 18.04, 16.04
latest/beta        73         14 Aug 2024   Ubuntu 22.04, 20.04, 18.04, 16.04
latest/beta        23         01 Sep 2022   Ubuntu 22.04, 20.04, 18.04, 16.04
latest/beta        9          21 Apr 2022   Ubuntu 22.04, 20.04, 18.04, 16.04
latest/edge        72         03 Aug 2024   Ubuntu 22.04, 20.04, 18.04, 16.04
latest/edge        37         23 Oct 2022   Ubuntu 22.04, 20.04, 18.04, 16.04
latest/edge        16         28 Jun 2022   Ubuntu 22.04, 20.04, 18.04, 16.04
1.31/stable        73         04 Sep 2024   Ubuntu 22.04, 20.04
1.31/candidate     74         13 Dec 2024   Ubuntu 22.04, 20.04
1.31/beta          73         13 Aug 2024   Ubuntu 22.04, 20.04
1.31/edge          72         03 Aug 2024   Ubuntu 22.04, 20.04
1.30/stable        69         11 Jul 2024   Ubuntu 22.04, 20.04
1.30/beta          69         19 Apr 2024   Ubuntu 22.04, 20.04
1.30/edge          71         26 Jul 2024   Ubuntu 22.04, 20.04
1.29/stable        68         21 Apr 2024   Ubuntu 22.04, 20.04
1.29/candidate     68         15 Apr 2024   Ubuntu 22.04, 20.04
1.29/beta          62         14 Dec 2023   Ubuntu 22.04, 20.04
1.29/edge          63         06 Mar 2024   Ubuntu 22.04, 20.04, 18.04
1.29/edge          37         25 Aug 2023   Ubuntu 22.04, 20.04, 18.04
1.28/stable        61         26 Sep 2023   Ubuntu 22.04, 20.04
1.28/candidate     61         22 Sep 2023   Ubuntu 22.04, 20.04
1.28/beta          55         07 Aug 2023   Ubuntu 22.04, 20.04
1.28/edge          58         25 Aug 2023   Ubuntu 22.04, 20.04, 18.04
1.28/edge          37         25 Aug 2023   Ubuntu 22.04, 20.04, 18.04
1.27/stable        50         12 Jun 2023   Ubuntu 22.04, 20.04
1.27/candidate     50         12 Jun 2023   Ubuntu 22.04, 20.04
1.27/beta          46         09 Apr 2023   Ubuntu 22.04, 20.04
1.27/edge          44         07 Apr 2023   Ubuntu 22.04, 20.04
1.26/stable        43         27 Feb 2023   Ubuntu 22.04, 20.04
1.26/candidate     43         25 Feb 2023   Ubuntu 22.04, 20.04
1.26/beta          38         09 Apr 2023   Ubuntu 22.04, 20.04
1.26/edge          38         23 Nov 2022   Ubuntu 22.04, 20.04, 18.04
1.26/edge          37         23 Oct 2022   Ubuntu 22.04, 20.04, 18.04
1.25/stable        35         30 Sep 2022   Ubuntu 22.04, 20.04, 18.04
1.25/candidate     35         28 Sep 2022   Ubuntu 22.04, 20.04, 18.04
1.25/beta          39         01 Dec 2022   Ubuntu 22.04, 20.04, 18.04
1.25/beta          23         01 Sep 2022   Ubuntu 22.04, 20.04, 18.04
1.25/edge          26         08 Sep 2022   Ubuntu 22.04, 20.04, 18.04
1.24/stable        18         04 Aug 2022   Ubuntu 22.04, 20.04, 18.04, 16.04
1.24/stable        11         05 May 2022   Ubuntu 22.04, 20.04, 18.04, 16.04
1.24/candidate     18         01 Aug 2022   Ubuntu 22.04, 20.04, 18.04
1.24/beta          11         03 May 2022   Ubuntu 20.04, 18.04, 16.04
1.24/edge          17         22 Jul 2022   Ubuntu 22.04, 20.04, 18.04, 16.04
1.24/edge          16         28 Jun 2022   Ubuntu 22.04, 20.04, 18.04, 16.04
1.23/beta          5          22 Mar 2022   Ubuntu 20.04, 18.04, 16.04
1.23/edge          3          24 Feb 2022   Ubuntu 20.04, 18.04, 16.04
juju deploy gcp-integrator

Platform: Ubuntu 22.04, 20.04, 18.04, 16.04

Charmed Kubernetes will run seamlessly on Google Cloud Platform (GCP). With the addition of the gcp-integrator charm, your cluster will also be able to use GCP native features directly.

GCP Credentials

If you have set up a service account with IAM roles as your credential for Juju, there may be some additional authorisations you will need to make to access all features of GCP with Charmed Kubernetes.

If you have a GCP project set up specifically for Charmed Kubernetes, the quickest route is to simply add the service account as an Owner of that project in the [GCP console][owner].

If you prefer a more fine-grained approach to role administration, the service account should have at least:

  • roles/compute.loadBalancerAdmin
  • roles/compute.instanceAdmin.v1
  • roles/compute.securityAdmin
  • roles/iam.serviceAccountUser

A full description of the various pre-defined roles is available in the [GCP Documentation][iam-roles].
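
If you are granting these roles from the command line rather than the console, each one can be bound to the service account with gcloud. The example below is an illustrative sketch only: the project ID and service account email are placeholders for your own values, and the command should be repeated for each role in the list above.

# my-project-id and the service-account email below are placeholders
gcloud projects add-iam-policy-binding my-project-id \
  --member="serviceAccount:juju-gcp@my-project-id.iam.gserviceaccount.com" \
  --role="roles/compute.instanceAdmin.v1"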

GCP integrator

The gcp-integrator charm simplifies working with Charmed Kubernetes on GCP. Using the credentials provided to Juju, it acts as a proxy between Charmed Kubernetes and the underlying cloud, granting permissions to dynamically create, for example, storage volumes.
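
The charm needs access to the credentials Juju holds for the cloud. This is normally granted by deploying with --trust (as in the overlay and deploy command below), but if the charm was deployed without it, access can be granted afterwards:

juju trust gcp-integrator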

Installing

If you install Charmed Kubernetes [using the Juju bundle][install], you can add the gcp-integrator at the same time by using the following overlay file ([download it here][asset-gcp-overlay]):

description: Charmed Kubernetes overlay to add native GCP support.
applications:
  gcp-integrator:
    annotations:
      gui-x: "600"
      gui-y: "300"
    charm: cs:~containers/gcp-integrator
    num_units: 1
    trust: true
relations:
  - ['gcp-integrator', 'kubernetes-master']
  - ['gcp-integrator', 'kubernetes-worker']

To use this overlay with the Charmed Kubernetes bundle, it is specified during deploy like this:

juju deploy charmed-kubernetes --overlay ~/path/gcp-overlay.yaml --trust

… and remember to fetch the configuration file!

juju scp kubernetes-master/0:config ~/.kube/config
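
With the configuration file in place, a quick sanity check that kubectl can reach the new cluster is to list its nodes, for example:

kubectl get nodes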

For more configuration options and details of the permissions which the integrator uses, please see the [charm readme][gcp-integrator-readme].
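
The charm's configuration options can also be listed directly from Juju, for example:

juju config gcp-integrator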

Using persistent storage

Many pods you may wish to deploy will require storage. Although you can use any type of storage supported by Kubernetes (see the [storage documentation][storage]), you also have the option to use the native GCP storage types.

GCP storage currently comes in two types: SSD (pd-ssd) and standard (pd-standard). To use these, we need to create storage classes in Kubernetes.

For the standard disks:

kubectl create -f - <<EOY
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gcp-standard
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
EOY

Or for SSD:

kubectl create -f - <<EOY
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gcp-ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
EOY

You can confirm the storage classes have been added by running:

kubectl get sc

which should return:

NAME           PROVISIONER            AGE
gcp-ssd        kubernetes.io/gce-pd   9s
gcp-standard   kubernetes.io/gce-pd   45s

To actually create storage using this new class, you can make a Persistent Volume Claim:

kubectl create -f - <<EOY
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: testclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
  storageClassName: gcp-standard
EOY

This should finish with a confirmation. You can check the current PVCs with:

kubectl get pvc

…which should return something similar to:

NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
testclaim   Bound    pvc-e1d42bae-44e6-11e9-8dff-42010a840007   1Gi        RWO            gcp-standard   15s

This PVC can then be used by pods operating in the cluster. As an example, the following deploys a busybox pod:

kubectl create -f - <<EOY
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
    - image: busybox
      command:
        - sleep
        - "3600"
      imagePullPolicy: IfNotPresent
      name: busybox
      volumeMounts:
        - mountPath: "/pv"
          name: testvolume
  restartPolicy: Always
  volumes:
    - name: testvolume
      persistentVolumeClaim:
        claimName: testclaim
EOY
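
Once the pod is running, you can check that the claim has been mounted. For example (an illustrative check, not part of the original walkthrough), list the pod and inspect the filesystem mounted at /pv:

kubectl get pod busybox
kubectl exec busybox -- df -h /pv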

To set this type of storage as the default, you can use the command:

kubectl patch storageclass gcp-standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
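
Running kubectl get sc again should now show gcp-standard with (default) next to its name:

kubectl get sc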

Note: If you create persistent disks and subsequently tear down the cluster, check with the GCP console to make sure all the associated resources have also been released.
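
One way to review this from the command line (assuming the gcloud CLI is configured for the same project) is to list the persistent disks in the project:

gcloud compute disks list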

Using GCP Loadbalancers

With the gcp-integrator charm in place, actions which invoke a load balancer in Kubernetes will automatically generate a GCP [Target Pool][target-pool] and the relevant forwarding rules. To demonstrate this, we will create a simple application and scale it to five pods:

kubectl create deployment hello-world --image=gcr.io/google-samples/node-hello:1.0
kubectl scale deployment hello-world --replicas=5

You can verify that the application and replicas have been created with:

kubectl get deployments hello-world

Which should return output similar to:

NAME          READY   UP-TO-DATE   AVAILABLE   AGE
hello-world   5/5     5            5           2m38s

To create a target pool load balancer, the application should now be exposed as a service:

kubectl expose deployment hello-world --type=LoadBalancer --name=hello --port 8080

To check that the service is running correctly:

kubectl describe service hello

…which should return output similar to:

Name:                     hello
Namespace:                default
Labels:                   run=load-balancer-example
Annotations:              <none>
Selector:                 run=load-balancer-example
Type:                     LoadBalancer
IP:                       10.152.183.63
LoadBalancer Ingress:     34.76.144.215
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31864/TCP
Endpoints:                10.1.54.11:8080,10.1.54.12:8080,10.1.54.13:8080 + 2 more...
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason                Age    From                Message
  ----    ------                ----   ----                -------
  Normal  EnsuringLoadBalancer  9m21s  service-controller  Ensuring load balancer
  Normal  EnsuredLoadBalancer   7m37s  service-controller  Ensured load balancer

You can see that the LoadBalancer Ingress is now associated with a new ingress address in front of the five endpoints of the example deployment. You can test this address:

curl 34.76.144.215:8080
Hello Kubernetes!
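
When you have finished testing, deleting the service prompts the cloud integration to remove the target pool and forwarding rules it created, and deleting the deployment removes the pods:

kubectl delete service hello
kubectl delete deployment hello-world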

Upgrading the integrator charm

The gcp-integrator is not specifically tied to the version of Charmed Kubernetes installed and may generally be upgraded at any time with the following command:

juju upgrade-charm gcp-integrator

Troubleshooting

If you have any specific problems with the gcp-integrator, you can report bugs on [Launchpad][bugs].

Any activity in GCP can be monitored from the [Operations][operations] console. If you are using a service account with IAM roles, it is relatively easy to see the actions that particular account is responsible for.

For logs showing how the charm itself sees the cluster, you can use Juju to replay the log history for that specific unit:

juju debug-log --replay --include gcp-integrator/0
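
It can also be useful to check the current status and workload message of the integrator unit, for example:

juju status gcp-integrator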

