Charmed PostgreSQL K8s
- By Canonical Data Platform
- Databases
Channel | Revision | Published
---|---|---
latest/stable | 20 | 20 Sep 2022
14/stable | 193 | 13 Mar 2024
14/candidate | 193 | 31 Jan 2024
14/beta | 211 | 13 Mar 2024
14/edge | 237 | 16 Apr 2024
juju deploy postgresql-k8s --channel 14/stable
Deploy Kubernetes operators easily with Juju, the Universal Operator Lifecycle Manager. Need a Kubernetes cluster? Install MicroK8s to create a full CNCF-certified Kubernetes system in under 60 seconds.
Platform:
Deploy Charmed PostgreSQL K8s on EKS
Amazon Elastic Kubernetes Service (EKS) is a popular, fully automated Kubernetes service. To access the EKS Web interface, go to console.aws.amazon.com/eks/home.
Note: All commands are written for juju >= v3.0.
If you are using an earlier version, be aware that:
- juju run replaces juju run-action --wait in juju v2.9
- juju integrate replaces juju relate and juju add-relation in juju v2.9
For more information, check the Juju 3.0 Release Notes.
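The renamings above can be summarised in a tiny (purely illustrative) helper that maps a juju v2.9 command to its v3.x equivalent; the function name is an assumption, not part of the juju CLI:

```shell
# Illustrative only: map a juju v2.9 subcommand to its juju 3.x equivalent.
juju3_equivalent() {
  case "$1" in
    "run-action --wait")     echo "run" ;;
    "relate"|"add-relation") echo "integrate" ;;
    *)                       echo "$1" ;;  # unchanged in 3.x
  esac
}

juju3_equivalent "relate"             # prints: integrate
juju3_equivalent "run-action --wait"  # prints: run
```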
Install EKS and Juju tooling
Install:
- Juju - an open source orchestration engine from Canonical
- kubectl - the Kubernetes command line tool
- eksctl - the official CLI for Amazon EKS
- AWS CLI - the Amazon Web Services CLI
To check that they are all correctly installed, run the commands below; sample outputs are shown:
~$ juju version
3.1.7-ubuntu-amd64
~$ kubectl version --client
Client Version: v1.28.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
~$ eksctl info
eksctl version: 0.159.0
kubectl version: v1.28.2
~$ aws --version
aws-cli/2.13.25 Python/3.11.5 Linux/6.2.0-33-generic exe/x86_64.ubuntu.23 prompt/off
Create an IAM account (or use legacy access keys) and log in to AWS:
~$ aws configure
AWS Access Key ID [None]: SECRET_ACCESS_KEY_ID
AWS Secret Access Key [None]: SECRET_ACCESS_KEY_VALUE
Default region name [None]: eu-west-3
Default output format [None]:
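For reference, aws configure stores these values locally under ~/.aws/; the credentials file looks roughly like this (the values below are the placeholders from the transcript above, not real keys):

```ini
# ~/.aws/credentials (placeholder values)
[default]
aws_access_key_id = SECRET_ACCESS_KEY_ID
aws_secret_access_key = SECRET_ACCESS_KEY_VALUE
```

The default region and output format go into the companion ~/.aws/config file.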
~$ aws sts get-caller-identity
{
"UserId": "1234567890",
"Account": "1234567890",
"Arn": "arn:aws:iam::1234567890:root"
}
Bootstrap Kubernetes cluster (EKS)
Export the deployment name for further use:
export JUJU_NAME=eks-$USER-$RANDOM
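The export above combines your username with a random suffix so that repeated runs do not collide. A purely local sanity check (no AWS calls; the demo fallback is illustrative only, guarding against an unset USER):

```shell
# Same construction as above; "demo" is an illustrative fallback for
# environments where USER is not set.
export JUJU_NAME=eks-${USER:-demo}-$RANDOM
echo "$JUJU_NAME"

# The name feeds into the EKS cluster name, so keep it to letters,
# digits and hyphens.
case "$JUJU_NAME" in
  eks-*) echo "format OK" ;;
esac
```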
Feel free to fine-tune the location (eu-west-3) and/or the K8s version (1.27):
~$ cat <<-EOF > cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: ${JUJU_NAME}
  region: eu-west-3
  version: "1.27"
iam:
  withOIDC: true
addons:
- name: aws-ebs-csi-driver
  wellKnownPolicies:
    ebsCSIController: true
nodeGroups:
- name: ng-1
  minSize: 3
  maxSize: 5
  iam:
    attachPolicyARNs:
    - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
    - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
    - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
    - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
    - arn:aws:iam::aws:policy/AmazonS3FullAccess
  instancesDistribution:
    maxPrice: 0.15
    instanceTypes: ["m5.xlarge", "m5.2xlarge"] # At least two instance types should be specified
    onDemandBaseCapacity: 0
    onDemandPercentageAboveBaseCapacity: 50
    spotInstancePools: 2
EOF
Bootstrap EKS cluster:
~$ eksctl create cluster -f cluster.yaml
...
2023-10-12 11:13:58 [ℹ] using region eu-west-3
2023-10-12 11:13:59 [ℹ] using Kubernetes version 1.27
...
2023-10-12 11:40:00 [✔] EKS cluster "eks-taurus-27506" in "eu-west-3" region is ready
Bootstrap Juju on EKS
Note: there is a known bug affecting juju v.3.1 users.
Add Juju k8s clouds:
juju add-k8s $JUJU_NAME
Bootstrap Juju controller:
juju bootstrap $JUJU_NAME
Create a new Juju model (K8s namespace):
juju add-model welcome
[Optional] Increase the logging level to DEBUG if you are troubleshooting charms:
juju model-config logging-config='<root>=INFO;unit=DEBUG'
Deploy Charms
juju deploy postgresql-k8s-bundle --channel 14/edge --trust
juju deploy postgresql-test-app
juju integrate postgresql-test-app:first-database postgresql-k8s
juju status --watch 1s
List
Display information about the current deployments with the following commands:
~$ kubectl cluster-info
Kubernetes control plane is running at https://AAAAAAAAAAAAAAAAAAAAAAA.gr7.eu-west-3.eks.amazonaws.com
CoreDNS is running at https://AAAAAAAAAAAAAAAAAAAAAAA.gr7.eu-west-3.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
~$ eksctl get cluster -A
NAME REGION EKSCTL CREATED
eks-taurus-27506 eu-west-3 True
~$ kubectl get node
NAME STATUS ROLES AGE VERSION
ip-192-168-14-61.eu-west-3.compute.internal Ready <none> 19m v1.27.5-eks-43840fb
ip-192-168-51-96.eu-west-3.compute.internal Ready <none> 19m v1.27.5-eks-43840fb
ip-192-168-78-167.eu-west-3.compute.internal Ready <none> 19m v1.27.5-eks-43840fb
Cleanup
Always clean up EKS resources that are no longer necessary - they can be costly!
To clean up the EKS cluster, its resources, and the Juju cloud, run the following commands:
juju destroy-controller $JUJU_NAME --yes --destroy-all-models --destroy-storage --force
juju remove-cloud $JUJU_NAME
List all services and then delete those that have an associated EXTERNAL-IP value (load balancers, …):
kubectl get svc --all-namespaces
kubectl delete svc <service-name>
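To spot the services that still hold an external load balancer, filter on the EXTERNAL-IP column. A small sketch, demonstrated here against sample kubectl output (the namespace and service names below are illustrative, not from a real cluster):

```shell
# Sample of what `kubectl get svc --all-namespaces` might print;
# names are illustrative only.
sample_output='NAMESPACE   NAME         TYPE           CLUSTER-IP    EXTERNAL-IP                          PORT(S)
default     kubernetes   ClusterIP      10.100.0.1    <none>                               443/TCP
welcome     demo-lb      LoadBalancer   10.100.5.20   abc123.eu-west-3.elb.amazonaws.com   80/TCP'

# Keep only rows whose EXTERNAL-IP column (5th) is set.
echo "$sample_output" | awk 'NR > 1 && $5 != "<none>" { print $1, $2 }'
# prints: welcome demo-lb
```

Against a live cluster, pipe the real kubectl output into the same awk filter instead of the sample string.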
Next, delete the EKS cluster (source: Deleting an Amazon EKS cluster):
eksctl get cluster -A
eksctl delete cluster <cluster_name> --region eu-west-3 --force --disable-nodegroup-eviction
Finally, remove AWS CLI user credentials (to avoid forgetting and leaking):
rm -f ~/.aws/credentials