Charmed MySQL K8s
Channel | Revision | Published
---|---|---
8.0/stable | 180 | 02 Sep 2024
8.0/stable | 181 | 02 Sep 2024
8.0/candidate | 180 | 26 Aug 2024
8.0/candidate | 181 | 26 Aug 2024
8.0/beta | 207 | 15 Nov 2024
8.0/beta | 206 | 15 Nov 2024
8.0/edge | 207 | 09 Oct 2024
8.0/edge | 206 | 09 Oct 2024
juju deploy mysql-k8s --channel 8.0/stable
Deploy Kubernetes operators easily with Juju, the Universal Operator Lifecycle Manager. Need a Kubernetes cluster? Install MicroK8s to create a full CNCF-certified Kubernetes system in under 60 seconds.
Note: All commands are written for Juju v3.0 or later. If you are using an earlier version, check the Juju 3.0 release notes.
How to deploy on EKS
Amazon Elastic Kubernetes Service (EKS) is a popular, fully automated Kubernetes service. To access the EKS Web interface, go to console.aws.amazon.com/eks/home.
Summary
- Install EKS and Juju tooling
- Create a new EKS cluster
- Bootstrap Juju on EKS
- Deploy charms
- Display deployment information
- Clean up
Install EKS and Juju tooling
Install Juju and the kubectl CLI tools via snap:

```shell
sudo snap install juju
sudo snap install kubectl --classic
```
Follow the installation guides for eksctl and the AWS CLI.
To check that everything installed correctly, run the commands below (sample outputs shown):

```shell
> juju version
3.1.7-ubuntu-amd64

> kubectl version --client
Client Version: v1.28.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3

> eksctl info
eksctl version: 0.159.0
kubectl version: v1.28.2

> aws --version
aws-cli/2.13.25 Python/3.11.5 Linux/6.2.0-33-generic exe/x86_64.ubuntu.23 prompt/off
```
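Before moving on, a quick sanity check can save a failed run halfway through. The sketch below (the `check_tools` helper is a hypothetical name, not part of any official tooling) verifies that each required CLI is on the PATH:

```shell
# Report any required CLI tool that is not on PATH before starting.
check_tools() {
  missing=0
  for cmd in "$@"; do
    if ! command -v "$cmd" >/dev/null 2>&1; then
      echo "missing: $cmd" >&2
      missing=1
    fi
  done
  return "$missing"
}

check_tools juju kubectl eksctl aws || echo "Install the missing tools above before continuing."
```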
Authenticate
Create an IAM account (or use legacy access keys) and log in to AWS:

```shell
> aws configure
AWS Access Key ID [None]: SECRET_ACCESS_KEY_ID
AWS Secret Access Key [None]: SECRET_ACCESS_KEY_VALUE
Default region name [None]: eu-west-3
Default output format [None]:

> aws sts get-caller-identity
{
    "UserId": "1234567890",
    "Account": "1234567890",
    "Arn": "arn:aws:iam::1234567890:root"
}
```
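For reference, `aws configure` persists these answers in two plain INI files under your home directory. A sketch of the result with the same placeholder values (this is also the credentials file removed in the cleanup step at the end of this guide):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = SECRET_ACCESS_KEY_ID
aws_secret_access_key = SECRET_ACCESS_KEY_VALUE

# ~/.aws/config
[default]
region = eu-west-3
```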
Create a new EKS cluster
Export the deployment name for further use:

```shell
export JUJU_NAME=eks-$USER-$RANDOM
```
The following examples in this guide use the region eu-west-3 and Kubernetes v1.27 - feel free to change these for your own deployment.
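Since cluster creation takes tens of minutes, it can be worth validating the generated name locally first. A sketch, assuming EKS's documented naming rules (start with an alphanumeric character, then alphanumerics, hyphens, or underscores, at most 100 characters):

```shell
# Build the name the same way as above ($USER may be unset in minimal
# shells, so fall back to `id -un`) and check it against the assumed
# EKS naming pattern before kicking off a long cluster creation.
JUJU_NAME="eks-${USER:-$(id -un)}-$RANDOM"
if printf '%s' "$JUJU_NAME" | grep -Eq '^[A-Za-z0-9][A-Za-z0-9_-]{0,99}$'; then
  echo "cluster name ok: $JUJU_NAME"
else
  echo "invalid cluster name: $JUJU_NAME" >&2
fi
```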
Sample `cluster.yaml`:
```shell
cat <<-EOF > cluster.yaml
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: ${JUJU_NAME}
  region: eu-west-3
  version: "1.27"

iam:
  withOIDC: true

addons:
- name: aws-ebs-csi-driver
  wellKnownPolicies:
    ebsCSIController: true

nodeGroups:
  - name: ng-1
    minSize: 3
    maxSize: 5
    iam:
      attachPolicyARNs:
      - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
      - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
      - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
      - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
      - arn:aws:iam::aws:policy/AmazonS3FullAccess
    instancesDistribution:
      maxPrice: 0.15
      instanceTypes: ["m5.xlarge", "m5.2xlarge"] # At least two instance types should be specified
      onDemandBaseCapacity: 0
      onDemandPercentageAboveBaseCapacity: 50
      spotInstancePools: 2
EOF
```
Create the EKS cluster with the following command:

```shell
eksctl create cluster -f cluster.yaml
```

Sample output:

```shell
...
2023-10-12 11:13:58 [ℹ] using region eu-west-3
2023-10-12 11:13:59 [ℹ] using Kubernetes version 1.27
...
2023-10-12 11:40:00 [✔] EKS cluster "eks-taurus-27506" in "eu-west-3" region is ready
```
Bootstrap Juju on EKS
Note: there is a known bug affecting Juju v3.1 users.

Add the Juju K8s cloud:

```shell
juju add-k8s $JUJU_NAME
```

Bootstrap a Juju controller:

```shell
juju bootstrap $JUJU_NAME
```

Create a new Juju model (K8s namespace):

```shell
juju add-model welcome
```

[Optional] Increase the logging level to DEBUG if you are troubleshooting charms:

```shell
juju model-config logging-config='<root>=INFO;unit=DEBUG'
```
Deploy charms
The following commands deploy and integrate the MySQL K8s Bundle and MySQL Test App:

```shell
juju deploy mysql-k8s-bundle --channel 8.0/edge --trust
juju deploy mysql-test-app
juju integrate mysql-test-app mysql-k8s:database
```

To track the status of the deployment, run:

```shell
juju status --watch 1s
```
Display deployment information
Display information about the current deployments with the following commands:

```shell
> juju controllers
Controller         Model    User   Access     Cloud/Region      Models  Nodes  HA  Version
eks-taurus-27506*  welcome  admin  superuser  eks-taurus-27506       2      1   -   2.9.45

> kubectl cluster-info
Kubernetes control plane is running at https://AAAAAAAAAAAAAAAAAAAAAAA.gr7.eu-west-3.eks.amazonaws.com
CoreDNS is running at https://AAAAAAAAAAAAAAAAAAAAAAA.gr7.eu-west-3.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

> eksctl get cluster -A
NAME              REGION     EKSCTL CREATED
eks-taurus-27506  eu-west-3  True

> kubectl get node
NAME                                          STATUS  ROLES   AGE  VERSION
ip-192-168-14-61.eu-west-3.compute.internal   Ready   <none>  19m  v1.27.5-eks-43840fb
ip-192-168-51-96.eu-west-3.compute.internal   Ready   <none>  19m  v1.27.5-eks-43840fb
ip-192-168-78-167.eu-west-3.compute.internal  Ready   <none>  19m  v1.27.5-eks-43840fb
```
Clean up
Always clean up EKS resources that are no longer necessary - they could be costly!

To clean up the EKS cluster, its resources, and the Juju cloud, run the following commands:

```shell
juju destroy-controller $JUJU_NAME --yes --destroy-all-models --destroy-storage --force
juju remove-cloud $JUJU_NAME
```

List all services and then delete those that have an associated EXTERNAL-IP value (e.g. load balancers):

```shell
kubectl get svc --all-namespaces
kubectl delete svc <service-name>
```
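The full service listing can be long. A small helper can narrow it to the services that actually hold load balancers (a sketch using plain `awk`; `list_lb_services` is a hypothetical name, and column 3 of the `kubectl get svc --all-namespaces` output is the service TYPE):

```shell
# Filter `kubectl get svc --all-namespaces` output down to LoadBalancer
# services, keeping the header row; these are the services that typically
# carry an EXTERNAL-IP and keep AWS load balancers (and billing) alive.
list_lb_services() {
  awk 'NR == 1 || $3 == "LoadBalancer"'
}

# Usage against the live cluster:
#   kubectl get svc --all-namespaces | list_lb_services
```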
Next, delete the EKS cluster (source: Deleting an Amazon EKS cluster):

```shell
eksctl get cluster -A
eksctl delete cluster <cluster_name> --region eu-west-3 --force --disable-nodegroup-eviction
```

Finally, remove the AWS CLI user credentials, so they cannot be forgotten and leaked:

```shell
rm -f ~/.aws/credentials
```