Charmed PostgreSQL VM
Deploy for external TLS VIP access
This document combines the external DB access definition with the dedicated PostgreSQL TLS and PgBouncer TLS tutorials into a single page describing external DB access using TLS/SSL.
Requirements
Canonical does not prescribe how you should set up your specific environment, but provides recommendations and lists the necessary components. Consider contacting us to discuss your use case.
This document assumes you have:
- A fully deployed and running Juju machine environment
- A spare VIP (virtual IP address) for HACluster
- A DNS record pointing to the VIP above (we will use my-tls-example-db.local as an example below)
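Before deploying, you can sanity-check that the DNS record resolves to the spare VIP. A minimal sketch (the name my-tls-example-db.local and the VIP 10.20.30.40 are placeholders; replace them with your own values):

getent hosts my-tls-example-db.local        # should print your VIP
# For a quick local test you can map the name manually instead:
echo "10.20.30.40 my-tls-example-db.local" | sudo tee -a /etc/hosts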
Design
NOTE: this setup is applicable to VM/machine charms only. Do NOT use it for K8s deployments!
We are deploying here:
- PostgreSQL VM charm (3 units, as a single cluster)
- Self-signed-certificates charm as the TLS provider. Use a proper TLS provider for production usage.
- Data-integrator charm as the principal charm for the subordinate charms below (3 units for HA):
- PgBouncer VM charm as a load balancer and connection pooler (3 units for HA; use the modern charm).
- HACluster VM charm for VIP handling (3 units minimum for HA)
Deployment
Let’s deploy the setup designed above:
juju add-model my-external-tls-db
juju deploy postgresql -n 3
juju deploy self-signed-certificates
juju relate postgresql self-signed-certificates
juju deploy pgbouncer --channel 1/stable --config vip=10.20.30.40 # Use your VIP here !!!
juju relate pgbouncer postgresql
juju relate pgbouncer self-signed-certificates
juju deploy data-integrator -n 3 --config database-name=mytestdb # Choose your DB name!!!
juju relate data-integrator pgbouncer
juju deploy hacluster
juju relate hacluster pgbouncer
juju relate hacluster:juju-info data-integrator
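The model takes a few minutes to settle. You can watch progress live, or block until PostgreSQL reports active (a minimal sketch; juju wait-for is available in Juju 3.x):

juju status --watch 1s
juju wait-for application postgresql --query='status=="active"'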
The deployment results (run juju status):
Model Controller Cloud/Region Version SLA Timestamp
my-external-tls-db lxd localhost/localhost 3.5.5 unsupported 12:53:35+01:00
App Version Status Scale Charm Channel Rev Exposed Message
data-integrator active 3 data-integrator latest/stable 78 no
hacluster 2.1.2 active 3 hacluster 2.4/stable 131 no Unit is ready and clustered
pgbouncer 1.21.0 active 3 pgbouncer 1/stable 396 no VIP: 10.78.217.100
postgresql 14.12 active 3 postgresql 14/stable 468 no
self-signed-certificates active 1 self-signed-certificates latest/stable 155 no
Unit Workload Agent Machine Public address Ports Message
data-integrator/0 active idle 4 10.78.217.30
hacluster/3 active idle 10.78.217.30 Unit is ready and clustered
pgbouncer/0* active idle 10.78.217.30 6432/tcp VIP: 10.78.217.100
data-integrator/1 active idle 5 10.78.217.132
hacluster/2 active idle 10.78.217.132 Unit is ready and clustered
pgbouncer/1 active idle 10.78.217.132 6432/tcp
data-integrator/2* active idle 6 10.78.217.93
hacluster/1* active idle 10.78.217.93 Unit is ready and clustered
pgbouncer/2 active idle 10.78.217.93 6432/tcp
postgresql/0 active idle 0 10.78.217.254 5432/tcp
postgresql/1 active idle 1 10.78.217.245 5432/tcp
postgresql/2* active idle 2 10.78.217.192 5432/tcp Primary
self-signed-certificates/0* active idle 3 10.78.217.79
Machine State Address Inst id Base AZ Message
0 started 10.78.217.254 juju-318984-0 ubuntu@22.04 Running
1 started 10.78.217.245 juju-318984-1 ubuntu@22.04 Running
2 started 10.78.217.192 juju-318984-2 ubuntu@22.04 Running
3 started 10.78.217.79 juju-318984-3 ubuntu@22.04 Running
4 started 10.78.217.30 juju-318984-4 ubuntu@22.04 Running
5 started 10.78.217.132 juju-318984-5 ubuntu@22.04 Running
6 started 10.78.217.93 juju-318984-6 ubuntu@22.04 Running
Check connectivity
> juju run data-integrator/leader get-credentials
...
postgresql:
data: '{"database": "mytestdb", "external-node-connectivity": "true", "requested-secrets":
"[\"username\", \"password\", \"tls\", \"tls-ca\", \"uris\"]"}'
database: mytestdb
endpoints: 10.78.217.100:6432
password: V7kHqHyapIphkUS0cHoOtP3j
subordinated: "true"
uris: postgresql://relation_id_9:V7kHqHyapIphkUS0cHoOtP3j@10.78.217.100:6432/mytestdb
username: relation_id_9
version: "14.12"
Now use the received credentials (uris) to connect to PostgreSQL (via TLS/SSL):
> psql postgresql://relation_id_9:V7kHqHyapIphkUS0cHoOtP3j@10.78.217.100:6432/mytestdb
psql (14.15 (Ubuntu 14.15-0ubuntu0.22.04.1), server 14.12 (Ubuntu 14.12-0ubuntu0.22.04.1))
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
Type "help" for help.
mytestdb=> select now();
now
-------------------------------
2025-01-14 11:51:04.646245+00
(1 row)
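The connection above is encrypted but does not verify the server certificate. To also pin the CA, save the tls-ca value returned by get-credentials above into a file and ask libpq to verify it. A minimal sketch (ca.pem is just an example filename):

# Save the CA from the get-credentials output to ca.pem first, then:
psql "postgresql://relation_id_9:V7kHqHyapIphkUS0cHoOtP3j@10.78.217.100:6432/mytestdb?sslmode=verify-ca&sslrootcert=ca.pem"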
Ensure your DNS record points to the virtual IP and that the VIP is routable/reachable from outside your network, then connect using the DNS name:
psql postgresql://relation_id_9:V7kHqHyapIphkUS0cHoOtP3j@my-tls-example-db.local:6432/mytestdb
Monitoring
Consider adding COS monitoring to your setup: both COS for PostgreSQL and COS for PgBouncer.
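For machine charms the usual pattern is to relate the grafana-agent subordinate to the database application and to an existing COS deployment. A rough sketch, assuming COS Lite is already running elsewhere and its offers have been consumed into this model (the offer names are placeholders):

juju deploy grafana-agent                         # subordinate telemetry collector
juju relate grafana-agent postgresql:cos-agent    # ship PostgreSQL metrics, logs and dashboards
# Relate grafana-agent to the consumed COS offers (placeholders):
juju relate grafana-agent <prometheus-offer>
juju relate grafana-agent <loki-offer>
juju relate grafana-agent <grafana-offer>

See the dedicated COS guides mentioned above for the exact offer names and the PgBouncer integration.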
High-availability
Important: in production, deploy units into separate availability zones (AZs).
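A hedged sketch of spreading units across zones at deploy time, assuming your cloud exposes availability zones to Juju (the zone names are placeholders):

juju deploy postgresql -n 3 --constraints zones=<az1>,<az2>,<az3>
# or place an individual unit explicitly:
juju add-unit postgresql --to zone=<az2>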
At this point, Juju is responsible for the survival of the clusters/applications:
- The Juju charm will restart the workload if PostgreSQL is not healthy.
- The Juju agent will restart the unit/VM/container if it is no longer reachable/healthy (in the same AZ).
- The Juju controller will make sure the Juju agent is up and running and the charm is healthy.
- HACluster will make sure the VIP is always reachable and routes to a single PgBouncer unit only.
- PgBouncer will balance incoming connections and make sure write traffic goes to the primary PostgreSQL unit.
- The TLS operator is responsible for providing all components with signed, ready-to-use TLS artifacts.
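Most of this can be observed from the Juju client. A minimal sketch for checking which PostgreSQL unit is currently the primary and which PgBouncer machine holds the VIP (unit names and the VIP come from the example status output above):

juju run postgresql/leader get-primary                   # charm action, same syntax as get-credentials above
juju exec --application pgbouncer -- ip -br addr show    # look for the VIP (10.78.217.100) in the output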
Troubleshooting
Consider contacting us if you have any issues with this setup.
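Before reaching out, the standard Juju debugging commands usually narrow the problem down:

juju status --relations                          # confirm all relations are established
juju debug-log --replay --include postgresql/0   # replay the charm log for a specific unit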