Upgrade Charmed Temporal latest/stable to 1.23/stable
This guide describes how to upgrade an existing Charmed Temporal deployment from the latest/stable channel to the 1.23/stable channel. It is intended for operators who already have a Charmed Temporal deployment running.
The guide has been validated end-to-end on Juju 3.6.21.
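A quick way to confirm the client version before you begin:
juju version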
Important
Read these notes before starting the upgrade.
- The schema migration is forward-only. There is no in-place downgrade. If the upgrade fails, the recovery path is restore-from-PostgreSQL-backup, either back to `latest/stable` or directly on `1.23/stable` once the underlying issue is resolved.
- The Temporal charms change their base from `ubuntu@22.04` to `ubuntu@24.04` between `latest/stable` and `1.23/stable`.
- `temporal-ui-k8s` requires the new `temporal-host-info` relation and will block until it is integrated. `temporal-admin-k8s` accepts the deprecated `server-name` config as a fallback.
- The `tctl` action on `temporal-admin-k8s` was renamed to `cli` in 1.23, and the underlying binary changed from the legacy `tctl` to the modern `temporal` CLI with a different argument grammar. Operator scripts that call `juju run temporal-admin/0 tctl args="..."` must be updated; see the sketch after this list.
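For example, a namespace describe issued through the action changes shape as follows. This is an illustrative sketch assuming a namespace named default, not an exhaustive mapping of flags; check your scripts against the temporal CLI reference.
# latest/stable: legacy tctl grammar
juju run temporal-admin/0 tctl args="--namespace default namespace describe"
# 1.23/stable: modern temporal CLI grammar
juju run temporal-admin/0 cli args="operator namespace describe --namespace default"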
Deprecation notice: `server-name`
The `server-name` config option on `temporal-admin-k8s` is deprecated in 1.23 and will be removed in a future release. Its only purpose in 1.23 is to act as a fallback when the `temporal-host-info` relation is absent. The supported configuration is to integrate the `temporal-host-info` relation and leave `server-name` unset. Do not introduce new dependencies on `server-name`.
Step 1: Back up PostgreSQL
The single source of truth for workflow history and namespace metadata is the PostgreSQL backend. Back it up before the refresh.
Recommendation: stop active work before the backup.
Temporal can recover from a database backup without needing a clean shutdown first, so this step is optional. That said, backing up while the system is quiet lowers the risk of capturing incomplete workflow state. If you can, do the following before running the backup:
- Stop new workflow starts - pause or disable anything that triggers new executions (cron jobs, API clients, schedules).
- Wait for running workflows to finish - or at least reach a point where they are waiting on a timer, activity, or signal.
- Shut down workers gracefully - send a shutdown signal so the SDK can finish cleanly. In the Python SDK, `worker.run()` handles `SIGINT`/`SIGTERM` this way by default.
Follow the official postgresql-k8s backup procedure: Create a backup. The how-to covers configuring an S3 backup target, taking a backup, and verifying it.
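Condensed, the flow looks roughly like the sketch below. Treat it as an outline only: it assumes the PostgreSQL application is named postgres (as in the relation commands later in this guide), an existing S3 bucket, and the s3-integrator charm; the linked how-to remains the authoritative reference.
juju deploy s3-integrator
juju config s3-integrator endpoint=<s3-endpoint> bucket=<bucket> path=<path>
juju run s3-integrator/leader sync-s3-credentials access-key=<access-key> secret-key=<secret-key>
juju integrate s3-integrator postgres
juju run postgres/leader create-backup
# Verify the backup is listed and marked finished before proceeding
juju run postgres/leader list-backups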
If the upgrade fails at any point, restoring this dump and redeploying on latest/stable returns the cluster to its pre-upgrade state. See Recovery for the full procedure.
Step 2: Refresh temporal-admin-k8s
The admin charm owns schema management and must be refreshed first.
juju refresh temporal-admin --channel 1.23/stable --base ubuntu@24.04
After settling, temporal-admin reports active status.
The default `server-name` config (`temporal-k8s`) places the charm on the deprecated fallback path; this is expected at this stage and is corrected in Step 5.
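One way to watch the refresh settle and inspect the fallback, assuming a Juju 3.x client (flags validated on 3.6):
# Watch status until temporal-admin reports active (Ctrl+C to exit)
juju status --watch 5s temporal-admin
# Shows the deprecated fallback value currently in effect; it is cleared in Step 5
juju config temporal-admin server-name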
Step 3: Refresh temporal-ui-k8s
The UI charm in 1.23 strictly requires the new `temporal-host-info` relation. After refresh, it will go to blocked status with the message `temporal-host-info relation not established`. This is the expected state until Step 5.
Service availability. The Temporal UI is unavailable from the moment the refreshed UI pod starts replacing the old one until the `temporal-host-info` relation is integrated in Step 5.
juju refresh temporal-ui --channel 1.23/stable --base ubuntu@24.04
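You can verify the expected blocked state before continuing; a quick check (the message text is the one quoted above):
juju status temporal-ui
# Expect temporal-ui in blocked status with "temporal-host-info relation not established"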
Step 4: Refresh temporal-k8s
juju refresh temporal-k8s --channel 1.23/stable --base ubuntu@24.04
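Once the refresh settles, the App section of juju status should report the new channel. A quick check:
juju status temporal-k8s
# The App line should now show Channel 1.23/stable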
Step 5: Integrate the temporal-host-info relation
Both temporal-ui and temporal-admin need this relation in 1.23. Add both, and clear the deprecated `server-name` fallback so the relation becomes the single source of truth.
Reset the deprecated `server-name` config on temporal-admin:
juju config temporal-admin --reset server-name
Integrate the temporal-host-info relation:
juju integrate temporal-k8s:temporal-host-info temporal-ui:temporal-host-info
juju integrate temporal-k8s:temporal-host-info temporal-admin:temporal-host-info
After this step every application should be active/idle. Confirm the admin `cli` action is on the relation path (no longer the deprecated fallback):
juju run temporal-admin/0 cli args="operator namespace describe --namespace <your-namespace>"
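If the action succeeds, the admin charm is reading the server endpoint from the relation. As an additional sanity check, both new integrations should appear in the relation listing; one way to confirm, assuming a Juju 3.x client:
juju status --relations | grep temporal-host-info
# Expect two entries: temporal-ui and temporal-admin each related to temporal-k8s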
Step 6: (Optional) Insert pgbouncer-k8s between Temporal and PostgreSQL
The 1.23 solution recommends placing pgbouncer-k8s between Temporal services and PostgreSQL for connection pooling. This is a topology change, separate from the channel refresh.
juju deploy pgbouncer-k8s pgbouncer --channel 1/stable --trust
juju integrate pgbouncer:backend-database postgres:database
# Remove the direct postgres relations, then add the pgbouncer-fronted ones
juju remove-relation temporal-k8s:db postgres:database
juju remove-relation temporal-k8s:visibility postgres:database
juju integrate temporal-k8s:db pgbouncer:database
juju integrate temporal-k8s:visibility pgbouncer:database
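Afterwards, verify the new topology: both Temporal database endpoints should now terminate at pgbouncer instead of postgres. One way to eyeball this:
juju status --relations
# Expect temporal-k8s:db and temporal-k8s:visibility integrated with pgbouncer:database,
# and pgbouncer:backend-database integrated with postgres:database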
Recovery (if the upgrade fails)
There is no in-place rollback. Temporal’s schema migration is forward-only. If the upgrade fails at any step, restore from the PostgreSQL backup taken in Step 1, then choose one of two redeployment paths.
1. Stop applying further refreshes.
2. Capture diagnostics:
   juju status > failure-status.txt
   juju show-unit <stuck-unit> > failure-unit.yaml
   juju debug-log --replay --level WARNING > failure-log.txt
3. Destroy the model:
   juju destroy-model <model> --no-prompt --destroy-storage
4. Restore PostgreSQL from your backup (see the postgresql-k8s restore how-to).
5. Choose one of:
   Option A: Re-attempt the upgrade on `1.23/stable` (recommended). If the failure was caused by something outside the schema (e.g. a transient network issue, a misconfiguration), redeploy directly on `1.23/stable` against the restored database. This is the preferred path for production deployments because it avoids leaving operators on the unsupported `latest/stable` track. A sketch of this redeploy follows the list.
   Option B: Roll back to `latest/stable`. Redeploy on `latest/stable` and reconnect to the restored database. Use this only if the failure was caused by a `1.23/stable` regression that needs upstream investigation.
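For Option A, the redeploy can look roughly like the outline below. This is an assumption-laden sketch, not a verified procedure: it assumes a fresh model named temporal, the application names used throughout this guide, and that the restored PostgreSQL is reconnected per the postgresql-k8s restore how-to before traffic resumes; any other relations follow the standard Charmed Temporal deployment docs.
juju add-model temporal
juju deploy temporal-k8s --channel 1.23/stable
juju deploy temporal-admin-k8s temporal-admin --channel 1.23/stable
juju deploy temporal-ui-k8s temporal-ui --channel 1.23/stable
# Re-create the 1.23 relations from Step 5
juju integrate temporal-k8s:temporal-host-info temporal-ui:temporal-host-info
juju integrate temporal-k8s:temporal-host-info temporal-admin:temporal-host-info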
Report any reproducible failure to the temporal-k8s-operator issues with the diagnostics captured in item 2 above.