Content Cache
| Channel | Revision | Published |
|---|---|---|
| latest/stable | 415 | 17 Nov 2024 |
| latest/stable | 413 | 17 Nov 2024 |
| latest/stable | 414 | 17 Nov 2024 |
| latest/stable | 346 | 14 Mar 2024 |
| latest/stable | 345 | 14 Mar 2024 |
| latest/stable | 344 | 14 Mar 2024 |
| latest/stable | 341 | 14 Mar 2024 |
| latest/stable | 340 | 14 Mar 2024 |
| latest/stable | 334 | 14 Mar 2024 |
| latest/stable | 91 | 01 Feb 2022 |
| latest/candidate | 385 | 11 Sep 2024 |
| latest/candidate | 384 | 11 Sep 2024 |
| latest/candidate | 383 | 11 Sep 2024 |
| latest/candidate | 346 | 14 Mar 2024 |
| latest/candidate | 345 | 14 Mar 2024 |
| latest/candidate | 344 | 14 Mar 2024 |
| latest/candidate | 23 | 08 Apr 2021 |
| latest/beta | 385 | 11 Sep 2024 |
| latest/beta | 384 | 11 Sep 2024 |
| latest/beta | 383 | 11 Sep 2024 |
| latest/beta | 346 | 14 Mar 2024 |
| latest/beta | 345 | 14 Mar 2024 |
| latest/beta | 344 | 14 Mar 2024 |
| latest/edge | 419 | 17 Nov 2024 |
| latest/edge | 418 | 17 Nov 2024 |
| latest/edge | 417 | 17 Nov 2024 |
| latest/edge | 416 | 17 Nov 2024 |
| latest/edge | 385 | 11 Sep 2024 |
| latest/edge | 384 | 11 Sep 2024 |
| latest/edge | 383 | 11 Sep 2024 |
| latest/edge | 346 | 14 Mar 2024 |
| latest/edge | 345 | 14 Mar 2024 |
| latest/edge | 344 | 14 Mar 2024 |
| latest/edge | 89 | 13 Jan 2022 |
```
juju deploy content-cache --channel edge
```
Content-cache can be used to deploy your own content distribution network (CDN) and provides:
- Full end-to-end encryption - TLS/SSL termination both between clients and the caching frontends and between the CDN and the backend servers.
- Caching - objects are cached and stored locally to reduce network bandwidth to shared infrastructure and to reduce load. Note that the cache is currently provided on a per-unit basis and is not shared between deployed units (see the scaling sketch after this list).
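Because each unit keeps its own independent cache, deployments are usually scaled out to several units. The commands below are a minimal sketch of doing so with Juju; the unit counts are illustrative.

```
# Deploy three caching units; each unit keeps its own local cache
juju deploy content-cache -n 3

# Or scale an existing deployment out by two further units
juju add-unit content-cache -n 2

# Inspect the resulting units
juju status content-cache
```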
In a content-cache deployment, each unit is composed of an HAProxy frontend, which forwards traffic to Nginx. Nginx then forwards traffic to an HAProxy backend, and from there traffic is forwarded to the upstream site as configured in the sites configuration option (sketched below).
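As a rough illustration of that option, the snippet below configures a single cached site with one upstream backend. The YAML layout, site name, backend address, and keys shown are assumptions made for the example; consult the charm's configuration reference for the exact schema.

```
# sites.yaml - hypothetical example; keys, site name and backend are illustrative
cat > sites.yaml <<'EOF'
mysite.example.com:
  port: 443
  locations:
    /:
      backends:
        - 192.0.2.10:80
EOF

# Apply the configuration to the deployed application
juju config content-cache sites="$(cat sites.yaml)"
```

With a layout along these lines, each unit's HAProxy frontend would terminate TLS for the site, Nginx would cache responses locally, and the HAProxy backend would relay cache misses to the upstream backend.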
This architecture was chosen after extensive performance and feature testing of the following possible solutions:
- Squid
- HAProxy & Squid
- Nginx
- HAProxy & Nginx
- HAProxy & Nginx & HAProxy
- HAProxy & Varnish HTTP Cache & HAProxy
- Hitch & Varnish HTTP Cache & HAProxy
When testing against large files (~100 MB), the Nginx and HAProxy & Squid solutions fared equally well. However, for smaller files (~32 kB), the Nginx solution was the better choice - lower overall system load, with Nginx processes consuming less CPU time than HAProxy for SSL/TLS termination.
Nginx was chosen for its better overall performance and feature set compared with the alternatives. However, the open-source version of Nginx provides only very basic metrics, so content-cache was designed with HAProxy on either side of it. This allows detailed metrics to be gathered for traffic to and from each unit.