Content Cache
Channel | Revision | Published | Runs on |
---|---|---|---|
latest/stable | 440 | 08 Dec 2024 | |
latest/stable | 439 | 08 Dec 2024 | |
latest/stable | 438 | 08 Dec 2024 | |
latest/stable | 346 | 14 Mar 2024 | |
latest/stable | 345 | 14 Mar 2024 | |
latest/stable | 344 | 14 Mar 2024 | |
latest/stable | 341 | 14 Mar 2024 | |
latest/stable | 340 | 14 Mar 2024 | |
latest/stable | 334 | 14 Mar 2024 | |
latest/stable | 91 | 01 Feb 2022 | |
latest/candidate | 385 | 11 Sep 2024 | |
latest/candidate | 384 | 11 Sep 2024 | |
latest/candidate | 383 | 11 Sep 2024 | |
latest/candidate | 346 | 14 Mar 2024 | |
latest/candidate | 345 | 14 Mar 2024 | |
latest/candidate | 344 | 14 Mar 2024 | |
latest/candidate | 23 | 08 Apr 2021 | |
latest/beta | 437 | 07 Dec 2024 | |
latest/beta | 436 | 07 Dec 2024 | |
latest/beta | 435 | 07 Dec 2024 | |
latest/beta | 346 | 14 Mar 2024 | |
latest/beta | 345 | 14 Mar 2024 | |
latest/beta | 344 | 14 Mar 2024 | |
latest/edge | 426 | 02 Dec 2024 | |
latest/edge | 425 | 02 Dec 2024 | |
latest/edge | 424 | 02 Dec 2024 | |
latest/edge | 423 | 02 Dec 2024 | |
latest/edge | 385 | 11 Sep 2024 | |
latest/edge | 384 | 11 Sep 2024 | |
latest/edge | 383 | 11 Sep 2024 | |
latest/edge | 346 | 14 Mar 2024 | |
latest/edge | 345 | 14 Mar 2024 | |
latest/edge | 344 | 14 Mar 2024 | |
latest/edge | 89 | 13 Jan 2022 | |
1/edge | 428 | 04 Dec 2024 | |
```bash
juju deploy content-cache
```
Content-cache can be used to deploy your own content distribution network (CDN) and provides:
- Full end-to-end encryption - TLS/SSL termination both between clients and the caching frontends and between the CDN and the backend servers.
- Caching - objects are cached and stored locally to reduce network bandwidth to shared infrastructure and to reduce load. Note that the cache is currently provided on a per-unit basis and is not shared between deployed units.
In a content-cache deployment, each unit is composed of an HAProxy frontend, which forwards traffic to Nginx. Nginx then forwards traffic to an HAProxy backend, and from there traffic is forwarded to the upstream site as configured in the `sites` configuration option (see the sketch below).
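For illustration, configuring an upstream site might look like the sketch below. The `juju config` command is standard Juju CLI, but the YAML passed to `sites` is only an illustrative sketch: the hostname and backend address are placeholders, and the authoritative schema for `sites` is defined by the charm's configuration documentation, not by this page.

```bash
# Assuming the application was deployed as shown above, point it at an
# upstream site. The YAML below is an illustrative sketch of a `sites`
# value; the hostname and backend address are placeholders, and the exact
# schema is defined by the charm's configuration documentation.
juju config content-cache sites='
mysite.example.com:
  port: 80
  locations:
    /:
      backends:
        - 192.0.2.10:80
'
```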
This architecture was chosen after extensive performance and feature testing of the following possible solutions:
- Squid
- HAProxy & Squid
- Nginx
- HAProxy & Nginx
- HAProxy & Nginx & HAProxy
- HAProxy & Varnish HTTP Cache & HAProxy
- Hitch & Varnish HTTP Cache & HAProxy
When testing against large files (~100 MB), the Nginx and HAProxy & Squid solutions fared equally well. However, for smaller files (~32 kB), the Nginx solution proved the better choice: lower overall system load, with Nginx processes consuming less CPU time than HAProxy for SSL/TLS termination.
Nginx was chosen for its better overall performance and feature set compared with the alternatives. However, the open-source version of Nginx only provides very basic metrics, so content-cache was designed with HAProxy on either side of it. This allows detailed metrics to be gathered about traffic to and from each unit.
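As a rough example of what those HAProxy layers make possible, the HAProxy runtime API on each unit can be queried for per-frontend and per-backend counters. The command below is only a sketch: it assumes `socat` is available on the unit and that HAProxy's admin socket sits at the default Ubuntu path (`/run/haproxy/admin.sock`), neither of which is stated on this page.

```bash
# Dump selected HAProxy counters on the first unit.
# Columns 1, 2, 8 and 9 of the `show stat` CSV are proxy name, server name,
# total sessions and bytes in. Assumes socat is installed and the admin
# socket is at the default Ubuntu path.
juju ssh content-cache/0 -- \
  'echo "show stat" | sudo socat stdio /run/haproxy/admin.sock | cut -d, -f1,2,8,9'
```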