Glance Sync

juju deploy glance-sync



Overview

The Glance sync charm provides cross-model/cross-cloud image synchronisation, from a glance-sync 'master' application to any number of client applications, referred to as glance-sync 'slaves'. The charm functions in one of the following two modes:

Master mode

The charm functions as the master source of a glance image sync. It copies glance images and metadata from the local glance application to local disk on the unit. The glance data is then accessible to the glance-sync 'slave' units on other models/clouds.

Slave mode

In this mode, the 'master_mode' juju config parameter is set to 'false', and the charm acts as a client to a master OpenStack installation with respect to glance images. The master OpenStack must have the glance-sync application deployed, with the parameter 'master_mode' set to 'true'. We refer to this configuration as the glance-sync-master, or 'master' for short.

The 'slave' unit will then sync up to the glance-sync-master, and look for any new glance images and metadata that may reside there, pulling them down using rsync over SSH. Finally, the slave will store the images and metadata to local disk, and import them into the local glance application.
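For illustration, a slave's periodic sync is roughly equivalent to the following manual steps. This is a hedged sketch only, not the charm's actual scripts: the hostname, file name and image format are placeholders, the path assumes the default 'data_dir', and the charm additionally handles metadata and 'community' image visibility.

rsync -avz ubuntu@<master-host>:/srv/glance_sync/data/ /srv/glance_sync/data/

openstack image create --disk-format qcow2 --container-format bare --file /srv/glance_sync/data/<image-file> <image-name>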

Requirements

For this charm to work, there must be a network path between the master and slave deployments. It should be possible to run multiple slaves, but this has not yet been tested with this version. The slave instances also need to be able to reach the OpenStack API of the master and the glance instance of the master site.
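A quick way to verify the required connectivity from a slave unit is sketched below. The application alias, hostname and endpoint are placeholders, and the SSH check assumes the key setup described under Master Configuration has already been done.

juju ssh gss/0 'ssh -o BatchMode=yes ubuntu@<master-host> true'

juju ssh gss/0 'curl -sk https://<master-keystone-endpoint>:5000/v3/'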

Configuration

The configuration options are listed below (m = applies in master mode, s = applies in slave mode); a short example of setting them follows the list:

  • m / s admin_email - email address that notifications are sent to
  • m authorized_keys - base64 encoded file containing the slave keys / commands allowed to sync
  • m / s config_dir - directory where the configuration files are stored
  • m / s cron_frequency - defaults to '10 */3', i.e. running 10 minutes past the hour, every 3 hours
  • m / s data_dir - location where the images and metadata files are cached
  • m / s log_dir - location where the logs created by the cron job are stored
  • s master_creds - Openstack credentials needed to download images from the glance master
  • m / s master_mode - whether this instance runs as master or as slave
  • m / s nagios_context - used by the nrpe-external-master subordinate for proper signaling of issues
  • m / s novarc - base64 encoded novarc, used to contact the local OpenStack API to retrieve the information about the images in glance. This file should also contain the information needed to access the mysql database, which is required in order to retrieve images marked as 'community', as these are not exposed by the glance API directly (although the API does allow them to be downloaded)
  • m / s script_dir - location where the scripts are installed
  • m / s sync_enabled - whether the sync should run
  • s sync_source - rsync URL pointing to the master
  • m / s trusted_ssl_ca - the CA needed for secure authentication of the openstack API
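For example, several of these options can be set on a deployed application in one juju config call; the 'gss' alias matches the slave deployment used later in this document, and the values are purely illustrative:

juju config gss cron_frequency='10 */6' sync_enabled=true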

Build

To deploy the charm, first build the charm from its respective charm layer.

Clone the glance-sync-layer repository to the machine where you have your local Juju environment, and install the charm snap if it is not already installed on the system.
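For example (the repository URL below is a placeholder for wherever you obtain the glance-sync-layer source):

sudo snap install charm --classic

git clone <glance-sync-layer-repository-url> /path/to/glance-sync-layer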

cd /path/to/glance-sync-layer/

make requirements

Now run the following command to build the charm for the 'xenial' series:

cd /path/to/glance-sync-layer/

make build

The charm layer ships a Makefile in the root directory of the repository, which can be used to automatically build the charm for you, as well as provide additional functionality.

The Makefile can also install the package dependencies for the build process, run unit and/or functional tests, and run the built-in linter, providing you with the output from flake8. Use make help for more details:

cd /path/to/glance-sync-layer/

make help

Master Installation

To deploy the charm in master-mode:

juju deploy --series=xenial --config master_mode=true ~/charms/glance-sync-layer/xenial/glance-sync gsm

To configure an already deployed charm to run in master-mode:

juju config <glance-sync-application> master_mode=true

WARNING: this will wipe ALL data that exists within the data directory, which has its path defined by the data_dir juju config parameter.

If you wish to keep a copy of the contents of the data directory, it is assumed that you have already made a backup of it, since performing a mode change inherently changes the behaviour and the file structure of the deployed unit.
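A minimal sketch of such a backup, assuming the default data_dir of /srv/glance_sync/data and a unit named gsm/0:

juju ssh gsm/0 'sudo tar czf /tmp/glance_sync_data.tar.gz /srv/glance_sync/data'

juju scp gsm/0:/tmp/glance_sync_data.tar.gz .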

Then add the relations to keystone and mysql:

juju add-relation keystone gsm

juju add-relation mysql gsm

Master Configuration

In order to allow the slave units to run rsync against the glance data directory of the master unit, the public SSH key of the 'ubuntu' user on each slave unit needs to be added to the authorized_keys file of the 'ubuntu' user on the master unit.

Get the slave public key:

juju scp gss/0:/home/ubuntu/.ssh/id_rsa.pub .

Configure on the master:

juju config gsm authorized_keys="$(base64 id_rsa.pub)"
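If you run more than one slave (untested, as noted under Requirements), the keys can be concatenated into a single file before encoding; the file names below are only illustrative:

cat gss-0_id_rsa.pub gss-1_id_rsa.pub > authorized_keys

juju config gsm authorized_keys="$(base64 authorized_keys)"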

Slave Installation

Assuming that you have the juju package installed and a controller/model added, switch to the model on which you wish to deploy the glance-sync charm, and run:

juju deploy --series=xenial /path/to/glance-sync-layer/<series>/glance-sync gss

If you wish to reconfigure a deployed master unit to run as a slave, you should run:

juju config <glance-sync-application> master_mode=false

WARNING: this will wipe ALL data that exists within the data directory, which has its path defined by the data_dir juju config parameter.

If you wish to keep a copy of the contents of the data directory, it is assumed that you have already made a backup of it, since performing a mode change inherently changes the behaviour and the file structure of the deployed unit.

Then add the relations to keystone and mysql:

juju add-relation keystone gss

juju add-relation mysql gss

Slave Configuration

Next, the charm will ask you to set a 'sync_source' (see the juju status output); then run:

juju config gss sync_source='ubuntu@<master-hostname-or-ip>:/srv/glance_sync/data'

This assumes that the 'data_dir' parameter has been left at its default value of '/srv/glance_sync/data'; otherwise, adjust the path to the data directory in the 'sync_source' parameter above.

Configure the OpenStack credentials of the master region in the master_creds parameter of the slave application (slave region).

Note: It is not recommended to use the admin user. A new user with limited privileges should be created.

juju config gss master_creds="username=$OS_MASTER_USERNAME, password=$OS_MASTER_PASSWORD, project=$OS_MASTER_PROJECT, region=$OS_MASTER_REGION, auth_url=$OS_AUTH_URL, domain=$OS_MASTER_DOMAIN"
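If you need to create such a limited user on the master region, a minimal sketch with the OpenStack CLI could look like the following; the user, project and role names are placeholders and should be adjusted to your deployment's policies:

openstack user create --project <master-project> --password <password> glance-sync

openstack role add --project <master-project> --user glance-sync member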