
How to run the integration tests
The integration tests for this charm are designed to be run by canonical/operator-workflows/integration_test.
To run them locally, your environment should be as similar as possible to the one created on the GitHub actions runner.
Development environment setup
Starting from a fresh Ubuntu 24.04 LTS (Noble Numbat) virtual machine, follow these instructions:
sudo has been omitted from all commands, though many of the commands below require root access.
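If you prefer not to prefix each command manually, one option is to run the whole setup from a root shell instead:

sudo -i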
Clone the repository
git clone git@github.com:canonical/wazuh-server-operator.git ~/wazuh-server-operator
cd ~/wazuh-server-operator
Install Charmcraft
snap install charmcraft --classic
Install tox
if ! (which pipx &> /dev/null); then
apt update && apt install -y pipx
pipx ensurepath
export PATH="${PATH}:${HOME}/.local/bin"
fi
pipx install tox
Install LXD
snap install lxd
newgrp lxd
lxd init --auto
Install Canonical Kubernetes
snap install k8s --classic
If you encounter an error running the next command, log out and log back in.
cat << EOF | k8s bootstrap --file -
containerd-base-dir: /opt/containerd
EOF
k8s enable network dns local-storage gateway
k8s status --wait-ready --timeout 5m
mkdir -p ~/.kube
k8s config > ~/.kube/config
Configure the Kubernetes load-balancer
The following commands will configure the Kubernetes metallb plugin to use IP addresses between .225 and .250 on the machine's 'real' subnet. If this creates IP address conflicts in your environment, please modify the commands.
k8s enable load-balancer
IPADDR=$(ip -4 -j route get 2.2.2.2 | jq -r '.[] | .prefsrc')
LB_FIRST_ADDR="$(echo "${IPADDR}" | awk -F'.' '{print $1,$2,$3,225}' OFS='.')"
LB_LAST_ADDR="$(echo "${IPADDR}" | awk -F'.' '{print $1,$2,$3,250}' OFS='.')"
LB_ADDR_RANGE="${LB_FIRST_ADDR}-${LB_LAST_ADDR}"
k8s set \
load-balancer.cidrs=$LB_ADDR_RANGE \
load-balancer.enabled=true \
load-balancer.l2-mode=true
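To confirm the load-balancer settings took effect, you can read them back; this assumes the k8s snap's get subcommand, which mirrors k8s set:

k8s get load-balancer.cidrs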
Install kubectl
snap install kubectl --classic
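As an optional sanity check, confirm the cluster is reachable with the kubeconfig written earlier:

kubectl get nodes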
Install Juju
snap install juju
Bootstrap the Kubernetes controller
juju bootstrap k8s
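To confirm the bootstrap succeeded, the standard Juju listing commands can be used:

juju controllers
juju models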
Run the pre-run script
bash -xe ~/wazuh-server-operator/tests/integration/pre_run_script.sh
(Optional) Install rock dependencies
If you anticipate changing the rock container image, follow these additional steps:
Install Rockcraft
snap install rockcraft --classic
Install and configure Docker
apt install -y docker.io
echo '{ "insecure-registries": ["localhost:5000"] }' > /etc/docker/daemon.json
systemctl restart docker
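To check that the daemon picked up the insecure-registry setting, inspect its runtime configuration (docker info reports insecure registries near the end of its output):

docker info | grep -A 1 'Insecure Registries'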
Install skopeo
apt install -y skopeo
Testing
This project uses tox for managing test environments. There are some pre-configured environments that can be used for linting and formatting code when you're preparing contributions to the charm:

- tox: Executes all of the basic checks and tests (lint, unit, static, and format).
- tox run -e fmt: Update your code according to linting rules.
- tox run -e lint: Runs a range of static code analysis to check the code.
- tox run -e static: Runs other checks such as bandit for security issues.
- tox run -e unit: Runs unit tests.
Integration testing
Integration testing is a multi-step process that requires:
- Building the charm
- (Optionally) building the rock
- Running the tests
Build the charm
cd ~/wazuh-server-operator
charmcraft pack
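charmcraft pack writes the packed charm into the current directory; the test commands below reference the file by name, so it is worth confirming what was produced:

ls wazuh-server_*.charm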
(Optional) Build the rock
If you have not made any changes to the rock, you do not need to rebuild it.
The GitHub integration test workflow builds and uploads the rock to ghcr.io for its own tests. If you haven't changed the rock since the last GitHub action run, you might as well reuse that artifact. Check https://github.com/canonical/wazuh-server-operator/pkgs/container/wazuh-server for the latest build's tag.
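Alternatively, you can list the available tags from the command line; a sketch, assuming the public package allows anonymous access:

skopeo list-tags docker://ghcr.io/canonical/wazuh-server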
If you did change the rock configuration:
cd ~/wazuh-server-operator/rock
rockcraft pack
Upload the rock into a local registry:
docker run -d -p 5000:5000 --restart always --name registry registry:2
skopeo --insecure-policy copy --dest-tls-verify=false \
oci-archive:wazuh-server_1.0_amd64.rock \
docker://localhost:5000/wazuh-server:latest
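To verify the image landed in the local registry, query the standard Docker Registry v2 API:

curl http://localhost:5000/v2/wazuh-server/tags/list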
Set the container location
If you rebuilt the rock:
export IMAGE_URL="localhost:5000/wazuh-server:latest"
If you did not rebuild the rock:
# find latest tag here:
# https://github.com/canonical/wazuh-server-operator/pkgs/container/wazuh-server
export IMAGE_URL="ghcr.io/canonical/wazuh-server:a063aca515693126206e4dfa6ba6eba4bac43698-_1.0_amd64"
Run tests
With three wazuh-indexer nodes (minimum 32 GB of RAM suggested):
tox run -e integration -- \
--charm-file=wazuh-server_ubuntu-22.04-amd64.charm \
--wazuh-server-image $IMAGE_URL \
--controller k8s --model test-wazuh
With a single wazuh-indexer node:
tox run -e integration -- \
--charm-file=wazuh-server_ubuntu-22.04-amd64.charm \
--wazuh-server-image $IMAGE_URL \
--single-node-indexer \
--controller k8s --model test-wazuh
Reuse environments
To get faster test results over multiple iterations you may want to reuse your integration environments. To do so, you can initially run:
tox run -e integration -- \
--charm-file=wazuh-server_ubuntu-22.04-amd64.charm \
--wazuh-server-image $IMAGE_URL \
--single-node-indexer --keep-models \
--controller k8s --model test-wazuh
For subsequent runs:
tox run -e integration -- \
--charm-file=wazuh-server_ubuntu-22.04-amd64.charm \
--wazuh-server-image $IMAGE_URL \
--single-node-indexer --keep-models \
--controller k8s --model test-wazuh \
--no-deploy
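When you are finished iterating, the kept model can be removed with a standard Juju command (this assumes the model name used above):

juju destroy-model test-wazuh --destroy-storage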
Troubleshooting
IO
Running the integration tests is IO-intensive. If you receive frequent errors related to Kubernetes timeouts, it may be related to disk IO limitations. If your environment is running on top of ZFS, consider setting sync=disabled.
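For example, to disable synchronous writes on the ZFS dataset backing your environment (tank/vm below is a placeholder for your own pool/dataset):

zfs set sync=disabled tank/vm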
AppArmor
Errors can also be related to the installed snaps’ AppArmor restrictions. You can review AppArmor ‘block’ decisions by searching kernel logs:
dmesg | grep 'apparmor="DENIED"'
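On hosts where dmesg output is restricted, the same denials can be read from the kernel journal:

journalctl -k | grep 'apparmor="DENIED"'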
To test whether AppArmor restrictions are causing an error, install each snap in developer mode by appending --devmode to the installation command. You can further reduce AppArmor restrictions by enabling the 'devmode-debug' setting.
Example:
snap install lxd --channel=6/stable --revision=34285 --devmode
snap stop lxd
snap set lxd devmode-debug=true
snap start lxd