This role helps with running the OpenShift Agent-based Installer to create OpenShift clusters and OKD clusters.
First, a directory for all installer-related files such as `install-config.yaml`, `agent-config.yaml` and other manifests for `openshift-install` will be created from variable `openshift_abi_config_dir`. Files inside this directory such as `install-config.yaml` will be created from variable `openshift_abi_config`, which defines a list of tasks to be run by this role. Each task calls an Ansible module similar to tasks in roles or playbooks, except that only a few keywords such as `become` and `when` are supported.
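For illustration, such a task list might look as follows in `host_vars`; the file contents and the template name are hypothetical placeholders, not part of this role:

```yaml
openshift_abi_config:
- # Write a minimal install-config.yaml (placeholder content; cluster-specific
  # entries such as networking and platform sections are omitted)
  ansible.builtin.copy:
    content: |
      apiVersion: v1
      metadata:
        name: example-cluster
    dest: '{{ openshift_abi_config_dir }}/install-config.yaml'
- # Render agent-config.yaml from a Jinja2 template (hypothetical template name)
  ansible.builtin.template:
    src: agent-config.yaml.j2
    dest: '{{ openshift_abi_config_dir }}/agent-config.yaml'
```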
When a pull secret has been defined in variable `openshift_abi_pullsecret`, it will be written to file `openshift_abi_pullsecret_file`.
Next, the `openshift-install` binary will be extracted from the container image defined in `openshift_abi_release_image` to directory `openshift_abi_install_dir`, which defaults to `/usr/local/bin`. To aid debugging, the version of `openshift-install` will be printed.
Afterwards, `openshift-install` will generate the cluster manifests and the agent image for the OpenShift Agent-based Installer from the files in `openshift_abi_config_dir`. To boot all cluster nodes with the agent image, this role relies on the tasks defined in `openshift_abi_boot_code`. The latter defines a list of tasks which will be run by this role, similar to variable `openshift_abi_config`, in order to insert the agent image as virtual media in each cluster node and then reboot the node with the agent image.
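As a hypothetical sketch, assuming the cluster nodes expose Redfish-compatible BMCs and collection `community.general` is installed, `openshift_abi_boot_code` might look like this (BMC address, credentials and image URL are placeholders):

```yaml
openshift_abi_boot_code:
- # Insert the agent image as virtual media (placeholder BMC and credentials)
  community.general.redfish_command:
    category: Manager
    command: VirtualMediaInsert
    baseuri: bmc.node0.example.com
    username: admin
    password: secret
    virtual_media:
      image_url: 'http://www.example.com/agent.x86_64.iso'
      media_types:
      - CD
- # Force a reboot so the node boots from the agent image
  community.general.redfish_command:
    category: Systems
    command: PowerForceRestart
    baseuri: bmc.node0.example.com
    username: admin
    password: secret
```

In practice one such pair of tasks would be needed per cluster node.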
The role will then wait until:
- the rendezvous host (bootstrap host) has been bootstrapped,
- the cluster installation has been completed,
- the number of nodes matches the number of machines,
- all nodes are ready, because follow-up roles might require workload capacity, and
- cluster operators have finished progressing, to ensure that the configuration specified at installation time has been applied.
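The first two wait conditions can be expressed with `openshift-install`'s own `agent wait-for` subcommands; the following is a hedged sketch of how such tasks could look, not necessarily the role's actual implementation:

```yaml
- # Wait until the rendezvous host has been bootstrapped
  ansible.builtin.command:
    cmd: openshift-install agent wait-for bootstrap-complete --dir {{ openshift_abi_config_dir }}
- # Wait until the cluster installation has been completed
  ansible.builtin.command:
    cmd: openshift-install agent wait-for install-complete --dir {{ openshift_abi_config_dir }}
```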
Finally, this role will execute all tasks defined in variable `openshift_abi_cleanup_code`, which defines a list of tasks to eject the agent image from all cluster nodes.
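Analogous to the boot tasks, a hypothetical `openshift_abi_cleanup_code` based on Redfish virtual media could look like this (BMC address and credentials are placeholders):

```yaml
openshift_abi_cleanup_code:
- # Eject the agent image from the node's virtual media slot
  community.general.redfish_command:
    category: Manager
    command: VirtualMediaEject
    baseuri: bmc.node0.example.com
    username: admin
    password: secret
    virtual_media:
      image_url: 'http://www.example.com/agent.x86_64.iso'
```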
Tested OS images
- Cloud image (`amd64`) of Debian 10 (Buster)
- Cloud image (`amd64`) of Debian 11 (Bullseye)
- Cloud image (`amd64`) of Debian 12 (Bookworm)
- Cloud image (`amd64`) of Debian 13 (Trixie)
- Cloud image (`amd64`) of CentOS 7 (Core)
- Cloud image (`amd64`) of CentOS 8 (Stream)
- Cloud image (`amd64`) of CentOS 9 (Stream)
- Cloud image (`amd64`) of Fedora Cloud Base 40
- Cloud image (`amd64`) of Ubuntu 18.04 LTS (Bionic Beaver)
- Cloud image (`amd64`) of Ubuntu 20.04 LTS (Focal Fossa)
- Cloud image (`amd64`) of Ubuntu 22.04 LTS (Jammy Jellyfish)
- Cloud image (`amd64`) of Ubuntu 24.04 LTS (Noble Numbat)
Available on Ansible Galaxy in collection `jm1.cloudy`.
OpenShift Client aka `oc` is required for extracting `openshift-install` from the release image and for managing Kubernetes resources. You may use role `jm1.cloudy.openshift_client` to install it.
Agent-based Installer, i.e. `openshift-install`, requires nmstate when generating manifests if `agent-config.yaml` has any host in `hosts` that defines a `networkConfig` entry. Debian 12 (Bookworm), Ubuntu 22.04 LTS (Jammy Jellyfish) and older releases do not provide packages for nmstate; use CentOS Stream 9 or Fedora instead.
This role uses module(s) from collection `jm1.ansible`. To install this collection you may follow the steps described in README.md using the provided `requirements.yml`.
Name | Default value | Required | Description |
---|---|---|---|
`openshift_abi_boot_code` | undefined | true | List of tasks to run in order to insert and boot the agent image on all cluster nodes 1 2 3 |
`openshift_abi_cleanup_code` | undefined | true | List of tasks to run in order to eject the agent image from all cluster nodes 1 2 3 |
`openshift_abi_config` | undefined | true | List of tasks to run in order to create `install-config.yaml`, `agent-config.yaml` and other manifests for `openshift-install` in `openshift_abi_config_dir` 1 2 3 |
`openshift_abi_config_dir` | `~/clusterconfigs` | false | Directory where `install-config.yaml`, `agent-config.yaml` and other manifests will be stored. Defaults to `clusterconfigs` in `ansible_user`'s home |
`openshift_abi_install_dir` | `/usr/local/bin` | false | Directory where `openshift-install` will be installed to |
`openshift_abi_pullsecret` | undefined | false | Pull secret downloaded from Red Hat Cloud Console which will be used to authenticate with container registries `Quay.io` and `registry.redhat.io`, which serve the container images for OpenShift Container Platform components. A pull secret is required for OpenShift deployments only, not for OKD deployments. |
`openshift_abi_pullsecret_file` | `~/pull-secret.txt` | false | Path to pull secret file |
`openshift_abi_release_image` | undefined | true | Container image from which `openshift-install` will be extracted, e.g. `quay.io/okd/scos-release:4.13.0-0.okd-scos-2023-07-20-165025` |
None.
- hosts: all
roles:
- name: Create an OpenShift cluster with Agent-based Installer
role: jm1.cloudy.openshift_abi
tags: ["jm1.cloudy.openshift_abi"]
For a complete example on how to use this role, refer to hosts `lvrt-lcl-session-srv-500-okd-abi-ha-router` up to `lvrt-lcl-session-srv-530-okd-abi-ha-provisioner` from the provided examples inventory. The top-level README.md describes how these hosts can be provisioned with playbook `playbooks/site.yml`.
If you want to deploy OpenShift instead of OKD, download a pull secret from Red Hat Cloud Console. It is required to authenticate with container registries `Quay.io` and `registry.redhat.io`, which serve the container images for OpenShift Container Platform components. Next, change the following `host_vars` of Ansible host `lvrt-lcl-session-srv-530-okd-abi-ha-provisioner`:
openshift_abi_pullsecret: |
{"auths":{"xxxxxxx": {"auth": "xxxxxx","email": "xxxxxx"}}}
# Or read pull secret from file ~/pull-secret.txt residing at the Ansible controller
#openshift_abi_pullsecret: |
# {{ lookup('ansible.builtin.file', lookup('ansible.builtin.env', 'HOME') + '/pull-secret.txt') }}
openshift_abi_release_image: "{{ lookup('ansible.builtin.pipe', openshift_abi_release_image_query) }}"
openshift_abi_release_image_query: |
curl -s https://mirror.openshift.com/pub/openshift-v4/amd64/clients/ocp/stable-4.13/release.txt \
| grep 'Pull From: quay.io' \
| awk -F ' ' '{print $3}'
For instructions on how to run Ansible playbooks, have a look at Ansible's Getting Started Guide.
GNU General Public License v3.0 or later
See LICENSE.md for the full text.
Jakob Meng @jm1 (github, galaxy, web)
Footnotes
1. Useful Ansible modules in this context could be `blockinfile`, `command`, `copy`, `file`, `lineinfile` and `template`. ↩ ↩2 ↩3
2. Tasks will be executed with `jm1.ansible.execute_module` which supports keywords `become`, `become_exe`, `become_flags`, `become_method`, `become_user`, `environment` and `when` only. NOTE: Keywords related to `become` will not inherit values from the role's caller. For example, when `become` is defined in a playbook it will not be passed on to a task here. ↩ ↩2 ↩3
3. Tasks will be executed with `jm1.ansible.execute_module` which supports modules and action plugins only. Some Ansible modules such as `ansible.builtin.meta` and `ansible.builtin.{include,import}_{playbook,role,tasks}` are core features of Ansible, in fact not implemented as modules, and thus cannot be called from `jm1.ansible.execute_module`. Doing so causes Ansible to raise errors such as `MODULE FAILURE\nSee stdout/stderr for the exact error`. In addition, Ansible does not support free-form parameters for arbitrary modules, so for example, change `- debug: msg=""` to `- debug: { msg: "" }`. ↩ ↩2 ↩3