# File _service:obs_scm:ansible-container.obscpio of Package ansible-container
File: ansible-container/.gitignore

```
# Prerequisites
*.d

# Object files
*.o
*.ko
*.obj
*.elf

# Linker output
*.ilk
*.map
*.exp

# Precompiled Headers
*.gch
*.pch

# Libraries
*.lib
*.a
*.la
*.lo

# Shared objects (inc. Windows DLLs)
*.dll
*.so
*.so.*
*.dylib

# Executables
*.exe
*.out
*.app
*.i*86
*.x86_64
*.hex

# Debug files
*.dSYM/
*.su
*.idb
*.pdb

# Kernel Module Compile Results
*.mod*
*.cmd
.tmp_versions/
modules.order
Module.symvers
Mkfile.old
dkms.conf
```

File: ansible-container/Dockerfile

```dockerfile
# SPDX-License-Identifier: MIT

# Define the tags for OBS and build script builds:
#!BuildTag: suse/alp/workloads/ansible:latest
#!BuildTag: suse/alp/workloads/ansible:%PKG_VERSION%.%TAG_OFFSET%
#!BuildTag: suse/alp/workloads/ansible:%PKG_VERSION%.%TAG_OFFSET%.%RELEASE%

FROM opensuse/tumbleweed:latest

# Mandatory labels for the build service:
# https://en.opensuse.org/Building_derived_containers
# Define labels according to https://en.opensuse.org/Building_derived_containers
# labelprefix=com.suse.alp.workloads.ansible
LABEL org.opencontainers.image.title="Ansible base container"
LABEL org.opencontainers.image.description="Container for Ansible"
LABEL org.opencontainers.image.created="%BUILDTIME%"
LABEL org.opencontainers.image.version="0.1"
LABEL org.openbuildservice.disturl="%DISTURL%"
LABEL org.opensuse.reference="registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/ansible:%PKG_VERSION%-%RELEASE%"
LABEL com.suse.supportlevel="techpreview"
LABEL com.suse.eula="beta"
LABEL com.suse.image-type="application"
LABEL com.suse.release-stage="alpha"
# endlabelprefix

# openssh-clients : for ansible ssh
ENV LANG en_US.UTF-8
ENV LC_ALL en_US.UTF-8

RUN mkdir -p /container

COPY label-install \
    label-uninstall \
    ansible-wrapper.sh \
    hosts_alphost_group \
    /container

ADD examples/ /container/examples

RUN chmod +x /container/ansible-wrapper.sh

WORKDIR /work

LABEL INSTALL="/usr/bin/podman run --env IMAGE=IMAGE --rm --security-opt label=disable -v /:/host IMAGE /bin/bash /container/label-install"
LABEL UNINSTALL="/usr/bin/podman run --rm --security-opt label=disable -v /:/host IMAGE /bin/bash /container/label-uninstall"
LABEL USER-INSTALL="/usr/bin/podman run --env IMAGE=IMAGE --security-opt label=disable --rm -v \${PWD}/:/host IMAGE /bin/bash /container/label-install"
LABEL USER-UNINSTALL="/usr/bin/podman run --rm --security-opt label=disable -v \${PWD}/:/host IMAGE /bin/bash /container/label-uninstall"

RUN zypper -v -n in \
        ansible \
        ansible-lint \
        ansible-test \
        git-core \
        openssh-clients \
        python3-libvirt-python \
        python3-lxml \
        python3-netaddr \
    ; \
    zypper clean --all

RUN cat /container/hosts_alphost_group >> /etc/ansible/hosts
```

File: ansible-container/LICENSE

MIT License

Copyright (c) 2023 SUSE SA

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

File: ansible-container/OpenBuildService.md

# Open Build Service Integration

This container is currently being built in [SUSE:ALP:Workloads/ansible-container](https://build.opensuse.org/package/show/SUSE:ALP:Workloads/ansible-container). The built container images are available from [registry.opensuse.org](https://registry.opensuse.org/cgi-bin/cooverview?srch_term=project%3D%5ESUSE%3A+container%3Dansible).

Source services have been configured in the [\_service file](https://build.opensuse.org/package/view_file/SUSE:ALP:Workloads/ansible-container/_service?expand=1) to simplify maintenance activities.

## Maintenance Workflow

The following assumes a working knowledge of the [SUSE Open Build Service (OBS)](https://build.opensuse.org/). For more details see the [official documentation](https://openbuildservice.org/help/) and the [Wiki](https://en.opensuse.org/Portal:Build_Service).

The maintenance workflow can leverage the provided source service integration, using the following steps:

* Branch the main package
* Update the sources
* Test build the container
* Commit the updates
* Test the published container
* Submit the updates to the main package

## Branch the main package

Create a branch of the main package under your home area, using the [osc](https://en.opensuse.org/openSUSE:OSC) command:

```shell
% osc bco SUSE:ALP:Workloads/ansible-container
```

This will create a local checkout `home:username:branches:SUSE:ALP:Workloads/ansible-container`, where `username` is your OBS account username. Note that you should ensure that publishing is enabled for your branch project so that the image will be visible in registry.opensuse.org.

## Update the sources

To update the container build to use the latest sources from [SUSE/ansible-container](https://github.com/SUSE/ansible-container), you can trigger the remote source services, which will update the sources in the build service, as follows:

```shell
% cd home:username:branches:SUSE:ALP:Workloads/ansible-container
% osc service remoterun
```

Note that this will also update the `ansible-container.changes` file based upon the GitHub PR titles.

To see the updated sources you can additionally run the services locally:

```shell
% osc service localrun
```

The services defined in the `_service` file will extract the `Dockerfile` and associated files from the sources, which should allow local building of the container.

## Test build the container

To test that the container will build correctly in the build service you can try building it locally as follows:

```shell
% osc service localrun
% osc build --vm-type=kvm --vm-disk-size=8192 Tumbleweed_containerfiles x86_64
```

The `osc service localrun` is required to ensure that the relevant files needed to perform the image build have been extracted locally from the sources.
Note that `--vm-disk-size=8192` is required to ensure that sufficient disk space is available for the container build process to complete. Also, you may have to respond to prompts to trust package source repositories as part of running the build.

Remember to clean up any temporary files that may have been created by `osc service localrun` once you are happy that the container builds successfully.

## Commit the updates

As part of committing the changes you should ensure that your changes are included in osc version control:

* add any required new files
* remove any deleted old files.

You should also ensure that temporary files are removed, such as those created by running `osc service localrun`. An easy way to do this, so long as you have already ensured your required files are included in the osc version control system, is to run `osc clean`, which removes from the working directory all files that are not managed by the version control system or by the osc command itself.

When you are satisfied that the updated sources are viable you can commit any changes to the repo as follows:

```shell
% osc commit
```

## Test the published container

Once the updated container has been built you can retrieve it from your area in [registry.opensuse.org](https://registry.opensuse.org/cgi-bin/cooverview?srch_term=project%3D%5Ehome%3Ausername+container%3Dansible), changing `username` to your build service username. Note that you may need to enable publishing for your project to be able to see the published image.

Perform relevant verification testing for any changes pulled in by the updated sources.

## Submit the updates to the main package

Once you are happy with the candidate changes to the ansible-container you can submit them to the main package using:

```shell
% osc submitrequest
```

File: ansible-container/README.md

# Containerized Ansible: What's inside #

This container provides the Ansible toolstack inside a container.

* **Dockerfile** with the definition of the ansible container
  * based on openSUSE Tumbleweed
  * installs Ansible and some additional tools

## Intended purpose

This container is intended as a reference example of an Ansible workload container, based upon the latest Ansible version available for openSUSE Tumbleweed, for use on SUSE's Adaptable Linux Platform (ALP). It is tailored for that purpose, with included example playbooks that demonstrate how to configure networking and enable Libvirt support.

### SUSE ALP Open Build Service

This container is being built in the [Open Build Service SUSE:ALP:Workloads project](https://build.opensuse.org/package/show/SUSE:ALP:Workloads/ansible-container) and published in [registry.opensuse.org](https://registry.opensuse.org/cgi-bin/cooverview?srch_term=project%3D%5ESUSE%3A+container%3Dansible).

See our [Open Build Service integration workflow](OpenBuildService.md) for more details.

### Testing

Note that [SUSE/alp-test-env](https://github.com/SUSE/alp-test-env) was developed to support development and testing of this container. It can be used to bring up one or more ALP test VMs in a repeatable fashion for testing purposes.

## System Setup ##

* Podman, python3-lxml and python3-rpm are needed on the container host. The run label commands are hard coded to use podman. Python3-lxml and python3-rpm are required on the container host for Ansible to interact with libvirt and gather package facts.
* `kernel-default-base` does not contain the drivers needed for many NetworkManager (nmcli) operations, such as creating bonded interfaces, and should be replaced with `kernel-default`.
  * `sudo transactional-update pkg install python3-rpm python3-lxml kernel-default -kernel-default-base`
* A system reboot is required after all transactional updates:
  * `sudo shutdown -r now`

## Ansible commands

The ansible commands are provided as symlinks to `ansible-wrapper.sh`. The commands will instantiate the container and execute the corresponding ansible command.

## To install ansible commands ##

* as root:
  * for the root user the ansible commands are placed in `/usr/local/bin`
  * `podman container runlabel install ansible`
* as non-root:
  * For non-root users, `podman container runlabel user-install ansible` will place the ansible commands in `${PWD}/bin`. The following will install the ansible commands into the current user's bin area (`~/bin`):
  * `(cd ~; podman container runlabel user-install ansible)`

## Ansible Commands ##

* ansible
* ansible-community
* ansible-config
* ansible-connection
* ansible-console
* ansible-doc
* ansible-galaxy
* ansible-inventory
* ansible-lint
* ansible-playbook
* ansible-pull
* ansible-test
* ansible-vault

## Uninstall ansible commands ##

* as root:
  * `podman container runlabel uninstall ansible`
* as non-root:
  * `(cd ~; podman container runlabel user-uninstall ansible)`

## Operation is through SSH back to container host or to other remote systems ##

Since Ansible is running within a container, the default localhost environment is the container itself and not the system hosting the container instance. As such, any changes made to the localhost environment are in fact being made to the container and would be lost when the container exits.

Instead, Ansible can be targeted at the host running the container, namely `host.containers.internal`, via an SSH connection, using an Ansible inventory similar to that found in `examples/ansible/inventory/inventory.yaml`, which looks like:

```yaml
alphost_group:
  hosts:
    alphost:
      ansible_host: host.containers.internal
      ansible_python_interpreter: /usr/bin/python3
```

NOTE: An equivalent `alphost` default inventory item has also been added to the container's `/etc/ansible/hosts` inventory, which can be leveraged by the `ansible` command line tool. For example, to run the `setup` module to collect and show the system facts from the `alphost` you could run a command like the following:

```shell
$ ansible alphost -m setup
alphost | SUCCESS => {
    "ansible_facts": {
        ...
    },
    "changed": false
}
```

The inventory record could also contain other hosts to be managed.

### SSH keys must be set up ###

The container must be able to SSH to the system being managed. So, the system must support SSH access, the SSH keys must have been created (using `ssh-keygen`), and the public key must be in the `.ssh/authorized_keys` file for the target user. While the root user can be used so long as the system allows SSH'ing to the root account, the preferred method is to use a non-root account that has passwordless sudo rights. Any operations in Ansible playbooks that require system privilege would then need to use `become: true`.

SSH access can be validated with `ssh localhost`.

# Ansible Playbooks List

See the `examples/ansible` directory for example Ansible playbooks. On an ALP system where the Ansible workload container has been installed, using the `install` runlabel, the examples will be available under the `/usr/local/share/ansible-container/examples/ansible` directory.
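For instance, a quick way to explore the installed examples is a sequence along these lines (a minimal sketch; the `--syntax-check` pass is optional and `playbook.yml` is just the basic example described below):

```shell
$ cd /usr/local/share/ansible-container/examples/ansible
$ ls *.yml                                       # list the available example playbooks
$ ansible-playbook --syntax-check playbook.yml   # parse the playbook without making changes
$ ansible-playbook playbook.yml                  # run the basic example against the alphost inventory entry
```

The same pattern applies to any of the other playbooks listed in the next section.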
There are twelve playbooks currently under `/usr/local/share/ansible-container/examples/ansible`.

## Example Playbooks

* playbook.yml
* network.yml

## Workload Setup Playbooks

* setup_libvirt_host.yml
* setup_cockpit.yml
* setup_firewalld.yml
* setup_gnome_display_manager.yml
* setup_kea_dhcp_server.yml
* setup_kea_dhcpv6_server.yml
* setup_grafana.yml
* setup_neuvector.yml

## VM Creation Playbooks

* create_alp_vm.yml
* create_tumbleweed_vm.yml

## Simple Ansible test (playbook.yml)

The `playbook.yml` playbook tests several common Ansible operations, such as gathering facts and testing for installed packages. The play is invoked by changing to the `/usr/local/share/ansible-container/examples/ansible` directory and entering:

```shell
$ ansible-playbook playbook.yml
...
PLAY RECAP ***************************************************************************************************************
alphost                    : ok=8    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
```

## Ansible driving nmcli to change system networking (network.yml)

The `network.yml` playbook uses the `community.general.nmcli` plugin to test common network operations such as assigning static IP addresses to NICs and creating bonded interfaces. The NICs, IP addresses, bond names and bonded NICs are defined in the `vars` section of network.yml and should be updated to reflect the current user environment.

The `network.yml` play is run by changing to the `/usr/local/share/ansible-container/examples/ansible` directory and entering:

```shell
$ ansible-playbook network.yml
...
TASK [Ping test Bond IPs] ************************************************************************************************
ok: [alphost] => (item={'name': 'bondcon0', 'ifname': 'bond0', 'ip4': '192.168.181.10/24', 'gw4': '192.168.181.2', 'mode': 'active-backup'})
ok: [alphost] => (item={'name': 'bondcon1', 'ifname': 'bond1', 'ip4': '192.168.181.11/24', 'gw4': '192.168.181.2', 'mode': 'balance-alb'})

TASK [Ping test static nics IPs] *****************************************************************************************
ok: [alphost] => (item={'name': 'enp2s0', 'ifname': 'enp2s0', 'ip4': '192.168.181.3/24', 'gw4': '192.168.181.2', 'dns4': ['8.8.8.8']})
ok: [alphost] => (item={'name': 'enp3s0', 'ifname': 'enp3s0', 'ip4': '192.168.181.4/24', 'gw4': '192.168.181.2', 'dns4': ['8.8.8.8']})

PLAY RECAP ***************************************************************************************************************
alphost                    : ok=9    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
```

## Setup ALP as a Libvirt host

The `setup_libvirt_host.yml` playbook can be used to install the ALP `kvm-container` workload. To try out this example playbook, you can change directory to the `/usr/local/share/ansible-container/examples/ansible` directory and run the following command:

```shell
$ cd /usr/local/share/ansible-container/examples/ansible
$ ansible-playbook setup_libvirt_host.yml
...
PLAY RECAP *****************************************************************************************************************************
alphost                    : ok=11   changed=6    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

$ sudo /usr/local/bin/virsh list --all
using /etc/kvm-container.conf as configuration file
+ podman exec -ti libvirtd virsh list --all
Authorization not available. Check if polkit service is running or see debug message for more information.
 Id   Name   State
--------------------
```

NOTE: If the required kernel and supporting packages are not already installed, a reboot will be required to complete the installation of those packages; re-run the playbook after the reboot has completed successfully to finish the setup.

## Create an openSUSE Tumbleweed appliance VM

The `create_tumbleweed_vm.yml` example playbook can be used to create and start a Libvirt managed VM, called `tumbleweed`, based upon the latest available Tumbleweed appliance VM image. It leverages the `setup_libvirt_host.yml` example playbook, as outlined previously, to ensure that the ALP host is ready to manage VMs before creating the new VM, and may fail, prompting you to reboot, before running the playbook again to finish setting up Libvirt and creating the VM.

```shell
$ cd /usr/local/share/ansible-container/examples/ansible
$ ansible-playbook create_tumbleweed_vm.yml
...
TASK [Query list of libvirt VMs] *******************************************************************************************************
ok: [alphost]

TASK [Show that Tumbleweed appliance has been created] *********************************************************************************
ok: [alphost] => {
    "msg": "Running VMs: tumbleweed"
}

PLAY RECAP *****************************************************************************************************************************
alphost                    : ok=15   changed=4    unreachable=0    failed=0    skipped=6    rescued=0    ignored=0
```

## Setup NeuVector on ALP host

The `setup_neuvector.yml` playbook can be used to deploy the NeuVector workload on an ALP host.

```shell
$ cd /usr/local/share/ansible-container/examples/ansible
$ ansible-playbook setup_neuvector.yml
...
TASK [Print message connect to NeuVector] ************************************************************************************************************************************************************************
ok: [alphost] => {
    "msg": "NeuVector is running on https://HOST_RUNNING_NEUVECTOR_SERVICE:8443 You need to accept the warning about the self-signed SSL certificate and log in with the following default credentials: admin / admin."
}
...
PLAY RECAP *****************************************************************************************************************************
alphost                    : ok=8    changed=6    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
```

For more details, you can refer to the [SUSE ALP documentation](https://documentation.suse.com/alp/dolomite/html/alp-dolomite/available-alp-workloads.html#task-run-neuvector-with-podman).

## Setup Kea DHCP Server on ALP Host

The `setup_kea_dhcp_server.yml` and `setup_kea_dhcpv6_server.yml` playbooks automate the deployment and management of the Kea DHCPv4 and DHCPv6 server workloads on an ALP host.

```shell
$ cd /usr/local/share/ansible-container/examples/ansible
$ ansible-playbook setup_kea_dhcp_server.yml
...
TASK [Start Kea DHCPv4 server using systemd] *********************************************************************************************************************************************************************
changed: [alphost]

PLAY RECAP *******************************************************************************************************************************************************************************************************
alphost                    : ok=6    changed=5    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
```

### Configuring DHCP Server

For configuration, the playbooks utilize sample files named `kea-dhcp4.conf` and `kea-dhcp6.conf`.
These files are located in the `/templates` directory and are provided as default configurations for Kea DHCPv4 and DHCPv6 servers, respectively. While these default configurations are suitable for many environments, you might have specific requirements or preferences for your setup. In such cases, you can modify these files in the `/templates` directory before running the playbook, allowing for a more tailored DHCP configuration. After deployment, the active Kea configuration files can be found in the `/etc/kea` directory.

For a deep dive into configuring the Kea DHCP server, refer to the official documentation available at https://kea.readthedocs.io/

## Setup Cockpit Web Server on ALP Host

The `setup_cockpit.yml` playbook automates the deployment of the Cockpit Web server on an ALP Dolomite host using a containerized approach with Podman.

```shell
$ cd /usr/local/share/ansible-container/examples/ansible
$ ansible-playbook setup_cockpit.yml
...
TASK [Start Cockpit Web server using systemd] ***************************************************************************************************
changed: [alphost]

PLAY RECAP ***************************************************************************************************************************************
alphost                    : ok=7    changed=5    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
```

After running the playbook, access the Cockpit Web interface at https://HOSTNAME_OR_IP_OF_ALP_HOST:9090. Accept the certificate warning due to the self-signed certificate.

## Deploy Firewalld on ALP Host

Using the `setup_firewalld.yml` Ansible playbook, deploy Firewalld via Podman on SUSE ALP Dolomite to define network trust levels. Ensure dbus and polkit configurations are set beforehand. Use the `/usr/local/bin/firewall-cmd` wrapper to manage the firewalld instance. For an in-depth understanding, refer to the [Firewalld-Podman-Dolomite Documentation](https://documentation.suse.com/alp/dolomite/html/alp-dolomite/available-alp-workloads.html#task-run-firewalld-with-podman).

```shell
$ cd /usr/local/share/ansible-container/examples/ansible
$ ansible-playbook setup_firewalld.yml
...
PLAY RECAP ***************************************************************************************************************************************
alphost                    : ok=8    changed=5    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
```

## Deploy GNOME Display Manager on ALP Host

This playbook simplifies the deployment and running of the GNOME Display Manager (GDM) on SUSE ALP Dolomite. Leveraging Podman, it allows users to run GDM within a containerized environment. The playbook will install necessary packages, configure SELinux, retrieve and set up the necessary container images, manage system services related to GDM, and start GDM as a service. For an in-depth understanding, refer to the [GDM-Dolomite Documentation](https://documentation.suse.com/alp/dolomite/html/alp-dolomite/available-alp-workloads.html#task-run-gdm-with-podman).

```shell
$ cd /usr/local/share/ansible-container/examples/ansible
$ ansible-playbook setup_gnome_display_manager.yml
...
PLAY RECAP ***************************************************************************************************************************************
alphost                    : ok=10   changed=6    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
```

## Deploy Grafana on SUSE ALP Host

Using the `setup_grafana.yml` Ansible playbook, you can automate the deployment of Grafana on a SUSE ALP Dolomite host. This playbook leverages Podman for the deployment of Grafana.
For an in-depth understanding, refer to the [Grafana-Dolomite Documentation](https://documentation.suse.com/alp/dolomite/html/alp-dolomite/available-alp-workloads.html#task-run-grafana-with-podman).

```shell
$ cd /usr/local/share/ansible-container/examples/ansible
$ ansible-playbook setup_grafana.yml
...
PLAY RECAP ***************************************************************************************************************************************
alphost                    : ok=6    changed=4    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
```

Upon successful execution of the playbook, access the Grafana interface at http://HOSTNAME_OR_IP_OF_ALP_HOST:3000. When logging in for the first time, use the default credentials admin for both the username and password. Subsequently, set a new password as prompted.

File: ansible-container/ansible-wrapper.sh

```shell
#! /bin/sh

PATH=/sbin:/usr/sbin:/usr/local/sbin:/usr/local/bin:/usr/bin:/bin

IMAGE=registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/ansible:latest

for cnf in /etc/default/ansible-container ~/.config/ansible-container/image
do
    if [ -r ${cnf} ]; then
        . ${cnf}
    fi
done

KEED_USERID=""
if [[ $(id -ru) != "0" ]]; then
    KEED_USERID="--userns=keep-id"
fi

# make symlinks for mount points
# this is needed to handle colons in file path names
LINK_DIR=`mktemp -d -p /tmp`
ln -s $(pwd) ${LINK_DIR}/work
ln -s ${HOME} ${LINK_DIR}/home

podman run --security-opt label=disable -it -v ${LINK_DIR}/work:/work -v ${LINK_DIR}/home:${HOME} ${KEED_USERID} --rm ${IMAGE} "$(basename "${0}")" "$@"

# clean up symlink area
rm ${LINK_DIR}/work ${LINK_DIR}/home
rmdir ${LINK_DIR}
```

File: ansible-container/examples/ansible/create_alp_vm.yml

--- # # An example playbook showing how to create an ALP VM. # # Ensure the system is ready to act as a libvirt host. # NOTE: A reboot may be required if packages need to be installed.
- name: Setup ALP system as a libvirt host import_playbook: setup_libvirt_host.yml tags: libvirt - name: Create an ALP VM hosts: alphost vars: appliance: name: alptest mirror: https://download.opensuse.org/repositories/SUSE:/ALP/images/ image: ALP-VM.x86_64-0.0.1-kvm-Build24.3 format: qcow2 checksum: sha256 vcpus: 2 memory_mb: 1536 disk_size_gb: 30 libvirt: images: /var/lib/libvirt/images network: default_network tasks: - name: Check if we already have the ALP VM image ansible.builtin.stat: path: "{{ libvirt.images }}/{{ appliance.image }}.{{ appliance.format }}" register: stat_vm_image - name: Download ALP VM image become: true ansible.builtin.get_url: dest: "{{ libvirt.images }}" url: "{{ item.url }}" checksum: "{{ appliance.checksum }}:{{ item.url }}.{{ appliance.checksum }}" mode: '0644' loop: - name: "{{ appliance.image }}" url: "{{ appliance.mirror }}/{{ appliance.image }}.{{ appliance.format }}" loop_control: label: "{{ item.name }}" when: - not stat_vm_image.stat.exists - name: Query list of configured libvirt networks become: true community.libvirt.virt_net: command: list_nets register: virt_net_list_nets - name: Fail if required network is not available ansible.builtin.fail: msg: "ERROR: required '{{ libvirt.network }}' missing!" when: - libvirt.network not in virt_net_list_nets.list_nets - name: Determine list of SSH public keys ansible.builtin.set_fact: ssh_pub_keys: "{{ lookup('ansible.builtin.fileglob', '~/.ssh/*.pub').split(',') }}" - name: Generate ignition config file become: true ansible.builtin.template: src: "config.ign.j2" dest: "{{ libvirt.images }}/{{ appliance.name }}.ign" mode: '0644' - name: Query list of libvirt VMs become: true community.libvirt.virt: command: list_vms register: virt_list_vms - name: Create the ALP VM if not running become: true ansible.builtin.command: >- /usr/local/bin/virt-install --connect qemu:///system --import --name {{ appliance.name }} --osinfo opensusetumbleweed --virt-type kvm --hvm --machine q35 --boot hd --cpu host-passthrough --video vga --console pty,target_type=virtio --noautoconsole --network network={{ libvirt.network }} --rng /dev/urandom --vcpu {{ appliance.vcpus }} --memory {{ appliance.memory_mb }} --disk size={{ appliance.disk_size_gb }}, backing_store={{ libvirt.images }}/{{ appliance.image }}.{{ appliance.format }}, backing_format={{ appliance.format }}, bus=virtio, cache=none --graphics vnc,listen=0.0.0.0 --sysinfo type=fwcfg,entry0.name="opt/com.coreos/config",entry0.file="{{ libvirt.images }}/{{ appliance.name }}.ign" --tpm backend.type=emulator,backend.version=2.0,model=tpm-tis register: virt_install_vm changed_when: - "virt_install_vm.rc == 0" when: - appliance.name not in virt_list_vms.list_vms - name: Query list of libvirt VMs become: true community.libvirt.virt: command: list_vms register: virt_list_vms - name: Show that the ALP VM has been created ansible.builtin.debug: msg: "Running VMs: {{ virt_list_vms.list_vms | join(', ') }}" when: - appliance.name in virt_list_vms.list_vms 07070100000009000081A4000000000000000000000001652E3F1A00000D93000000000000000000000000000000000000003C00000000ansible-container/examples/ansible/create_tumbleweed_vm.yml--- # # A example playbook showing how the create a openSUSE Tumbleweed VM. # # Ensure the system is ready to act as a libvirt host. # NOTE: A reboot may be required if packages need to be installed. 
- name: Setup ALP system as a libvirt host import_playbook: setup_libvirt_host.yml tags: libvirt - name: Create an openSUSE Tumbleweed appliance hosts: alphost vars: appliance: name: tumbleweed mirror: https://download.opensuse.org/tumbleweed/appliances image: openSUSE-Tumbleweed-Minimal-VM.x86_64-kvm-and-xen format: qcow2 checksum: sha256 vcpus: 2 memory_mb: 2048 disk_size_gb: 30 libvirt: images: /var/lib/libvirt/images network: default_network tasks: - name: Check if we already have the openSUSE Tumbleweed image ansible.builtin.stat: path: "{{ libvirt.images }}/{{ appliance.image }}.{{ appliance.format }}" register: stat_vm_image - name: Download openSUSE Tumbleweed appliance image become: true ansible.builtin.get_url: dest: "{{ libvirt.images }}" url: "{{ item.url }}" checksum: "{{ appliance.checksum }}:{{ item.url }}.{{ appliance.checksum }}" mode: '0644' loop: - name: "{{ appliance.image }}" url: "{{ appliance.mirror }}/{{ appliance.image }}.{{ appliance.format }}" loop_control: label: "{{ item.name }}" when: - not stat_vm_image.stat.exists - name: Query list of configured libvirt networks become: true community.libvirt.virt_net: command: list_nets register: virt_net_list_nets - name: Fail if required network is not available ansible.builtin.fail: msg: "ERROR: required '{{ libvirt.network }}' missing!" when: - libvirt.network not in virt_net_list_nets.list_nets - name: Query list of libvirt VMs become: true community.libvirt.virt: command: list_vms register: virt_list_vms - name: Create the openSUSE Tumbleweed appliance if not running become: true ansible.builtin.command: >- /usr/local/bin/virt-install --connect qemu:///system --import --name {{ appliance.name }} --osinfo opensusetumbleweed --virt-type kvm --hvm --machine q35 --boot hd --cpu host-passthrough --video vga --console pty,target_type=virtio --noautoconsole --network network={{ libvirt.network }} --rng /dev/urandom --vcpu {{ appliance.vcpus }} --memory {{ appliance.memory_mb }} --cloud-init --disk size={{ appliance.disk_size_gb }}, backing_store={{ libvirt.images }}/{{ appliance.image }}.{{ appliance.format }}, backing_format={{ appliance.format }}, bus=virtio,cache=none --graphics vnc,listen=0.0.0.0 register: virt_install_vm changed_when: - "virt_install_vm.rc == 0" when: - ('tumbleweed' not in virt_list_vms.list_vms) - name: Query list of libvirt VMs become: true community.libvirt.virt: command: list_vms register: virt_list_vms - name: Show that Tumbleweed appliance has been created ansible.builtin.debug: msg: "Running VMs: {{ virt_list_vms.list_vms | join(', ') }}" when: - ('tumbleweed' in virt_list_vms.list_vms) 0707010000000A000041ED000000000000000000000003652E3F1A00000000000000000000000000000000000000000000002D00000000ansible-container/examples/ansible/inventory0707010000000B000041ED000000000000000000000003652E3F1A00000000000000000000000000000000000000000000003800000000ansible-container/examples/ansible/inventory/group_vars0707010000000C000041ED000000000000000000000002652E3F1A00000000000000000000000000000000000000000000003C00000000ansible-container/examples/ansible/inventory/group_vars/all0707010000000D000081A4000000000000000000000001652E3F1A00000032000000000000000000000000000000000000005400000000ansible-container/examples/ansible/inventory/group_vars/all/python_interpreter.yaml--- ansible_python_interpreter: /usr/bin/python3 0707010000000E000081A4000000000000000000000001652E3F1A00000085000000000000000000000000000000000000003C00000000ansible-container/examples/ansible/inventory/inventory.yamlalphost_group: hosts: 
alphost: ansible_host: host.containers.internal ansible_python_interpreter: /usr/bin/python3 0707010000000F000081A4000000000000000000000001652E3F1A000009E2000000000000000000000000000000000000002F00000000ansible-container/examples/ansible/network.yml--- - name: Configure Networking hosts: alphost vars: static_nics: - name: enp2s0 ifname: enp2s0 ip4: 192.168.181.3/24 gw4: 192.168.181.2 dns4: - 8.8.8.8 - name: enp3s0 ifname: enp3s0 ip4: 192.168.181.4/24 gw4: 192.168.181.2 dns4: - 8.8.8.8 bonds: - name: bondcon0 ifname: bond0 ip4: 192.168.181.10/24 gw4: 192.168.181.2 mode: active-backup - name: bondcon1 ifname: bond1 ip4: 192.168.181.11/24 gw4: 192.168.181.2 mode: balance-alb bonded_nics: - name: bond0-if1 ifname: enp4s0 master: bond0 - name: bond0-if2 ifname: enp5s0 master: bond0 - name: bond1-if1 ifname: enp6s0 master: bond1 - name: bond1-if2 ifname: enp7s0 master: bond1 tasks: - name: Gather the package facts ansible.builtin.package_facts: manager: auto - name: Ensure NetworkManager is installed ansible.builtin.package: name: "{{ item }}" state: present become: true loop: - NetworkManager - name: Configure NIC community.general.nmcli: conn_name: '{{ item.name }}' ifname: '{{ item.ifname }}' ip4: '{{ item.ip4 }}' gw4: '{{ item.gw4 }}' dns4: '{{ item.dns4 }}' state: present autoconnect: true type: ethernet become: true loop: '{{ static_nics }}' - name: Create bonds community.general.nmcli: type: bond conn_name: '{{ item.name }}' ifname: '{{ item.ifname }}' ip4: '{{ item.ip4 }}' gw4: '{{ item.gw4 }}' mode: '{{ item.mode }}' state: present become: true loop: "{{ bonds }}" - name: Add NICs to bonds community.general.nmcli: type: bond-slave conn_name: '{{ item.name }}' ifname: '{{ item.ifname }}' state: present master: '{{ item.master }}' become: true loop: "{{ bonded_nics }}" - name: Ping test Bond IPs ansible.builtin.command: >- ping -c 1 -W 0.1 {{ item.ip4 | ansible.utils.ipaddr('address') }} loop: "{{ bonds }}" changed_when: false - name: Ping test static nics IPs ansible.builtin.command: >- ping -c 1 -W 0.1 {{ item.ip4 | ansible.utils.ipaddr('address') }} loop: "{{ static_nics }}" changed_when: false 07070100000010000081A4000000000000000000000001652E3F1A00000425000000000000000000000000000000000000003000000000ansible-container/examples/ansible/playbook.yml--- - name: Ensure Alpha Host Setup hosts: alphost tasks: - name: Site | hello world ansible.builtin.command: echo "Hi! Ansible is working" changed_when: false - name: Gather the package facts ansible.builtin.package_facts: manager: auto - name: Print the package facts ansible.builtin.debug: var: ansible_facts.packages - name: Ensure NetworkManager is installed ansible.builtin.package: name: "{{ item }}" state: present become: true with_items: - NetworkManager - name: Deactivate Wireless Network Interfaces ansible.builtin.command: nmcli radio wifi off become: true when: "'NetworkManager' in ansible_facts.packages" changed_when: false - name: Test ssh ansible.builtin.wait_for: host: "{{ ansible_host }}" port: 22 delegate_to: localhost - name: Test webpage access ansible.builtin.uri: url: https://www.example.com return_content: true register: webpage 07070100000011000081A4000000000000000000000001652E3F1A00000818000000000000000000000000000000000000003500000000ansible-container/examples/ansible/setup_cockpit.yml--- # Ansible Playbook: Setup Cockpit Web server on ALP Dolomite # Description: This Ansible playbook automates the deployment of the Cockpit Web server on an ALP Dolomite host. 
# The steps are based on: [https://documentation.suse.com/alp/dolomite/html/alp-dolomite/available-alp-workloads.html#task-run-cockpit-with-podman] # Administering SUSE ALP Dolomite using Cockpit Documentation: [https://documentation.suse.com/alp/dolomite/single-html/cockpit-alp-dolomite/] - name: Setup Cockpit Web server hosts: alphost become: true vars: workload: name: cockpit-ws image: registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/cockpit-ws:latest tasks: - name: Install required packages, if any, for workload {{ workload.name }} ansible.builtin.package: name: "{{ item }}" state: present notify: Reboot loop: - cockpit-bridge - cockpit-tukit - name: Reboot right now if necessary ansible.builtin.meta: flush_handlers - name: Retrieve image for workload {{ workload.name }} containers.podman.podman_image: name: "{{ workload.image }}" state: present - name: Install Cockpit Web server container ansible.builtin.command: >- podman container runlabel install {{ workload.image }} register: workload_runlabel_install changed_when: - ('already exist' not in workload_runlabel_install.stdout) - name: Ensure service can be started for workload {{ workload.name }} ansible.builtin.systemd_service: name: "cockpit.service" state: "started" enabled: true - name: Inform user to access the Cockpit Web user interface ansible.builtin.debug: msg: >- Cockpit Web UI is running on https://{{ ansible_default_ipv4.address }}:9090 Please accept the warning about the self-signed SSL certificate to access it. handlers: - name: Reboot ansible.builtin.reboot: reboot_timeout: 600 post_reboot_delay: 60 07070100000012000081A4000000000000000000000001652E3F1A00000A02000000000000000000000000000000000000003700000000ansible-container/examples/ansible/setup_firewalld.yml--- # Ansible Playbook: Setup firewalld using Podman on SUSE ALP Dolomite # Description: This Ansible playbook automates the deployment of the firewalld using Podman on SUSE ALP Dolomite. # The deployment adds firewall capability to ALP Dolomite to define the trust level of network connections or interfaces. # Key Considerations: # - The container image utilizes the system's dbus instance. Thus, dbus and polkit configuration files must be installed initially. # - The systemd service and its configuration file allow the container to start and stop via systemd with Podman as the container manager. # - The `/usr/local/bin/firewall-cmd` serves as a wrapper to invoke firewall-cmd inside the container, with both Docker and Podman being supported. # Based on: "Running firewalld using Podman on SUSE ALP Dolomite". Documentation available at: # [https://documentation.suse.com/alp/dolomite/single-html/firewalld-podman-alp-dolomite/] - name: Setup firewalld using Podman on SUSE ALP Dolomite hosts: alphost become: true vars: workload: name: firewalld image: registry.opensuse.org/suse/alp/workloads/tumbleweed_images/suse/alp/workloads/firewalld tasks: - name: Gather package facts ansible.builtin.package_facts: manager: "rpm" - name: Fail if firewalld is installed locally ansible.builtin.fail: msg: "Firewalld is installed locally. Please remove it before installing this container." 
when: "'firewalld' in ansible_facts.packages" - name: Retrieve image for workload containers.podman.podman_image: name: "{{ workload.image }}" state: present - name: Initialize the environment ansible.builtin.command: >- podman container runlabel install "{{ workload.image }}" register: workload_runlabel_install changed_when: - "('already exist' not in workload_runlabel_install.stdout)" - name: Ensure polkit daemon is restarted (if necessary) ansible.builtin.service: name: polkit state: restarted when: - "'etc/polkit-1/actions/org.fedoraproject.FirewallD1.policy' in workload_runlabel_install.stdout" - name: Start and enable firewalld using systemd ansible.builtin.service: name: "{{ workload.name }}" state: started enabled: true - name: Display completion message ansible.builtin.debug: msg: >- "Firewalld workload setup complete." "Use the /usr/local/bin/firewall-cmd wrapper to manage the firewalld instance." 07070100000013000081A4000000000000000000000001652E3F1A00000AA3000000000000000000000000000000000000004300000000ansible-container/examples/ansible/setup_gnome_display_manager.yml--- # Ansible Playbook: Deploy and run GNOME Display Manager on ALP Dolomite # Description: This Ansible playbook automates the deployment and operation of the GNOME Display Manager (GDM) on SUSE ALP Dolomite using Podman. # This deployment allows users to run GDM within a container environment, providing a basic GNOME desktop experience. # Based on: "Running the GNOME Display Manager workload using Podman on SUSE ALP Dolomite". # Documentation reference: [https://documentation.suse.com/alp/dolomite/html/alp-dolomite/available-alp-workloads.html#task-run-gdm-with-podman] - name: Deploy and run GNOME Display Manager on ALP Dolomite hosts: alphost become: true vars: workload: name: gdm image: registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/gdm:latest tasks: - name: Install required packages, if any, for workload {{ workload.name }} ansible.builtin.package: name: ['accountsservice', 'systemd-experimental', 'python3-selinux'] state: present notify: Reboot - name: Reboot right now if necessary ansible.builtin.meta: flush_handlers - name: Set SELinux to permissive mode ansible.posix.selinux: policy: targeted state: permissive - name: Retrieve image for workload {{ workload.name }} containers.podman.podman_image: name: "{{ workload.image }}" state: present - name: Apply container runlabel install for workload {{ workload.name }} ansible.builtin.command: >- podman container runlabel install {{ workload.image }} register: workload_runlabel_install notify: Reload systemd daemon changed_when: - ('already exist' not in workload_runlabel_install.stdout) - name: Reload systemd daemon now ansible.builtin.meta: flush_handlers - name: Reload dbus service ansible.builtin.systemd: name: dbus state: reloaded - name: Restart accounts-daemon service ansible.builtin.systemd: name: accounts-daemon state: started enabled: true - name: Start service for workload {{ workload.name }} ansible.builtin.systemd: name: gdm.service state: started enabled: true - name: Display completion message ansible.builtin.debug: msg: >- GNOME Display Manager (GDM) has been successfully deployed and started on ALP Dolomite. After you log in, a basic GNOME environment opens. 
handlers: - name: Reboot ansible.builtin.reboot: reboot_timeout: 600 post_reboot_delay: 60 - name: Reload systemd daemon ansible.builtin.systemd: daemon_reload: true 07070100000014000081A4000000000000000000000001652E3F1A00000651000000000000000000000000000000000000003500000000ansible-container/examples/ansible/setup_grafana.yml--- # Ansible Playbook: Setup Grafana on SUSE ALP Dolomite # Description: This Ansible playbook automates the deployment of Grafana on a SUSE ALP Dolomite host. # The steps include fetching the Grafana image, setting up the Grafana container, and providing access information. # The steps are based on https://documentation.suse.com/alp/dolomite/html/alp-dolomite/available-alp-workloads.html#task-run-grafana-with-podman - name: Setup ALP system for Grafana hosts: alphost become: true vars: workload: name: grafana image: registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/grafana:latest tasks: - name: Retrieve image for Grafana containers.podman.podman_image: name: "{{ workload.image }}" state: present - name: Initialize the environment ansible.builtin.command: >- podman container runlabel install "{{ workload.image }}" register: workload_runlabel_install changed_when: - ('already exist' not in workload_runlabel_install.stdout) - name: Start and enable Grafana using systemd ansible.builtin.service: name: "{{ workload.name }}" state: started enabled: true - name: Display Grafana access information ansible.builtin.debug: msg: - "Please open the Grafana UI at http://{{ ansible_default_ipv4.address }}:3000." - "Log in to Grafana. The default user name and password are both set to 'admin'. After logging in, enter a new password." - "Follow the on-screen prompts to complete the configuration." 07070100000015000081A4000000000000000000000001652E3F1A00000735000000000000000000000000000000000000003D00000000ansible-container/examples/ansible/setup_kea_dhcp_server.yml--- # Ansible Playbook: Manage Kea DHCPV4 Server Workload on ALP Host # Description: This Ansible playbook automates the setup of the Kea DHCPV4 server workload # on an ALP host. It follows the steps documented in the URL provided below. # Kea Workload Documentation: https://build.opensuse.org/package/view_file/SUSE:ALP:Workloads/kea-container/README.md?expand=1 - name: Deploying and Managing the Kea DHCP server workload hosts: alphost become: true vars: workload: name: kea image: registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/kea:latest tasks: - name: Pull the Kea DHCP server container image containers.podman.podman_image: name: "{{ workload.image }}" state: present - name: Install all required parts of the Kea workload ansible.builtin.command: >- podman container runlabel install {{ workload.image }} register: workload_runlabel_install changed_when: - ('already exist' not in workload_runlabel_install.stdout) - name: Add firewall exception rule for DHCP ansible.posix.firewalld: service: dhcp permanent: true state: enabled immediate: true - name: Configure Kea DHCPv4 using template ansible.builtin.template: src: "kea-dhcp4.conf.j2" dest: "/etc/kea/kea-dhcp4.conf" mode: '0644' notify: Reload Kea configuration - name: Start Kea DHCPv4 server using systemd ansible.builtin.systemd: name: kea-dhcp4.service state: started enabled: true handlers: - name: Reload Kea configuration ansible.builtin.command: /usr/local/bin/keactrl reload register: kea_reload_result changed_when: - '"INFO/keactrl: Reloading kea-dhcp4..." 
in kea_reload_result.stdout' 07070100000016000081A4000000000000000000000001652E3F1A00000737000000000000000000000000000000000000003F00000000ansible-container/examples/ansible/setup_kea_dhcpv6_server.yml--- # Ansible Playbook: Manage Kea DHCPV6 Server Workload on ALP Host # Description: This Ansible playbook automates the setup of the Kea DHCPv6 server workload # on an ALP host. It follows the steps documented in the URL provided below. # Kea Workload Documentation: https://build.opensuse.org/package/view_file/SUSE:ALP:Workloads/kea-container/README.md?expand=1 - name: Deploying and Managing the Kea DHCP server workload hosts: alphost become: true vars: workload: name: kea image: registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/kea:latest tasks: - name: Pull the Kea DHCP server container image containers.podman.podman_image: name: "{{ workload.image }}" state: present - name: Install all required parts of the Kea workload ansible.builtin.command: >- podman container runlabel install {{ workload.image }} register: workload_runlabel_install changed_when: - ('already exist' not in workload_runlabel_install.stdout) - name: Add firewall exception rule for DHCP ansible.posix.firewalld: service: dhcpv6 permanent: true state: enabled immediate: true - name: Configure Kea DHCPv6 using template ansible.builtin.template: src: "kea-dhcp6.conf.j2" dest: "/etc/kea/kea-dhcp6.conf" mode: '0644' notify: Reload Kea configuration - name: Start Kea DHCPv6 server using systemd ansible.builtin.systemd: name: kea-dhcp6.service state: started enabled: true handlers: - name: Reload Kea configuration ansible.builtin.command: /usr/local/bin/keactrl reload register: kea_reload_result changed_when: - '"INFO/keactrl: Reloading kea-dhcp4..." in kea_reload_result.stdout' 07070100000017000081A4000000000000000000000001652E3F1A00001210000000000000000000000000000000000000003A00000000ansible-container/examples/ansible/setup_libvirt_host.yml--- # Ansible Playbook: Setup SUSE ALP Dolomite as a libvirt Host # Description: This Ansible playbook automates the setup of a SUSE ALP Dolomite host as a libvirt host. # The steps encompass installing necessary packages for the workload, ensuring system readiness through reboots if necessary, # fetching the required images for kvm-server and kvm-client from the specified registry, and installing tools for both kvm-server # and kvm-client. Subsequent tasks ensure that needed services are stopped, started, or enabled as per the requirements. 
# Documentation reference: [https://documentation.suse.com/alp/dolomite/html/alp-dolomite/available-alp-workloads.html#task-run-kvm-with-podman] # Creating customized VMs using virt-scenario: [https://documentation.suse.com/alp/dolomite/html/alp-dolomite/concept-virt-scenario.html] - name: Setup ALP system as a libvirt host hosts: alphost become: true vars: workload: name: kvm service: kvm-server-container images: kvmserver: registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/kvm-server:latest kvmclient: registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/kvm-client:latest required_pkgs: - kernel-default - "-kernel-default-base" - netcat-openbsd - python3-libvirt-python - python3-lxml - swtpm libvirtd_services: - libvirtd.service - libvirtd-ro.socket - libvirtd-admin.socket - libvirtd-tcp.socket - libvirtd-tls.socket log_and_lock_drivers: - container-virtlogd.service - virtlogd.socket - virtlogd-admin.socket - container-virtlockd.service - virtlockd.socket - virtlockd-admin.socket other_drivers: - qemu - network - nodedev - nwfilter - proxy - secret - storage tasks: - name: Install required packages, if any, for workload ansible.builtin.package: name: "{{ item }}" state: present loop: "{{ workload.required_pkgs }}" notify: Reboot - name: Reboot right now if necessary ansible.builtin.meta: flush_handlers - name: Retrieve images for kvm-server and kvm-client containers.podman.podman_image: name: "{{ item.value }}" state: present loop: "{{ workload.images | dict2items }}" - name: Install tools for kvmserver ansible.builtin.command: >- podman container runlabel install "{{ workload.images.kvmserver }}" register: workload_runlabel_install changed_when: - ('already exist' not in workload_runlabel_install.stdout) - name: Install tools for kvmclient ansible.builtin.command: >- podman container runlabel install "{{ workload.images.kvmclient }}" register: workload_runlabel_install changed_when: - ('already exist' not in workload_runlabel_install.stdout) - name: Ensure libvirtd is stopped and disabled ansible.builtin.systemd_service: name: "{{ item }}" state: stopped enabled: false loop: "{{ workload.libvirtd_services }}" register: service_result failed_when: > service_result is failed and ("Could not find the requested service" not in service_result.msg) - name: Ensure kvm-server-container.service is started and enabled ansible.builtin.systemd_service: name: "{{ workload.service }}" state: started enabled: true notify: Reload systemd - name: Reload systemd right now if necessary ansible.builtin.meta: flush_handlers - name: Enable and start log and lock drivers ansible.builtin.systemd_service: name: "{{ item }}" state: started enabled: true loop: "{{ workload.log_and_lock_drivers }}" - name: Enable and start other drivers ansible.builtin.systemd_service: name: "container-virt{{ item }}d.service" state: started enabled: true loop: "{{ workload.other_drivers }}" - name: Display completion message ansible.builtin.debug: msg: >- ALP system setup as a libvirt host on alptestvm completed successfully. All necessary components are installed and configured for managing virtual machines. 
handlers: - name: Reboot ansible.builtin.reboot: reboot_timeout: 600 post_reboot_delay: 60 - name: Reload systemd ansible.builtin.systemd: daemon_reload: true 07070100000018000081A4000000000000000000000001652E3F1A00000803000000000000000000000000000000000000003700000000ansible-container/examples/ansible/setup_neuvector.yml--- # This Ansible playbook is used to manage the NeuVector workload on a ALP host. # The steps are based on : https://build.opensuse.org/package/view_file/SUSE:ALP:Workloads/neuvector-demo/README.md?expand=1 # and https://documentation.suse.com/alp/micro/html/alp-micro/available-alp-workloads.html#task-run-neuvector-with-podman # The playbook supports setup of NeuVector. - name: Running the NeuVector workload hosts: alphost become: true vars: workload: name: neuvector image: registry.opensuse.org/suse/alp/workloads/bci_containerfiles/suse/alp/workloads/neuvector-demo:latest tasks: - name: Install required packages, if any, for workload {{ workload.name }} ansible.builtin.package: name: python3-selinux state: present notify: Reboot - name: Reboot right now if necessary ansible.builtin.meta: flush_handlers - name: Set SELinux into permissive mode ansible.posix.selinux: policy: targeted state: permissive - name: Retrieve image for workload {{ workload.name }} containers.podman.podman_image: name: "{{ workload.image }}" state: present - name: Execute nevector runlabel INSTALL ansible.builtin.command: >- podman container runlabel install {{ workload.image }} register: workload_runlabel_install changed_when: - ('already exist' not in workload_runlabel_install.stdout) - name: Enable and start NeuVector service ansible.builtin.systemd: name: neuvector.service state: started enabled: true - name: Print message connect to NeuVector ansible.builtin.debug: msg: >- NeuVector is running on https://{{ ansible_default_ipv4.address }}:8443 You need to accept the warning about the self-signed SSL certificate and log in with the following default credentials: admin / admin. 
handlers: - name: Reboot ansible.builtin.reboot: reboot_timeout: 600 post_reboot_delay: 60 07070100000019000041ED000000000000000000000002652E3F1A00000000000000000000000000000000000000000000002D00000000ansible-container/examples/ansible/templates0707010000001A000081A4000000000000000000000001652E3F1A00000680000000000000000000000000000000000000003B00000000ansible-container/examples/ansible/templates/config.ign.j2{#- Based upon ignition config generated by https://opensuse.github.io/fuel-ignition/edit -#} {%- set _keys = [] -%} {%- for _kf in ssh_pub_keys | default([]) -%} {%- set _ = _keys.append(lookup('ansible.builtin.file', _kf)) -%} {%- endfor -%} {%- set unique_keys = _keys | sort | unique -%} { "ignition": { "version": "3.2.0" }, "passwd": { "users": [ { "name": "root", {% if (unique_keys | length) > 0 %} "sshAuthorizedKeys": {{ unique_keys | to_json }}, {% endif %} {# Password is the user's name #} "passwordHash": "$2a$10$FbGb5ARQnuaHiskxcYIOgO9PADKyymvmioHMCoHdfO.eyYePLqBZ2" }, { "name": "test", {% if (unique_keys | length) > 0 %} "sshAuthorizedKeys": {{ unique_keys | to_json }}, {% endif %} {# Password is the user's name #} "passwordHash": "$2a$10$WK21CVEDrqW4QB5FmmeCjuvFlJl7NMCYGRqBCg/WR1932ua8igzIa" } ] }, "storage": { "filesystems": [ { "device": "/dev/disk/by-label/ROOT", "format": "btrfs", "mountOptions": [ "subvol=/@/home" ], "path": "/home", "wipeFilesystem": false } ], "files": [ { "path": "/etc/hostname", "mode": 420, "overwrite": true, "contents": { "source": "data:text/plain;charset=utf-8;base64,{{ appliance.name | b64encode }}" } }, { "path": "/etc/sudoers.d/test", "mode": 420, "overwrite": true, "contents": { "source": "data:text/plain;charset=utf-8;base64,{{ 'test ALL=(ALL:ALL) NOPASSWD:ALL' | b64encode }}" } } ] } } 0707010000001B000081A4000000000000000000000001652E3F1A00000B49000000000000000000000000000000000000003F00000000ansible-container/examples/ansible/templates/kea-dhcp4.conf.j2// Minimal Kea DHCPv6 Configuration Example // For full configuration details, refer to: // https://gitlab.isc.org/isc-projects/kea/-/blob/master/src/bin/keactrl/kea-dhcp6.conf.pre // // Note: This is a simplified configuration. Consult the linked full configuration for comprehensive details. 
{ "Dhcp4": { "valid-lifetime": 4000, "renew-timer": 1000, "rebind-timer": 2000, "subnet4": [ { "subnet": "192.0.2.0/24", "pools": [ { "pool": "192.0.2.1 - 192.0.2.100" } ], "reservations": [ { "hw-address": "1a:1b:1c:1d:1e:1f", "ip-address": "192.0.2.201" }, { "client-id": "01:11:22:33:44:55:66", "ip-address": "192.0.2.202", "hostname": "special-snowflake" }, { "duid": "01:02:03:04:05", "ip-address": "192.0.2.203", "option-data": [ { "name": "domain-name-servers", "data": "10.1.1.202, 10.1.1.203" } ] }, { "client-id": "01:12:23:34:45:56:67", "ip-address": "192.0.2.204", "option-data": [ { "name": "vivso-suboptions", "data": "4491" }, { "name": "tftp-servers", "space": "vendor-4491", "data": "10.1.1.202, 10.1.1.203" } ] }, { "client-id": "01:0a:0b:0c:0d:0e:0f", "ip-address": "192.0.2.205", "next-server": "192.0.2.1", "server-hostname": "hal9000", "boot-file-name": "/dev/null" }, { "flex-id": "'s0mEVaLue'", "ip-address": "192.0.2.206" } ] } ], "loggers": [ { "name": "kea-dhcp4", "output_options": [ { "output": "stdout" } ], "severity": "INFO", "debuglevel": 0 } ] } } 0707010000001C000081A4000000000000000000000001652E3F1A00001194000000000000000000000000000000000000003F00000000ansible-container/examples/ansible/templates/kea-dhcp6.conf.j2// Minimal Kea DHCPv6 Configuration Example // For full configuration details, refer to: // https://gitlab.isc.org/isc-projects/kea/-/blob/master/src/bin/keactrl/kea-dhcp6.conf.pre // // Note: This is a simplified configuration. Consult the linked full configuration for comprehensive details. { "Dhcp6": { "interfaces-config": { "interfaces": [] }, "control-socket": { "socket-type": "unix", "socket-name": "/tmp/kea6-ctrl-socket" }, "lease-database": { "type": "memfile", "lfc-interval": 3600 }, "expired-leases-processing": { "reclaim-timer-wait-time": 10, "flush-reclaimed-timer-wait-time": 25, "hold-reclaimed-time": 3600, "max-reclaim-leases": 100, "max-reclaim-time": 250, "unwarned-reclaim-cycles": 5 }, "renew-timer": 1000, "rebind-timer": 2000, "preferred-lifetime": 3000, "valid-lifetime": 4000, "option-data": [ { "name": "dns-servers", "data": "2001:db8:2::45, 2001:db8:2::100" }, { "code": 12, "data": "2001:db8::1" }, { "name": "new-posix-timezone", "data": "EST5EDT4\\,M3.2.0/02:00\\,M11.1.0/02:00" }, { "name": "preference", "data": "0xf0" }, { "name": "bootfile-param", "data": "root=/dev/sda2, quiet, splash" } ], "subnet6": [ { "id": 1, "subnet": "2001:db8:1::/64", "pools": [ { "pool": "2001:db8:1::/80" } ], "pd-pools": [ { "prefix": "2001:db8:8::", "prefix-len": 56, "delegated-len": 64 } ], "option-data": [ { "name": "dns-servers", "data": "2001:db8:2::dead:beef, 2001:db8:2::cafe:babe" } ], "reservations": [ { "duid": "01:02:03:04:05:0A:0B:0C:0D:0E", "ip-addresses": ["2001:db8:1::100"] }, { "hw-address": "00:01:02:03:04:05", "ip-addresses": ["2001:db8:1::101"], "option-data": [ { "name": "dns-servers", "data": "3000:1::234" }, { "name": "nis-servers", "data": "3000:1::234" } ], "client-classes": ["special_snowflake", "office"] }, { "duid": "01:02:03:04:05:06:07:08:09:0A", "ip-addresses": ["2001:db8:1:0:cafe::1"], "prefixes": ["2001:db8:2:abcd::/64"], "hostname": "foo.example.com", "option-data": [ { "name": "vendor-opts", "data": "4491" }, { "name": "tftp-servers", "space": "vendor-4491", "data": "3000:1::234" } ] }, { "flex-id": "'somevalue'", "ip-addresses": ["2001:db8:1:0:cafe::2"] } ] } ], "loggers": [ { "name": "kea-dhcp6", "output_options": [ { "output": "@localstatedir@/log/kea-dhcp6.log" } ], "severity": "INFO", "debuglevel": 0 } ] } } 
0707010000001D000081A4000000000000000000000001652E3F1A000000A0000000000000000000000000000000000000002600000000ansible-container/hosts_alphost_group
# Define an inventory group for the ALP (podman) host
[alphost_group]
alphost ansible_host=host.containers.internal ansible_python_interpreter=/usr/bin/python3
0707010000001E000081A4000000000000000000000001652E3F1A000006FA000000000000000000000000000000000000002000000000ansible-container/label-install
#!/bin/bash

create_command_bin() {
    SCRIPT=$1
    if [ ! -e ${TARGET_BIN}/${SCRIPT} ]; then
        cd ${TARGET_BIN}; ln -s ansible-wrapper.sh ${SCRIPT}
        # Verify the symlink was actually created
        if [ ! -e ${TARGET_BIN}/${SCRIPT} ]; then
            echo "Failed to create ${TARGET_BIN}/${SCRIPT}"
            exit 1
        fi
    else
        echo "${TARGET_BIN}/${SCRIPT} already exists, will not update it"
    fi
}

# Commands to create
COMMANDS="ansible \
    ansible-config \
    ansible-console \
    ansible-galaxy \
    ansible-playbook \
    ansible-vault \
    ansible-community \
    ansible-connection \
    ansible-doc \
    ansible-inventory \
    ansible-test \
    ansible-lint \
    ansible-pull"

# Determine target root directory:
# either /usr/local/bin or the current user's ~/bin
if [ -d /host/usr/local/bin ]; then
    TARGET_ROOT=/host/usr/local
    IMAGE_CONF_DIR=etc/default
    IMAGE_CONF_FILE=ansible-container
elif [ -d /host/bin ]; then
    TARGET_ROOT=/host
    IMAGE_CONF_DIR=.config/ansible-container
    IMAGE_CONF_FILE=image
else
    echo "could not determine copy target"
    exit 1
fi

TARGET_BIN=${TARGET_ROOT}/bin
TARGET_SHARE=${TARGET_ROOT}/share/ansible-container
TARGET_CONF_DIR=${TARGET_ROOT}/${IMAGE_CONF_DIR}

cp -v /container/ansible-wrapper.sh ${TARGET_BIN}/ansible-wrapper.sh

for COMMAND in ${COMMANDS}; do
    create_command_bin ${COMMAND}
done

# Create container share area under ${TARGET_SHARE} if it doesn't exist
if [ ! -d ${TARGET_SHARE} ]; then
    mkdir -p ${TARGET_SHARE}
fi

# Copy examples to container share area, overwriting any previous content.
cp -av /container/examples ${TARGET_SHARE}/

# Save the container image used to install the container to the appropriate
# conf file
if [ ! -d ${TARGET_CONF_DIR} ]; then
    mkdir -p ${TARGET_CONF_DIR}
fi
echo "IMAGE=${IMAGE}" > ${TARGET_CONF_DIR}/${IMAGE_CONF_FILE}
0707010000001F000081A4000000000000000000000001652E3F1A000002FB000000000000000000000000000000000000002200000000ansible-container/label-uninstall
#!/bin/bash

delete_file() {
    FILE=$1
    if [[ -e "/host/bin/${FILE}" || -L "/host/bin/${FILE}" ]]; then
        /usr/bin/rm -vf /host/bin/${FILE}
    elif [[ -e "/host/usr/local/bin/${FILE}" || -L "/host/usr/local/bin/${FILE}" ]]; then
        /usr/bin/rm -vf /host/usr/local/bin/${FILE}
    else
        echo "${FILE} not present, nothing to remove"
    fi
}

COMMANDS="ansible-wrapper.sh \
    ansible \
    ansible-config \
    ansible-console \
    ansible-galaxy \
    ansible-playbook \
    ansible-vault \
    ansible-community \
    ansible-connection \
    ansible-doc \
    ansible-inventory \
    ansible-test \
    ansible-lint \
    ansible-pull"

for COMMAND in ${COMMANDS}; do
    delete_file ${COMMAND}
done

if [ -d /host/usr/local/share/ansible-container ]; then
    rm -rf /host/usr/local/share/ansible-container
fi

exit 0
07070100000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000B00000000TRAILER!!!144 blocks
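For reference, a minimal usage sketch of the wrapper commands that label-install creates. It assumes the image's INSTALL runlabel invokes label-install on a host where /usr/local/bin exists (so the symlinks and examples land in the system-wide locations used above), and that SSH access from the container to the alphost inventory host defined in hosts_alphost_group is already configured; the image reference is illustrative only:

```bash
# Illustrative image reference; use the image actually published for this package.
IMAGE=registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/ansible:latest

# Run the INSTALL runlabel, which (on this image) is assumed to execute
# /container/label-install and create the ansible* symlinks to
# ansible-wrapper.sh in /usr/local/bin.
podman container runlabel install ${IMAGE}

# The wrapper commands forward the usual Ansible CLI tools into the container:
ansible --version
ansible alphost -m ping   # 'alphost' comes from hosts_alphost_group

# Run one of the bundled example playbooks copied by label-install:
ansible-playbook /usr/local/share/ansible-container/examples/ansible/setup_neuvector.yml
```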