File yomi-0.0.1+git.1630589391.4557cfd.obscpio of Package yomi-formula

07070100000000000081A40000000000000000000000016130D1CF0000050A000000000000000000000000000000000000002D00000000yomi-0.0.1+git.1630589391.4557cfd/.gitignore# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
#  Usually these files are written by a python script from a template
#  before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
.pytest_cache/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
.python-version

# celery beat schedule file
celerybeat-schedule

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# emacs
*~
07070100000001000081A40000000000000000000000016130D1CF00002C5D000000000000000000000000000000000000002A00000000yomi-0.0.1+git.1630589391.4557cfd/LICENSE                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!)  The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
07070100000002000081A40000000000000000000000016130D1CF0000B7E4000000000000000000000000000000000000002C00000000yomi-0.0.1+git.1630589391.4557cfd/README.md# Yomi - Yet one more installer

Table of contents
=================
* [Yomi - Yet one more installer](#yomi---yet-one-more-installer)
   * [What is Yomi](#what-is-yomi)
   * [Overview](#overview)
   * [Installing and configuring salt-master](#installing-and-configuring-salt-master)
      * [Other ways to install salt-master](#other-ways-to-install-salt-master)
      * [Looking for the pillar](#looking-for-the-pillar)
      * [Enabling auto-sign](#enabling-auto-sign)
      * [Salt API](#salt-api)
   * [The Yomi formula](#the-yomi-formula)
      * [Looking for the pillar in Yomi](#looking-for-the-pillar-in-yomi)
      * [Enabling auto-sing in Yomi](#enabling-auto-sing-in-yomi)
      * [Salt API in Yomi](#salt-api-in-yomi)
         * [Real time monitoring in Yomi](#real-time-monitoring-in-yomi)
   * [Booting a new machine](#booting-a-new-machine)
      * [The ISO image](#the-iso-image)
      * [PXE Boot](#pxe-boot)
      * [Finding the master node](#finding-the-master-node)
      * [Setting the minion ID](#setting-the-minion-id)
      * [Adding user provided configuration](#adding-user-provided-configuration)
      * [Container](#container)
   * [Basic operations](#basic-operations)
      * [Getting hardware information](#getting-hardware-information)
      * [Configuring the pillar](#configuring-the-pillar)
      * [Cleaning the disks](#cleaning-the-disks)
      * [Applying the yomi state](#applying-the-yomi-state)
   * [Pillar reference for Yomi](#pillar-reference-for-yomi)
      * [config section](#config-section)
      * [partitions section](#partitions-section)
      * [lvm section](#lvm-section)
      * [raid section](#raid-section)
      * [filesystems section](#filesystems-section)
      * [bootloader section](#bootloader-section)
      * [software section](#software-section)
      * [suseconnect section](#suseconnect-section)
      * [salt-minion section](#salt-minion-section)
      * [services section](#services-section)
      * [networks section](#networks-section)
      * [users section](#users-section)

# What is Yomi

Yomi (yet one more installer) is a new proposal for an installer for
the [open]SUSE family. It is designed as a
[SaltStack](https://www.saltstack.com/) state, and expected to be used
in situations were unattended installations for heterogeneous nodes is
required, and where some bits of intelligence in the configuration
file can help to customize the installation.

Being also a Salt state makes the installation process one more step
during the provisioning stage, making on Yomi a good candidate for
integration in any workflow were SaltStack is used.


# Overview

To execute Yomi we need a modern version of Salt, as we need special
features are only on the
[master](https://github.com/saltstack/salt/tree/master) branch of
Salt. Technically we can use the last released version of Salt for
salt-master, but for the minions we need the most up-to-date
version. The good news is that most of the patches are currently
merged in the openSUSE package of Salt.

Yomi is developed in
[OBS](https://build.opensuse.org/project/show/systemsmanagement:yomi),
and actually consists on two components:

* [yomi-formula](https://build.opensuse.org/package/show/systemsmanagement:yomi/yomi-formula):
  contains the Salt states and modules requires to drive an
  installation. The [source code](https://github.com/openSUSE/yomi) of
  the project in available under the openSUSE group in GitHub.
* [openSUSE-Tubleweed-Yomi](https://build.opensuse.org/package/show/systemsmanagement:yomi/openSUSE-Tumbleweed-Yomi):
  is the image that can be used too boot the new nodes, that includes
  the `salt-minion` service already configured. There are two versions
  of this image, one that is used as a LiveCD image and other designed
  to be used from a PXE Boot server.

The installation process of Yomi will require:

* Install and configure the
  [`salt-master`](#installing-and-configuring-salt-master) service.
* Install the [`yomi-formula`](#the-yomi-formula) package.
* Prepare the [pillar](#pillar-in-yomi) for the new installations.
* Boot the new systems with the [ISO image](#the-iso-image) or via
  [PXE boot](#pxe-boot)

Currently Yomi support the installation under x86_64 and ARM64
(aarch64) with EFI.


# Installing and configuring salt-master

SaltStack can be deployed with different architectures. The
recommended one will require the `salt-master` service.

```bash
zypper in salt-master

systemctl enable --now salt-master.service
```

## Other ways to install salt-master

For different ways of installation, read the [official
documentation](https://docs.saltstack.com/en/latest/topics/installation/index.html). For
example, for development purposes installing it inside a virtual
environment can be a good idea:

```bash
python3 -mvenv venv

source venv/bin/activate

pip install --upgrade pip
pip install salt

# Create the basic layout and config files
mkdir -p venv/etc/salt/pki/{master,minion} \
      venv/etc/salt/autosign_grains \
      venv/var/cache/salt/master/file_lists/roots

cat <<EOF > venv/etc/salt/master
root_dir: $(pwd)/venv

file_roots:
  base:
    - $(pwd)/srv/salt

pillar_roots:
  base:
    - $(pwd)/srv/pillar
EOF
```

## Looking for the pillar

Salt pillar are the data that the Salt states use to decide the
actions that needs to be done. For example, in the case of Yomi the
typical data will be the layout of the hard disks, the software
patterns that will be installed, or the users that will be
created. For a complete explanation of the pillar required by Yomi,
check the section [Pillar in Yomi](#pillar-in-yomi)

By default Salt will search the states in `/srv/salt`, and the pillar
in `/srv/pillar`, as established by `file_roots` and `pillar_roots`
parameters in the default configuration file (`/etc/salt/master`).

To indicate a different place where to find the pillar, we can add a
new snippet in the `/etc/salt/master.d` directory:

```bash
cat <<EOF > /etc/salt/master.d/pillar.conf
pillar_roots:
  base:
    - /srv/pillar
	- /usr/share/yomi/pillar
EOF
```

The `yomi-formula` package already contains an example of such
configuration. Check section [Looking for the pillar in
Yomi](#looking-for-the-pillar-in-yomi)

## Enabling auto-sign

To simplify the discovery and key management of the minions, we can
use the auto-sign feature of Salt. To do that we need to add a new
file in `/etc/salt/master.d`.

```bash
echo "autosign_grains_dir: /etc/salt/autosign_grains" > \
     /etc/salt/master.d/autosign.conf
```

The Yomi ISO image available in Factory already export some UUIDs
generated for each minion, so we need to list into the master all the
possible valid UUIDs.

```bash
mkdir -p /etc/salt/autosign_grains

for i in $(seq 0 9); do
  echo $(uuidgen --md5 --namespace @dns --name http://opensuse.org/$i)
done > /etc/salt/autosign_grains/uuid
```

The `yomi-formula` package already contains an example of such
configuration. Check section [Enabling auto-sing in
Yomi](#enabling-auto-sing-in-yomi)

## Salt API

The `salt-master` service can be accessed via a REST API, provided by
an external tool that needs to be enabled.

```bash
zypper in salt-api

systemctl enable --now salt-api.service
```

There are different options to configure the `salt-api` service, but
is safe to choose `CherryPy` as a back-end to serve the requests of
Salt API.

We need to configure this service to listen to one port, for example
8000, and to associate an authorization mechanism. Read the Salt
documentation about this topic for different options.

```bash
cat <<EOF > /etc/salt/master.d/salt-api.conf
rest_cherrypy:
  port: 8000
  debug: no
  disable_ssl: yes
EOF

cat <<EOF > /etc/salt/master.d/eauth.conf
external_auth:
  file:
    ^filename: /etc/salt/user-list.txt
    salt:
      - .*
      - '@wheel'
      - '@runner'
      - '@jobs'
EOF

echo "salt:linux" > /etc/salt/user-list.txt
```

The `yomi-formula` package already contains an example of such
configuration. Check section [Salt API in Yomi](#salt-api-in-yomi)


# The Yomi formula

The states and modules required by Salt to drive an installation can
be installed where the `salt-master` resides:

```bash
zypper in yomi-formula
```

This package will install the states in
`/usr/share/salt-formulas/states`, some pillar examples in
`/usr/share/yomi/pillar` and configuration files in `/usr/share/yomi`.

## Looking for the pillar in Yomi

Yomi expect from the pillar to be a normal YAML document, optionally
generated by a Jinja template, as is usual in Salt.

The schema of the pillar is described in the section [Pillar reference
for Yomi](#pillar-reference-for-yomi), but the `yomi-formula` package
provides a set of examples that can be used to deploy MicroOS
installations, Kubic, LVM, RAID or simple openSUSE Tumbleweed ones.

In order to `salt-master` can find the pillar, we need to change the
`pillar_roots` entry in the configuration file, or use the one
provided by the package:

```bash
cp -a /usr/share/yomi/pillar.conf /etc/salt/master.d/
systemctl restart salt-master.service
```

## Enabling auto-sing in Yomi

The images generated by the Open Build Service that are ready to be
used together with Yomi contains a list a random UUID, that can be
used as a auto-sing grain in `salt-master`.

We can enable this feature adding the configuration file provided by
the package:

```bash
cp /usr/share/yomi/autosign.conf /etc/salt/master.d/
systemctl restart salt-master.service
```

## Salt API in Yomi

As described in the section [Salt API](#salt-api), we need to enable
the `salt-api` service in order to provide a REST API service to
`salt-minion`.

This service is used by Yomi to monitor the installation, reading the
event bus of Salt. To enable the real-time events we need to enable
set `events` field to `yes` in the configuration section of the
pillar.

We can enable this service easily (after installing the `salt-api`
package and the dependencies) using the provided configuration file:

```bash
cp /usr/share/yomi/salt-api.conf /etc/salt/master.d/
systemctl restart salt-master.service
```

Feel free to edit `/etc/salt/master.d/salt-api.conf` and provide the
required certificates to enable the SSL connection, an use a different
authorization mechanism. The current one is based on reading the file
`/usr/share/yomi/user-list.txt`, that is storing the password in plain
text. So please, *do not* use this in production.

### Real time monitoring in Yomi

Once we check that in our `config` of the pillar contains this:

```yaml
config:
  events: yes
```

We can launch the `yomi-monitor` tool.

```bash
export SALTAPI_URL=http://localhost:8000
export SALTAPI_EAUTH=file
export SALTAPI_USER=salt
export SALTAPI_PASS=linux

yomi-monitor -r -y
```

The `yomi-monitor` tool store in a local cache the authentication
tokens generated by Salt API. This will accelerate the next connection
to the service, but sometimes can cause authentication errors (for
example, when the cache is in place but the salt-master get
reinstalled). The option `-r` makes sure that this cache is removed
before connection. Check the help option of the tool for more
information.


# Booting a new machine

As described in the previous sections, Yomi is a set of Salt states
that are used to drive the installation of a new operating system. To
take full control of the system where the installation will be done,
you will need to boot from an external system that provides an already
configured `salt-minion`, and a set of CLI tools required during the
installation.

We can deploy all the requirements using different mechanisms. One,
for example, is via PXE boot. We can build a server that will deliver
the Linux `kernel` and an `initrd` will all the required
software. Another alternative is to have an already live ISO image
that you use to boot from the USB port.

There is an already available image that contains all the requirements
in
[Factory](https://build.opensuse.org/package/show/openSUSE:Factory/openSUSE-Tumbleweed-Yomi). This
is an image build from openSUSE Tumbleweed repositories that includes
a very minimal set of tools, including the openSUSE version of
`salt-minion`.

To use the last version of the image, together with the last version
of `salt-minion` that includes all the patches that are under review
in the SaltStack project, you can always use the version from the
[devel
project](https://build.opensuse.org/package/show/systemsmanagement:yomi/openSUSE-Tumbleweed-Yomi)

Note that this image is a `_multibuild` one, and generates two
different images. One is a LiveCD ISO image, ready to be booted from
USB or DVD, and the other one is a PXE Boot ready image.

## The ISO image

The ISO image is a LiveCD that can be booted from USB or from DVD, and
the last version can be always be downloaded from:

```bash
wget https://download.opensuse.org/repositories/systemsmanagement:/yomi/images/iso/openSUSE-Tumbleweed-Yomi.x86_64-livecd.iso
```

This image do not have root password, so if we have physical access to
the node we can become root locally.  The `sshd` service is enabled
during boot time but for security reasons the user `root` cannot
access via SSH (`PermitEmptyPasswords` is not set).  To gain remote
access to `root` we need to set the kernel command line parameter
`ym.sshd=1` (for example, via PXE Boot).

## PXE Boot

The second image available is a OEM ramdisk that can be booted from
PXE Boot.

To install the image we first need to download the file
`openSUSE-Tumbleweed-Yomi.x86_64-${VERSION}-pxeboot-Build${RELEASE}.${BUILD}.install.tar`
from the Factory, or directly from the development project.

We need to start the `sftpd` service or use `dnsmasq` to behave also
as a tftp server. There is some documentation in the [openSUSE
wiki](https://en.opensuse.org/SDB:PXE_boot_installation), and if you
are using QEMU you can also check the appendix document.

```bash
mkdir -p /srv/tftpboot/pxelinux.cfg
cp /usr/share/syslinux/pxelinux.0 /srv/tftpboot

cd /srv/tftpboot
tar -xvf $IMAGE

cat <<EOF > /srv/tftpboot/pxelinux.cfg/default
default yomi
prompt   1
timeout  30

label yomi
  kernel pxeboot.kernel
  append initrd=pxeboot.initrd.xz rd.kiwi.install.pxe rd.kiwi.install.image=tftp://${SERVER}/openSUSE-Tumbleweed-Yomi.xz rd.kiwi.ramdisk ramdisk_size=1048576
EOF
```

This image is based on Tumbleweed, that leverage by default the
predictable network interface name.  If your image is based on a
different one, be sure to add `net.ifnames=1` at the end of the
`append` section.

## Finding the master node

The `salt-minion` configuration in the Yomi image will search the
`salt-master` system under the `salt` name. Is expected that the local
DNS service will resolve the `salt` name to the correct IP address.

During boot time of the Yomi image we can change the address where is
expected to find the master node. To do that we can enter under the
GRUB menu the entry `ym.master=my_master_address`. For example
`ym.master=10.0.2.2` will make the minion to search the master in the
address `10.0.2.2`.

An internal systemd service in the image will detect this address and
configure the `salt-minion` accordingly.

Under the current Yomi states, this address will be copied under the
new installed system, together with the key delivered by the
`salt-master` service. This means that once the system is fully
installed with the new operating system, the new `salt-minion` will
find the master directly after the first boot.

## Setting the minion ID

In a similar way, during the boot process we can set the minion ID
that will be assigned to the `salt-minion`. Using the parameter
`ym.minion_id`. For example, `ym.minion_id=worker01` will set the
minion ID for this system as `worker01`.

The rules for the minion ID are a bit more complicated. Salt, by
default, set the minion ID equal to the FQDN or the IP of the node if
no ID is specified. This cannot be a good idea if the IP changes, so
the current rules are:

* The value from `ym.minion_id` boot parameter.
* The FQDN hostname of the system, if is different from localhost.
* The MAC address of the first interface of the system.

## Adding user provided configuration

Sometimes we need to inject in the `salt-minion` some extra
configuration, before the service runs. For example, we might need to
add some grains, or enable some feature in the `salt-minion` service
running inside the image.

To do that we have to options: we can pass an URL with the content, or
we can add the full content as a parameter during the boot process.

To pass an URL we should use `ym.config_url` parameter. For example,
`ym.config_url=http://server.com/pub/myconfig.cfg` will download the
configuration file, and will store it under the default name
`config_url.cfg` in `/etc/salt/minion.d`. We can set a different name
from the default via the parameter `ym.config_url_name`.

In a similar way we can use the parameter `ym.config` to declare the
full content of the user provided configuration file. You need to use
quotes to mark the string and escaped control codes to indicate new
lines or tabs, like `ym.config="grains:\n my_grain: my_value"`. This
will create a file named `config.cfg`, and the name can be overwritten
with the parameter `ym.config_name`.

## Container

Because the versatility of Salt, is possible to execute the modules
that belong to the `salt-minion` service Yomi without the requirement
of any `salt-master` nor `salt-minion` service running. We could
launch the installation via only the `salt-call` command in local
mode.

Because of that, es possible to deliver Yomi as a single container,
composed of the different Salt and Yomi modules and states.

We can boot a machine using any mechanism, like a recovery image, and
use `podman` to register the Yomi container. This container will be
executed as a privileged one, mapping the external devices inside the
container space.

To register the container we can do:

```bash
podman pull registry.opensuse.org/systemsmanagement/yomi/images/opensuse/yomi:latest
```

Is recommended to create a local pillar directory;

```bash
mkdir pillar
```

Once we have the pillar data, we can launch the installer:

```bash
podman run --privileged --rm \
  -v /dev:/dev \
  -v /run/udev:/run/udev \
  -v ./pillar:/srv/pillar \
  <CONTAINER_ID> \
  salt-call --local state.highstate
```


# Basic operations

Once `salt-master` is configured and running, the `yomi-formula`
states are available and a new system is booted with a up-to-date
`salt-minion`, we can start to operate with Yomi.

The usual process is simple: describe the pillar information and apply
the `yomi` state to the node or nodes. Is not relevant how the pillar
was designed (maybe using a smart template that cover all the cases or
writing a raw YAML that only covers one single installation).  In this
section we will provide some hints about how get information and can
help in this process.

## Getting hardware information

The provided pillar are only an example of what we can do with
Yomi. Eventually we need to adapt them based on the hardware that we
have.

We can discover the hardware configuration with different
mechanism. One is get the `grains` information directly from the
minion:

```bash
salt node grains.items
```

We can get more detailed information using other Salt modules, like
`partition.list`, `network.interfaces` or `udev.info`.

With Yomi we provided a simple interface to `hwinfo` that provides in
a single report some of the information that is required to make
decisions about the pillar.

```bash
# Synchronize all the modules to the minion
salt node saltutil.sync_all

# Get a short report about some devices
salt node devices.hwinfo

# Get a detailled report about some devices
salt node devices.hwinfo short=no
```

## Configuring the pillar

The package `yomi-formula` provides some pillar examples that can be
used as a reference when you are creating your own profiles.

Salt search the pillar information in the directories listed in the
`pillar_roots` configuration entry, and using the snippet from the
section [Pillar in Yomi](#pillar-in-yomi), we can make those examples
available in our system.

In the case that we want to edit those files, we can copy them in a
different directory and add it to the `pillar_roots` entry.

```bash
mkdir -p /srv/pillar-yomi
cp -a /usr/share/yomi/pillar/* /srv/pillar-yomi

cat <<EOF > /etc/salt/master.d/pillar.conf
pillar_roots:
  base:
    - /srv/pillar-yomi
    - /srv/pillar
EOF
systemctl restart salt-master.service
```

The pillar tree start with the `top.sls` file (there is another
`top.sls` file for the states, do not confuse them).

```yaml
base:
  '*':
    - installer
```

This file is used to map the node with the data that the states will
use later. For this example the file that contain the data is
`installer.sls`, but feel free to choose a different name when you are
creating your own pillar.

This `installer.sls` is used as an entry point for the rest of the
data. Inside the file there is some Jinja templates that can be edited
to define different kinds of installations. This feature is leveraged
by the
[openQA](https://github.com/os-autoinst/os-autoinst-distri-opensuse/tree/master/tests/yomi)
tests, to easily make multiple deployments.

You can edit the `{% set VAR=VAL %}` section to adjust it to your
current profile, or create one from scratch. The files
`_storage.sls.*` are included for different scenarios, and this is the
place where the disk layout is described. Feel free to include it
directly on your pillar, or use a different mechanism to decide the
layout.

## Cleaning the disks

Yomi try to be careful with the current data stored in the disks. By
default will not remove any partition, nor will make an implicit
decision about the device where the installation will run.

If we want to remove the data from the device, we can use the provided
`devices.wipe` execution module.

```bash
# List the partitions
salt node partition.list /dev/sda

# Make sure that the new modules are in the minion
salt node saltutil.sync_all

# Remove all the partitions and the filesystem information
salt node devices.wipe /dev/sda
```

To wipe all the devices defined in the pillar at once, we can apply
the `yomi.storage.wipe` state.

```bash
# Make sure that the new modules are in the minion
salt node saltutil.sync_all

# Remove all the partitions and the filesystem information
salt node state.apply yomi.storage.wipe
```

## Applying the yomi state

Finally, to install the operating system defined by the pillar into
the new node, we need to apply the high-state:

```bash
salt node state.apply yomi
```

If we have a `top.sls` file similar to this example, living in
`/srv/salt` or in any other place where `file_roots` option is
configured:

```yaml
base:
  '*':
    - yomi
```

We can apply directly the high state:

```bash
salt node state.highstate
```

# Pillar reference for Yomi

To install a new node, we need to provide some data to describe the
installation requirements, like the layout of the partitions, file
systems used, or what software to install inside the new
deployment. This data is collected in what is Salt is known as a
[pillar](https://docs.saltstack.com/en/latest/topics/tutorials/pillar.html).

To configure the `salt-master` service to find the pillar, check the
section [Looking for the pillar](#looking-for-the-pillar).

Pillar can be associated with certain nodes in our network, making of
this technique a basic one to map a description of how and what to
install into a node. This mapping is done via the `top.sls` file:

```yaml
base:
  'C7:7E:55:62:83:17':
    - installer
```

In `installer.sls` we will describe in detail the installation
parameters that will be applied to the node which minion-id match with
`C7:7E:55:62:83:17`. Note that in this example we are using the MAC
address of the first interface as a minion-id (check the section
**Enabling Autosign** for an example).

The `installer.sls` pillar consist on several sections, that we can
describe here.

## `config` section

The `config` section contains global configuration options that will
affect the installer.

* `events`: Boolean. Optional. Default: `yes`

  Yomi can fire Salt events before and after the execution of the
  internal states that Yomi use to drive the installation. Using the
  Salt API, WebSockets, or any other mechanism provided by Salt, we
  can listen the event bus and use this information to monitor the
  installer. Yomi provides a basic tool, `yomi-monitor`, that shows
  real time information about the installation process.

  To disable the events, set this parameter to `no`.

  Note that this option will add three new states for each single Yomi
  state. One extra state is executed always before the normal state,
  and is used to signalize that a new state will be executed. If the
  state is successfully terminated, a second extra state will send an
  event to signalize that the status of the state is positive. But if
  the state fails, a third state will send the fail signal. All those
  extra states will be showed in the final report of Salt.

* `reboot`: String. Optional. Default: `yes`

  Control the way that the node will reboot. There are three possible
  values:

  * `yes`: Will produce a full reboot cycle. This value can be
    specified as the "yes" string, or the `True` boolean value.

  * `no`: Will no reboot after the installation.

  * `kexec`: Instead of rebooting, reload the new kernel installed in
    the node.

  * `halt`: The machine will halt at the end of the installation.

  * `shutdown`: The machine will shut down at the end of the
    installation.

* `snapper`: Boolean. Optional. Default: `no`

  In Btrfs configurations (and in LVM, but still not implemented) we
  can install the snapper tool, to do automatic snapshots before and
  after updates in the system. One installed, a first snapshot will be
  done and the GRUB entry to boot from snapshots will be added.

* `locale`: String. Optional. Default: `en_US.utf8`

  Sets the system locale, more specifically the LANG= and LC\_MESSAGES
  settings. The argument should be a valid locale identifier, such as
  `de_DE.UTF-8`. This controls the locale.conf configuration file.

* `locale_message`: String. Optional.

  Sets the system locale, more specifically the LANG= and LC\_MESSAGES
  settings. The argument should be a valid locale identifier, such as
  `de_DE.UTF-8`. This controls the locale.conf configuration file.

* `keymap`: String. Optional. Default: `us`

  Sets the system keyboard layout. The argument should be a valid
  keyboard map, such as `de-latin1`. This controls the "KEYMAP" entry
  in the vconsole.conf configuration file.

* `timezone`: String. Optional. Default: `UTC`

  Sets the system time zone. The argument should be a valid time zone
  identifier, such as "Europe/Berlin". This controls the localtime
  symlink.

* `hostname`: String. Optional.

  Sets the system hostname. The argument should be a host name,
  compatible with DNS. This controls the hostname configuration file.

* `machine_id`: String. Optional.

  Sets the system's machine ID. This controls the machine-id file. If
  no one is provided, the one from the current system will be re-used.

* `target`: String. Optional. Default: `multi-user.target`

  Set the default target used for the boot process.

Example:

```yaml
config:
  # Do not send events, useful for debugging
  events: no
  # Do not reboot after installation
  reboot: no
  # Always install snapper if possible
  snapper: yes
  # Set language to English / US
  locale: en_US.UTF-8
  # Japanese keyboard
  keymap: jp
  # Universal Timezone
  timezone: UTC
  # Boot in graphical mode
  target: graphical.target
```

## `partitions` section

Yomi separate partitioning the devices from providing a file system,
creating volumes or building arrays of disks. The advantage of this is
that this, usually, compose better that other approaches, and makes
more easy adding more options that needs to work correctly with the
rest of the system.

* `config`: Dictionary. Optional.

  Subsection that store some configuration options related with the
  partitioner.

  * `label`: String. Optional. Default: `msdos`

    Default label for the partitions of the devices. We use any
    `parted` partition recognized by `mklabel`, like `gpt`, `msdos` or
    `bsd`. For UEFI systems, we need to set it to `gpt`. This value
    will be used for all the devices if is not overwritten.

  * `initial_gap`: Integer. Optional. Default: `0`

    Initial gap (empty space) leaved before the first
    partition. Usually is recommended to be 1MB, so GRUB have room to
    write the code needed after the MBR, and the sectors are aligned
    for multiple SSD and hard disk devices. Also is relevant for the
    sector alignment in devices. The valid units are the same for
    `parted`. This value will be used for all the devices if is not
    overwritten.

* `devices`: Dictionary.

  List of devices that will be partitioned. We can indicate already
  present devices, like `/dev/sda` or `/dev/hda`, but we can also
  indicate devices that will be present after the RAID configuration,
  like `/dev/md0` or `/dev/md/myraid`. We can use any valid device
  name in Linux such as all the `/dev/disk/by-id/...`,
  `/dev/disk/by-label/...`, `/dev/disk/by-uuid/...` and others.

  For each device we have:

  * `label`: String. Optional. Default: `msdos`

    Partition label for the device. The meaning and the possible
    values are identical for `label` in the `config` section.

  * `initial_gap`: Integer. Optional. Default: `0`

    Initial gap (empty space) leave before the first partition for
    this device.

  * `partitions`: Array. Optional.

    Partitions inside a device are described with an array. Each
    element of the array is a dictionary that describe a single
    partition.

    * `number`: Integer. Optional. Default: `loop.index`

      Expected partition number. Eventually this parameter will be
      really optional, when the partitioner can deduce it from other
      parameters. Today is better to be explicit in the partition
      number, as this will guarantee that the partition is found in
      the hard disk if present. If is not set, number will be the
      current index position in the array.

    * `id`: String. Optional.

      Full name of the partition. For example, valid ids can be
      `/dev/sda1`, `/dev/md0p1`, etc. Is optional, as the name can be
      deduced from `number`.

    * `size`: Float or String.

      Size of the partition expressed in `parted` units. All the units
      needs to match for partitions on the same device. For example,
      if `initial_gap` or the first partition is expressed in MB, all
      the sized needs to be expressed in MB too.

      The last partition can use the string `rest` to indicate that
      this partition will use all the free space available. If after
      this another partition is defined, Yomi will show a validation
      error.

    * `type`: String.

      A string that indicate for what this partition will be
      used. Yomi recognize several types:

      * `swap`: This partition will be used for SWAP.
      * `linux`: Partition used to root, home or any data.
      * `boot`: Small partition used for GRUB when in BIOS and `gpt`.
      * `efi`: EFI partition used by GRUB when UEFI.
      * `lvm`: Partition used to build an LVM physical volume.
      * `raid`: Partition that will be a component of an array.

Example:

```yaml
partitions:
  config:
    label: gpt
    initial_gap: 1MB
  devices:
    /dev/sda:
      partitions:
        - number: 1
          size: 256MB
          type: efi
        - number: 2
          size: 1024MB
          type: swap
        - number: 3
          size: rest
          type: linux
```

## `lvm` section

To build an LVM we usually create some partitions (in the `partitions`
section) with the `lvm` type set, and in the `lvm` section we describe
the details. This section is a dictionary, were each key is the name
of the LVM volume, and inside it we can find:

* `devices`: Array.

  List of components (partitions or full devices) that will constitute
  the physical volumes and the virtual group of the LVM. If the
  element of the array is a string, this will be the name of a device
  (or partition) that belongs to the physical group. If the element is
  a dictionary it will contains:

  * `name`: String.

    Name of the device or partition.

  The rest of the elements of the dictionary will be passed to the
  `pvcreate` command.

  Note that the name of the virtual group will be the key where this
  definition is under.

* `volumes`: Array.

  Each element of the array will define:

  * `name`: String.

    Name of the logical volume under the volume group.

  The rest of the elements of the dictionary will be passed to the
  `lvcreate` command. For example, `size` and `extents` are used to
  indicate the size of the volume, and they can include a suffix to
  indicate the units. Those units will be the same used for
  `lvcreate`.

The rest of the elements of this section will be passed to the
`vgcreate` command.

Example:

```yaml
lvm:
  system:
    devices:
      - /dev/sda1
      - /dev/sdb1
      - name: /dev/sdc1
        dataalignmentoffset: 7s
    clustered: 'n'
    volumes:
      - name: swap
        size: 1024M
      - name: root
        size: 16384M
      - name: home
        extents: 100%FREE
```

## `raid` section

In the same way that LVM, to create RAID arrays we can setup first
partitions (with the type `raid`) and configure the details in this
section. Also, similar to the LVM section, the keys a correspond to
the name of the device where the RAID will be created. Valid values
are like `/dev/md0` or `/dev/md/system`.

* `level`: String.

   RAID level. Valid values can be `linear`, `raid0`, `0`, `stripe`,
   `raid1`, `1`, `mirror`, `raid4`, `4`, `raid5`, `5`, `raid6`, `6`,
   `raid10`, `10`, `multipath`, `mp`, `faulty`, `container`.

* `devices`: Array.

  List of devices or partitions that build the array.

* `metadata`: String. Optional. Default: `default`

  Metadata version for the superblock. Valid values are `0`, `0.9`,
  `1`, `1.0`, `1.1`, `1.2`, `default`, `ddm`, `imsm`.

The user can specify more parameters that will be passed directly to
`mdadm`, like `spare-devices` to indicate the number of extra devices
in the initial array, or `chunk` to speficy the chunk size.

Example:

```yaml
raid:
  /dev/md0:
    level: 1
    devices:
      - /dev/sda1
      - /dev/sdb1
      - /dev/sdc1
    spare-devices: 1
    metadata: 1.0
```

## `filesystems` section

The partitions, devices or arrays created in previous sections usually
requires a file system. This section will simply list the device name
and the file system (and properties) that will be applied to it.

* `filesystem`. String.

  File system to apply in the device. Valid values are `swap`,
  `linux-swap`, `bfs`, `btrfs`, `xfs`, `cramfs`, `ext2`, `ext3`,
  `ext4`, `minix`, `msdos`, `vfat`. Technically Salt will search for a
  command that match `mkfs.<filesystem>`, so the valid options can be
  more extensive that the one listed here.

* `mountpoint`. String.

  Mount point where the device will be registered in `fstab`.

* `fat`. Integer. Optional.

  If the file system is `vfat` we can force the FAT size, like 12, 16
  or 32.

* `subvolumes`. Dictionary.

  For `btrfs` file systems we can specify more details.

  * `prefix`. String. Optional.

    `btrfs` sub-volume name where the rest of the sub-volumes will be
    under. For example, if we set `prefix` as `@` and we create a
    sub-volume named `var`, Yomi will create it as `@/var`.

  * `subvolume`. Dictionary.

    * `path`. String.

      Path name for the sub-volume.

	* `copy_on_write`. Boolean. Optional. Default: `yes`

      Value for the copy-on-write option in `btrfs`.

Example:

```yaml
filesystems:
  /dev/sda1:
    filesystem: vfat
    mountpoint: /boot/efi
    fat: 32
  /dev/sda2:
    filesystem: swap
  /dev/sda3:
    filesystem: btrfs
    mountpoint: /
    subvolumes:
      prefix: '@'
      subvolume:
        - path: home
        - path: opt
        - path: root
        - path: srv
        - path: tmp
        - path: usr/local
        - path: var
          copy_on_write: no
        - path: boot/grub2/i386-pc
        - path: boot/grub2/x86_64-efi
```

## `bootloader` section

* `device`: String.

  Device name where GRUB2 will be installed. Yomi will take care of
  detecting if is a BIOS or an UEFI setup, and also if Secure-Boot in
  activated, to install and configure the bootloader (or the shim
  loader)

* `timeout`: Integer. Optional. Default: `8`

  Value for the `GRUB_TIMEOUT` parameter.

* `kernel`: String. Optional. Default: `splash=silent quiet`

  Line assigned to the `GRUB_CMDLINE_LINUX_DEFAULT` parameter.

* `terminal`: String. Optional. Default: `gfxterm`

  Value for the `GRUB_TERMINAL` parameter.

  If the value is set to `serial`, we need to add content to the
  `serial_command` parameter.

  If the value is set to `console`, we can pass the console parameters
  to the `kernel` parameter. For example, `kernel: splash=silent quiet
  console=tty0 console=ttyS0,115200`

* `serial_command`: String. Optional

  Value for the `GRUB_SERIAL_COMMAND` parameter. If there is a value,
  `GRUB_TERMINAL` is expected to be `serial`.

* `gfxmode`: String. Optional. Default: `auto`

  Value for the `GRUB_GFXMODE` parameter.

* `theme`: Boolean. Optional. Default: `no`

  If `yes` the `grub2-branding` package will be installed and
  configured.

* `disable_os_prober`: Boolean. Optional. Default: `False`

  Value for the `GRUB_DISABLE_OS_PROBER` parameter.

Example:

```yaml
bootloader:
  device: /dev/sda
```

## `software` section

We can indicate the repositories that will be registered in the new
installation, and the packages and patterns that will be installed.

* `config`. Dictionary. Optional

  Local configuration for the software section. Except `minimal`,
  `transfer`, and `verify` all the options can be overwritten in each
  repository definition.

  * `minimal`: Boolean. Optional. Default: `no`

    Configure zypper to make a minimal installation, excluding
    recommended, documentation and multi-version packages.

  * `transfer`: Boolean. Optional. Default: `no`

    Transfer the current repositories (maybe defined in the media
    installation) into the installed system. If marked, this step will
    be done early, so any future action could update or replace one of
    the repositories.

  * `verify`: Boolean. Optional. Default: `yes`

    Verify the package key when installing.

  * `enabled`: Boolean. Optional. Default: `yes`

    If the repository is enabled, packages can be installed from
    there. A disabled repository will not be removed.

  * `refresh`: Boolean. Optional. Default: `yes`

    Enable auto-refresh of the repository.

  * `gpgcheck`: Boolean. Optional. Default: `yes`

    Enable or disable the GPG check for the repositories.

  * `gpgautoimport`: Boolean. Optional. Default: `yes`

    If enabled, automatically trust and import public GPG key for the
    repository.

  * `cache`: Boolean. Optional. Default: `no`

    If the cache is enabled, will keep the RPM packages.

* `repositories`. Dictionary. Optional

  Each key of the dictionary will be the alias under where this
  repository is registered, and the key, if is a string, the URL
  associated with it.

  If the key is an dictionary, we can overwrite some of the default
  configuration options set in the `config` section, with the
  exception of `minimal`. There are some more elements that we can set
  for the repository:

  * `url`: String.

    URL of the repository.

  * `name`: String. Optional

    Descriptive name for the repository.

  * `priority`: Integer. Optional. Default: `0`

    Set priority of the repository.

* `packages`. Array. Optional

  List of packages or patters to be installed.

* `image`. Dictionary. Optional

  We can bootstrap the root file system based on a partition image
  generate by KIWI (or any other mechanism), that will be copied into
  the partition that have the root mount point assigned. This can be
  used to speed the installation process.

  Those images needs to contain only the file system and the data. If
  the image contains a boot loader or partition information, the image
  will fail during the resize operation. To validate if the image is
  suitable, a simple `file image.raw` will do.

  * `url`: String.

    URL of the image. As internally we are using curl to fetch the
    image, we can support multiple protocols like `http://`,
    `https://` or `tftp://` among others. The image can be compressed,
    and in that case one of those extensions must to be used to
    indicate the format: [`gz`, `bz2`, `xz`]

  * `md5`|`sha1`|`sha224`|`sha256`|`sha384`|`sha512`: String. Optional

    Checksum type and value used to validate the image. If this field
    is present but empty (only the checksum type, but with no value
    attached), the state will try to fetch the checksum fail from the
    same URL given in the previous field. If the path contains an
    extension for a compression format, this will be replaced with the
    checksum type as a new extension.

    For example, if the URL is `http://example.com/image.xz`, the
    checksum type is `md5`, and no value is provided, the checksum
    will be expected at `http://example.com/image.md5`.

    But if the URL is something like `http://example.com/image.ext4`,
    the checksum will be expected in the URL
    `http://example.com/image.ext4.md5`.

  If the checksum type is provided, the value for the last image will
  be stored in the Salt cache, and will be used to decide if the image
  in the URL is different from the one already copied in the
  partition. If this is the case, no image will be
  downloaded. Otherwise a new image will be copied, and the old one
  will be overwritten in the same partition.

Example:

```yaml
software:
  repositories:
    repo-oss: "http://download.opensuse.org/tumbleweed/repo/oss"
    update:
	  url: http://download.opensuse.org/update/tumbleweed/
	  name: openSUSE Update
  packages:
    - patterns-base-base
    - kernel-default
```

## `suseconnect` section

Very related with the previous section (`software`), we can register
an SLE product and modules using the `SUSEConnect` command.

In order to `SUSEConnect` to succeed, a product needs to be present
already in the system. This imply that the register must happen after
(at least a partial) installation has been done.

As `SUSEConnect` will register new repositories, this also imply that
not all the packages that can be enumerated in the `software` section
can be installed.

To resolve both conflicts, Yomi will first install the packages listed
in the `sofwtare` section, and after the registration, the packages
listed in this `suseconnect` section.

* `config`. Dictionary.

  Local configuration for the section. It is not optional as there is
  at least one parameter that is required for any registration.

  * `regcode`. String.

  Subscription registration code for the product to be registered.

  * `email`. String. Optional.

  Email address for product registration.

  * `url`. String. Optional.

  URL of registration server (e.g. https://scc.suse.com)

  * `version`. String. Optional.

  Version part of the product name. If the product name do not have a
  version, this default value will be used.

  * `arch`. String. Optional.

  Architecture part of the product name. If the product name do not
  have an architecture, this default value will be used.

* `products`. Array. Optional.

  Product names to register. The expected format is
  <name>/<version>/<architecture>. If only <name> is used, the values
  for <version> and <architecture> will be taken from the `config`
  section.

  If the product / module have a different registration code than the
  one declared in the `config` sub-section, we can declare a new one
  via a dictionary.

  * `name`. String. Optional.

    Product names to register. The expected format is
    <name>/<version>/<architecture>. If only <name> is used, the
    values for <version> and <architecture> will be taken from the
    `config` section.

  * `regcode`. String. Optional.

    Subscription registration code for the product to be registered.

* `packages`. Array. Optional

  List of packages or patters to be installed from the different
  modules.

Example:

```yaml
suseconnect:
  config:
    regcode: SECRET-CODE
  products:
    - sle-module-basesystem/15.2/x86_64
    - sle-module-server-applications/15.2/x86_64
    - name: sle-module-live-patching/15.2/x86_64
      regcode: SECRET-CODE
```

## `salt-minion` section

Install and configure the salt-minion service.

* `config`. Boolean. Optional. Default: `no`

  If `yes`, the configuration and cetificates of the new minion will
  be the same that the current minion that is activated. This will
  copy the minion configuration, certificates and grains, together
  with the cached modules and states that are usually synchronized
  before a highstate.

  This option will be replaced in the future with more detailed ones.

Example:

```yaml
salt-minion:
  config: yes
```

## `services` section

We can list the services that will be enabled or disabled during boot
time.

* `enabled`. Array. Optional

  List of services that will be enabled and started during the boot.

* `disabled`. Array. Optional

  List of services that will be exclicitly disabled during the boot.

Example:

```yaml
services:
  enabled:
    - salt-minion
```

## `networks` section

We can list the networks available in the target system. If the list
is not provided, Yomi will try to deduce the network configuration
based on the current setup.

* `interface`. String.

  Name of the interface.

Example:

```yaml
networks:
  - interface: ens3
```

## `users` section

In this section we can list a simple list of users and passwords that
we expect to find once the system is booted.

* `username`. String.

  Login or username for the user.

* `password`. String. Optional.

  Shadow password hash for the user.

* `certificates`. Array. Optional.

  Certificates that will be added to .ssh/authorized_keys. Use only
  the encoded key (remove the "ssh-rsa" prefix and the "user@host"
  suffix).

Example:

```yaml
users:
  - username: root
    password: "$1$wYJUgpM5$RXMMeASDc035eX.NbYWFl0"
  - username: aplanas
    certificates:
      - "AAAAB3NzaC1yc2EAAAADAQABAAABAQDdP6oez825gnOLVZu70KqJXpqL4fGf\
        aFNk87GSk3xLRjixGtr013+hcN03ZRKU0/2S7J0T/dICc2dhG9xAqa/A31Qac\
        hQeg2RhPxM2SL+wgzx0geDmf6XDhhe8reos5jgzw6Pq59gyWfurlZaMEZAoOY\
        kfNb5OG4vQQN8Z7hldx+DBANPbylApurVz6h5vvRrkPfuRVN5ZxOkI+LeWhpo\
        vX5XK3eTjetAwWEro6AAXpGoQQQDjSOoYHCUmXzcZkmIWEubCZvAI4RZ+XCZs\
        +wTeO2RIRsunqP8J+XW4cZ28RZBc9K4I1BV8C6wBxN328LRQcilzw+Me+Lfre\
        eDPglqx"
```
07070100000003000081ED0000000000000000000000016130D1CF00005D24000000000000000000000000000000000000003000000000yomi-0.0.1+git.1630589391.4557cfd/autoyast2yomi#!/usr/bin/python3

# -*- coding: utf-8 -*-
#
# Author: Alberto Planas <aplanas@suse.com>
#
# Copyright 2019 SUSE LLC.
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.

import argparse
import crypt
import json
import logging
from pathlib import Path
import xml.etree.ElementTree as ET


class PathDict(dict):
    def path(self, path, default=None):
        result = self
        for item in path.split("."):
            if item in result:
                result = result[item]
            else:
                return default
        return result


class Convert:
    def __init__(self, control):
        """Construct a Convert class from a control XML object"""
        self.control = control
        # Store the parsed version of the control file
        self._control = None
        self.pillar = {}

    @staticmethod
    def _find(element, name):
        """Find a single element in a XML tree"""
        return element.find("{{http://www.suse.com/1.0/yast2ns}}{}".format(name))

    @staticmethod
    def _get_tag(element):
        """Get element name, without namespace"""
        return element.tag.replace("{http://www.suse.com/1.0/yast2ns}", "")

    @staticmethod
    def _get_type(element):
        """Get element type if any"""
        return element.attrib.get("{http://www.suse.com/1.0/configns}type")

    @staticmethod
    def _get_text(element):
        """Get element text if any"""
        if element is not None and element.text is not None:
            return element.text.strip()

    @staticmethod
    def _get_bool(element):
        """Get element boolean value if any"""
        _text = Convert._get_text(element)
        if _text:
            return _text.lower() == "true"

    @staticmethod
    def _get_int(element):
        """Get element integer value if any"""
        _text = Convert._get_text(element)
        if _text:
            try:
                return int(_text)
            except ValueError:
                pass

    @staticmethod
    def _get(element):
        """Recursively parse the XML tree"""
        type_ = Convert._get_type(element)
        if not type_ and not len(element):
            return Convert._get_text(element)
        elif type_ == "symbol":
            return Convert._get_text(element)
        elif type_ == "boolean":
            return Convert._get_bool(element)
        elif type_ == "integer":
            return Convert._get_int(element)
        elif type_ == "list":
            return [Convert._get(subelement) for subelement in element]
        elif not type_ and len(element):
            return {
                Convert._get_tag(subelement): Convert._get(subelement)
                for subelement in element
            }
        else:
            logging.error("Element type not recognized: %s", type_)

    @staticmethod
    def _parse(element):
        """Parse the XML tree entry point"""
        return PathDict({Convert._get_tag(element): Convert._get(element)})

    def convert(self):
        """Transform a XML control file into a Yomi pillar"""
        self._control = Convert._parse(self.control.getroot())
        self._convert_config()
        self._convert_partitions()
        self._convert_lvm()
        self._convert_raid()
        self._convert_filesystems()
        self._convert_bootloader()
        self._convert_software()
        self._convert_suseconnect()
        self._convert_salt_minion()
        self._convert_services()
        self._convert_users()
        return self.pillar

    def _reboot(self):
        """Detect if a reboot is required"""
        reboot = False
        mode = self._control.path("profile.general.mode", {})
        if mode.get("final_halt") or mode.get("halt"):
            reboot = "shutdown"
        elif mode.get("final_reboot") or mode.get("forceboot"):
            reboot = True
        return reboot

    def _snapper(self):
        """Detect if snapper is required"""
        partitioning = self._control.path("profile.partitioning", PathDict())
        snapper = any(
            drive.get("enable_snapshots", True)
            for drive in partitioning
            if any(
                partition
                for partition in drive.get("partitions", [])
                if partition.get("filesystem", "btrfs") == "btrfs"
            )
        )

        snapper |= self._control.path("profile.bootloader.suse_btrfs", False)
        return snapper

    def _keymap(self):
        """Translate keymap configuration"""
        keymap = self._control.path("profile.keyboard.keymap", "english-us")
        return {
            "english-us": "us",
            "english-uk": "gb",
            "german": "de-nodeadkeys",
            "german-deadkey": "de",
            "german-ch": "ch",
            "french": "fr",
            "french-ch": "ch-fr",
            "french-ca": "ca",
            "cn-latin1": "ca-multix",
            "spanish": "es",
            "spanish-lat": "latam",
            "spanish-lat-cp850": "es",
            "spanish-ast": "es-ast",
            "italian": "it",
            "persian": "ir",
            "portugese": "pt",
            "portugese-br": "br",
            "portugese-br-usa": "us-intl",
            "greek": "gr",
            "dutch": "nl",
            "danish": "dk",
            "norwegian": "no",
            "swedish": "se",
            "finnish": "fi-kotoistus",
            "czech": "cz",
            "czech-qwerty": "cz-qwerty",
            "slovak": "sk",
            "slovak-qwerty": "sk-qwerty",
            "slovene": "si",
            "hungarian": "hu",
            "polish": "pl",
            "russian": "ruwin_alt-UTF-8",
            "serbian": "sr-cy",
            "estonian": "ee",
            "lithuanian": "lt",
            "turkish": "tr",
            "croatian": "hr",
            "japanese": "jp",
            "belgian": "be",
            "dvorak": "us-dvorak",
            "icelandic": "is",
            "ukrainian": "ua-utf",
            "khmer": "khmer",
            "korean": "kr",
            "arabic": "arabic",
            "tajik": "tj_alt-UTF8",
            "taiwanese": "us",
            "chinese": "us",
            "romanian": "ro",
            "us-int": "us-intl",
        }.get(keymap, "us")

    def _convert_config(self):
        """Convert the config section of a pillar"""
        config = self.pillar.setdefault("config", {})

        # Missing fields:
        #  * locale_message
        #  * machine_id

        config["events"] = True
        config["reboot"] = self._reboot()
        config["snapper"] = self._snapper()
        config["locale"] = self._control.path("profile.language.language", "en_US.utf8")
        config["keymap"] = self._keymap()
        config["timezone"] = self._control.path("profile.timezone.timezone", "UTC")
        hostname = self._control.path("profile.networking.dns.hostname")
        if hostname:
            config["hostname"] = hostname
        config["target"] = self._control.path(
            "profile.services-manager.default_target", "multi-user.target"
        )

    def _size(self, partition):
        """Detect the size of a partition"""
        size = partition.get("size")
        return "rest" if size == "max" or not size else size

    def _type(self, partition):
        """Detect the type of a partition"""
        partition_id = partition.get("partition_id")
        if not partition_id:
            filesystem = partition.get("filesystem")
            if filesystem or partition.get("mount"):
                partition_id = 130 if filesystem == "swap" else 131
            elif partition.get("lvm_group"):
                partition_id = 142
            elif partition.get("raid_name"):
                partition_id = 253
            else:
                # 'boot' type if is not a file system, LVM, nor RAID
                return "boot"
        return {130: "swap", 131: "linux", 142: "lvm", 253: "raid", 259: "efi"}[
            partition_id
        ]

    def _label(self, drive):
        """Detect the kind of partition table of a device"""
        disklabel = drive.get("disklabel", "gpt")
        if disklabel and disklabel != "none" and not drive.get("raid_options"):
            return disklabel

    def _convert_partitions(self):
        """Convert the partitions section of a pillar"""
        partitions = self.pillar.setdefault("partitions", {})

        for drive in self._control.path("profile.partitioning", []):
            # If is part of a logical volume, we skip the drive
            if drive.get("is_lvm_vg"):
                continue

            # If the device is missing, we cannot build the pillar
            device = drive.get("device")
            if not device:
                logging.error("Device missing in partitioning")
                continue

            devices = partitions.setdefault("devices", {})
            _device = devices.setdefault(device, {})

            label = self._label(drive)
            if label:
                _device["label"] = label

            _partitions = _device.setdefault("partitions", [])

            for index, partition in enumerate(drive.get("partitions", [])):
                _partition = {}
                _partition["number"] = partition.get("partition_nr", index + 1)
                _partition["size"] = self._size(partition)
                _partition["type"] = self._type(partition)

                if _partition:
                    _partitions.append(_partition)

    def _convert_lvm(self):
        """Convert the lvm section of a pillar"""
        lvm = {}

        for drive in self._control.path("profile.partitioning", []):
            # If the device is missing, we cannot build the pillar
            device = drive.get("device")
            if not device:
                logging.error("Device missing in partitioning")
                continue

            if drive.get("is_lvm_vg"):
                lvm_group = Path(device).name
                group = lvm.setdefault(lvm_group, {})
                volumes = group.setdefault("volumes", [])
                for partition in drive.get("partitions", []):
                    volumes.append(
                        {"name": partition["lv_name"], "size": partition["size"]}
                    )
                # Group parameters
                pesize = drive.get("pesize")
                if pesize:
                    group["physicalextentsize"] = pesize
            else:
                for index, partition in enumerate(drive.get("partitions", [])):
                    lvm_group = partition.get("lvm_group")
                    if lvm_group:
                        partition_nr = partition.get("partition_nr", index + 1)
                        group = lvm.setdefault(lvm_group, {})
                        devices = group.setdefault("devices", [])
                        devices.append("{}{}".format(device, partition_nr))

        if lvm:
            self.pillar["lvm"] = lvm

    def _convert_raid(self):
        """Convert the raid section of a pillar"""
        raid = {}

        for drive in self._control.path("profile.partitioning", []):
            # If the device is missing, we cannot build the pillar
            device = drive.get("device")
            if not device:
                logging.error("Device missing in partitioning")
                continue

            raid_options = drive.get("raid_options")
            if raid_options:
                _device = raid.setdefault(device, {})
                chunk_size = raid_options.get("chunk_size")
                if chunk_size:
                    _device["chunk"] = chunk_size
                parity_algorithm = raid_options.get("parity_algorithm")
                if parity_algorithm:
                    _device["parity"] = parity_algorithm.replace("_", "-")
                _device["level"] = raid_options.get("raid_type", "raid1")
                device_order = raid_options.get("device_order")
                if device_order:
                    _device["devices"] = device_order
                continue

            for index, partition in enumerate(drive.get("partitions", [])):
                raid_name = partition.get("raid_name")
                if raid_name:
                    partition_nr = partition.get("partition_nr", index + 1)
                    _device = raid.setdefault(raid_name, {})
                    devices = _device.setdefault("devices", [])
                    devices.append("{}{}".format(device, partition_nr))

        if raid:
            self.pillar["raid"] = raid

    def _convert_filesystems(self):
        filesystems = self.pillar.setdefault("filesystems", {})

        for drive in self._control.path("profile.partitioning", []):
            # If the device is missing, we cannot build the pillar
            device = drive.get("device")
            if not device:
                logging.error("Device missing in partitioning")
                continue

            for index, partition in enumerate(drive.get("partitions", [])):
                filesystem = {}

                if drive.get("is_lvm_vg"):
                    lv_name = partition["lv_name"]
                    _partition = str(Path(device, lv_name))
                elif drive.get("raid_options"):
                    partition_nr = partition.get("partition_nr", index + 1)
                    _partition = "{}p{}".format(device, partition_nr)
                else:
                    partition_nr = partition.get("partition_nr", index + 1)
                    _partition = "{}{}".format(device, partition_nr)

                _filesystem = partition.get("filesystem")
                if _filesystem:
                    filesystem["filesystem"] = _filesystem

                mount = partition.get("mount")
                if mount:
                    filesystem["mountpoint"] = mount

                subvolumes = partition.get("subvolumes")
                if _filesystem == "btrfs" and subvolumes:
                    _subvolumes = filesystem.setdefault("subvolumes", {})
                    _subvolumes["prefix"] = partition.get("subvolumes_prefix", "@")

                    subvolume = _subvolumes.setdefault("subvolume", [])
                    for _subvolume in subvolumes:
                        if isinstance(_subvolume, str):
                            subvolume.append({"path": _subvolume})
                        else:
                            subvolume.append(_subvolume)

                if _partition and filesystem:
                    filesystems[_partition] = filesystem

    def _kernel(self, bootloader_global):
        append = bootloader_global.get("append", "")
        cpu_mitigations = bootloader_global.get("cpu_mitigations", "")
        if cpu_mitigations:
            cpu_mitigations = (
                "noibrs noibpb nopti nospectre_v2 nospectre_v1 "
                "l1tf=off nospec_store_bypass_disable "
                "no_stf_barrier mds=off mitigations=off"
            )
        else:
            cpu_mitigations = ""
        vgamode = bootloader_global.get("vgamode", "")
        if vgamode and vgamode not in append:
            vgamode = "vga={}".format(vgamode)
        else:
            vgamode = ""

        return " ".join(
            param
            for param in ("splash=silent quiet", append, cpu_mitigations, vgamode)
            if param
        )

    def _convert_bootloader(self):
        """Convert the bootloader section of the pillar"""
        bootloader = self.pillar.setdefault("bootloader", {})

        _global = self._control.path("profile.bootloader.global", {})

        # TODO: If EFI, we will be sure to create a EFI partition

        # TODO: `boot_custom` is not used to store the device
        device = _global.get("boot_custom")
        if device:
            bootloader["device"] = device
        else:
            logging.error("Bootloader device not found in control file")

        timeout = _global.get("timeout")
        if timeout:
            bootloader["timeout"] = timeout

        bootloader["kernel"] = self._kernel(_global)

        terminal = _global.get("terminal")
        if terminal:
            bootloader["terminal"] = terminal

        serial = _global.get("serial")
        if serial:
            bootloader["serial_command"] = serial

        gfxmode = _global.get("gfxmode")
        if gfxmode:
            bootloader["gfxmode"] = gfxmode

        bootloader["theme"] = True

        os_prober = _global.get("os_prober")
        if os_prober is not None:
            bootloader["disable_os_prober"] = not os_prober

    def _repositories(self, add_on):
        return {
            entry["alias"]: entry["media_url"]
            for add_on_type in ("add_on_products", "add_on_others")
            for entry in add_on.get(add_on_type, [])
        }

    def _packages(self, software, include_pre, include_post):
        packages = []

        if include_pre:
            for product in software.get("products", []):
                packages.append("product:{}".format(product))

        if include_pre:
            for pattern in software.get("patterns", []):
                packages.append("pattern:{}".format(pattern))

        if include_post:
            for pattern in software.get("post-patterns", []):
                packages.append("pattern:{}".format(pattern))

        if include_pre:
            for package in software.get("packages", []):
                packages.append(package)

        if include_post:
            for package in software.get("post-packages", []):
                packages.append(package)

        kernel = software.get("kernel")
        if include_pre and kernel:
            packages.append(kernel)

        return packages

    def _convert_software(self):
        """Convert the software section of the pillar"""
        software = self.pillar.setdefault("software", {})

        _software = self._control.path("profile.software", {})

        install_recommended = _software.get("install_recommended")
        if install_recommended is not None:
            config = software.setdefault("config", {})
            config["minimal"] = not install_recommended

        add_on = self._control.path("profile.add-on", {})
        if not add_on:
            logging.error("No repositories will be registered")
        software["repositories"] = self._repositories(add_on)
        software["packages"] = self._packages(
            _software,
            include_pre=True,
            include_post="suse_register" not in self._control["profile"],
        )

    def _products(self, suse_register):
        products = []

        for addon in suse_register.get("addons", []):
            products.append("/".join((addon["name"], addon["version"], addon["arch"])))

        return products

    def _convert_suseconnect(self):
        """Convert the suseconnect section of the pillar"""
        suseconnect = self.pillar.get("suseconnect", {})

        suse_register = self._control.path("profile.suse_register", {})

        if not suse_register:
            return

        config = suseconnect.setdefault("config", {})

        reg_code = suse_register.get("reg_code")
        if reg_code:
            config["regcode"] = reg_code

        email = suse_register.get("email")
        if email:
            config["email"] = email

        reg_server = suse_register.get("reg_server")
        if reg_server:
            config["url"] = reg_server

        suseconnect["products"] = self._products(suse_register)

        software = self._control.path("profile.software", {})
        packages = self._packages(software, include_pre=False, include_post=True)
        if packages:
            suseconnect["packages"] = packages

        if suseconnect:
            self.pillar["suseconnect"] = suseconnect

    def _convert_salt_minion(self):
        """Convert the salt-minion section of the pillar"""
        self.pillar.setdefault("salt-minion", {"configure": True})

    def _services(self, services):
        _services = []
        for service in services:
            if not service.endswith((".service", ".socket", ".timer")):
                service = "{}.service".format(service)
            _services.append(service)

        return _services

    def _convert_services(self):
        """Convert the services section of the pillar"""
        services = self.pillar.get("services", {})

        enable = self._control.path("profile.services-manager.services.enable", [])
        for service in self._services(enable):
            services.setdefault("enabled", []).append(service)

        disable = self._control.path("profile.services-manager.services.disable", [])
        for service in self._services(disable):
            services.setdefault("disabled", []).append(service)

        on_demand = self._control.path(
            "profile.services-manager.services.on_demand", []
        )
        for service in self._services(on_demand):
            services.setdefault("enabled", []).append(
                service.replace(".service", ".socket")
            )
            services.setdefault("disabled", []).append(service)

        if services:
            self.pillar["services"] = services

    @staticmethod
    def _password(user, salt=None):
        password = user.get("user_password")
        if password and not user.get("encrypted"):
            salt = salt if salt else crypt.mksalt(crypt.METHOD_MD5)
            password = crypt.crypt(password, salt)
        return password

    def _certificates(self, user):
        certificates = []
        for certificate in user.get("authorized_keys", []):
            parts = certificate.split()
            for index, part in enumerate(parts):
                if part in ("ssh-rsa", "ssh-dss", "ssh-ed25519") or part.startswith(
                    "ecdsa-sha"
                ):
                    certificates.append(parts[index + 1])
                    break

        return certificates

    def _convert_users(self):
        """Convert the users section of the pillar"""
        users = self.pillar.get("users", [])

        # TODO parse the fullname, uid, gid, etc. fields

        _users = self._control.path("profile.users", [])
        for _user in _users:
            user = {"username": _user["username"]}

            password = Convert._password(_user)
            if password:
                user["password"] = password

            certificates = self._certificates(_user)
            if certificates:
                user["certificates"] = certificates

            users.append(user)

        if users:
            self.pillar["users"] = users


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Convert AutoYaST control files")
    parser.add_argument("control", metavar="CONTROL.XML", help="autoyast control file")
    parser.add_argument(
        "-o", "--out", default="yomi.json", help="output file (default: yomi.json)"
    )

    args = parser.parse_args()
    control = ET.parse(args.control)
    convert = Convert(control)
    pillar = convert.convert()
    with open(args.out, "w") as f:
        json.dump(pillar, f, indent=4)
07070100000004000041ED0000000000000000000000036130D1CF00000000000000000000000000000000000000000000002700000000yomi-0.0.1+git.1630589391.4557cfd/docs07070100000005000081A40000000000000000000000016130D1CF000018CB000000000000000000000000000000000000004300000000yomi-0.0.1+git.1630589391.4557cfd/docs/appendix-how-to-use-qemu.md# Appendix: How to use QEMU to test Yomi

We can use libvirt, VirtualBox or real hardware to test Yomi. In this
appendix we will give the basic instructions to setup QEMU with KVM to
create a local network that will enable the communication of the nodes
between each other, and with the guest.

## General overview

We will use `qemu-system-x86_64` and the OVMF firmware to deploy UEFI
nodes, and `socat` and `dnsmasq` to build a local network where our
nodes can communicate.

With QEMU we usually need to create some bridges and tun/tap
interfaces that enable the communication between the local
instances. To provide external access to those instances, we also
usually need to enable the masquerading via `iptables`, and
`ip_forward` via `sysctl` in out host. But using `socat` and `dnsmasq`
we can avoid this.

For this to work we will need two interfaces in the virtual
machine. One will be owned by QEMU, that will use the user networking
(SLIRP) back-end. In this network mode, the interface will have always
the IP 10.0.2.15 in the VM side, and the host is reachable via the
10.0.2.2 IP. There is also an internal DNS under the IP 10.0.2.3, that
is managed by QEMU and cannot be configured.

SLIRP is optional and more complicated QEMU deployments disable this
back-end by default. But for us is an easy way to have a connection
between the VM and the host.

If we maintain SLIPR operational, all the VMs will have the same IP,
and all of them will see the host machine via the same IP too, but
they cannot see each other. To resolve this we can add a second
virtual interface in the VM, that using multi-cast, will be used as a
communication channel between the VMs.

We will need to use to external tools to enable this multi-cast
communication. One, `socat`, will create a new virtual interface named
`vmlan` in the host, where all the VMs will be connected to. And the
other is `dnsmasq`, that will be used as a local DHCP / DNS server
that will work on this new interface.

### Creating the local network

First we will need to install both tools:

```bash
zypper in socat dnsmasq
```

Now we need to use `socat` to create a new virtual interface named
`vmlan`, that will expose the IP 10.0.3.1 to the host. At the other
side we will have the multicast socket from QEMU.

```bash
sudo socat \
  UDP4-DATAGRAM:230.0.0.1:1234,sourceport=1234,reuseaddr,ip-add-membership=230.0.0.1:127.0.0.1 \
  TUN:10.0.3.1/24,tun-type=tap,iff-no-pi,iff-up,tun-name=vmlan
```

If you see the error `Network is unreachable`, check if all the
interfaces have an IP assigned (this can be the case when running
inside a VM). But if the error message is `Device or resource busy`,
check that there is not a previous `socat` process running for the
same connection.

Move this process in a second plane, and check that via `ip a s` we
have the `vmlan` interface.

We will now attach a DHCP / DNS server to this new interface, so the
new nodes will have a predicted IP and hostname. Also the nodes will
find the master using a name that can be resolved.

```bash
sudo dnsmasq --no-daemon \
             --interface=vmlan \
             --except-interface=lo \
             --except-interface=em1 \
             --bind-interfaces \
             --dhcp-range=10.0.3.100,10.0.3.200 \
             --dhcp-option=option:router,10.0.3.101 \
             --dhcp-host=00:00:00:11:11:11,10.0.3.101,master \
             --dhcp-host=00:00:00:22:22:22,10.0.3.102,worker1 \
             --dhcp-host=00:00:00:33:33:33,10.0.3.103,worker2 \
             --host-record=master,10.0.3.101
```

This command will deliver IPs into the interface `vmlan` from the
range 10.0.3.100 to 10.0.3.200. The service will ignore the petitions
from the local host and the `em1` interface. If your interfaces are
named differently, you will need to adjust the command accordingly.

The hostnames `master`, `worker1` and `worker2` will be assigned based
on the MAC address, and `master` name will be always resolved to
10.0.3.101. This will simplify the configuration of the salt-minion
later.

### Connecting QEMU to the new network

We can now launch QEMU to have two interfaces. One will be connected
to the new `vmlan` network, via the multi-cast socket option, and the
other interface will be connected to the host machine.

Because we will use UEFI, we will need first to copy the OVMF firmware
locally.

```bash
cp /usr/share/qemu/ovmf-x86_64-code.bin .
cp /usr/share/qemu/ovmf-x86_64-vars.bin .
```

Now we can launch QEMU:

```bash
# Local copy for the variable OVMF file
cp -af ovmf-x86_64-vars.bin ovmf-x86_64-vars-node.bin

# Create the file that will be used as a hard-disk
qemu-img create -f qcow2 hda-node.qcow2 50G

qemu-system-x86_64 -m 2048 -enable-kvm \
  -netdev socket,id=vmlan,mcast=230.0.0.1:1234 \
  -device virtio-net-pci,netdev=vmlan,mac=00:00:00:11:11:11 \
  -netdev user,id=net0,hostfwd=tcp::10022-:22 \
  -device virtio-net-pci,netdev=net0,mac=10:00:00:11:11:11 \
  -cdrom *.iso \
  -hda hda-node.qcow2 \
  -drive if=pflash,format=raw,unit=0,readonly,file=./ovmf-x86_64-code.bin \
  -drive if=pflash,format=raw,unit=1,file=./ovmf-x86_64-vars-node.bin \
  -smp 2 \
  -boot d &
```

The first interface will be connected to the `vmlan` via a multi-cast
socket. The second interface will be the SLIRP user network mode, that
will be connected to the host. We also forward the local port `10022`
to the port `22` in the VM. So we can SSH into the node with:

```bash
ssh root@localhost -p 10022
```

# PXE Boot with QEMU

We can configure `dnsmasq` also to serve the TFTP assets that are
required for the [PXE Boot](../README.md#pxe-boot) image. For example,
this can be used as a base for a local server:

```bash
mkdir tftpboot
sudo ./dnsmasq --no-daemon \
               --interface=vmlan \
               --except-interface=lo \
               --except-interface=em1 \
               --bind-interfaces \
               --dhcp-range=10.0.3.100,10.0.3.200 \
               --dhcp-option=option:router,10.0.3.101 \
               --dhcp-host=00:00:00:11:11:11,10.0.3.101,worker \
               --host-record=master,10.0.2.2 \
               --enable-tftp \
               --dhcp-boot=pxelinux.0,,10.0.3.1 \
               --tftp-root=$(pwd)/tftpboot
```

Follow the documentation to create the different configuration files
and copy the assets in the correct places.
07070100000006000041ED0000000000000000000000036130D1CF00000000000000000000000000000000000000000000003000000000yomi-0.0.1+git.1630589391.4557cfd/docs/examples07070100000007000041ED0000000000000000000000046130D1CF00000000000000000000000000000000000000000000003600000000yomi-0.0.1+git.1630589391.4557cfd/docs/examples/kubic07070100000008000041ED0000000000000000000000026130D1CF00000000000000000000000000000000000000000000003C00000000yomi-0.0.1+git.1630589391.4557cfd/docs/examples/kubic/kubic07070100000009000081A40000000000000000000000016130D1CF000005B1000000000000000000000000000000000000004E00000000yomi-0.0.1+git.1630589391.4557cfd/docs/examples/kubic/kubic/control_plane.sls{% import 'macros.yml' as macros %}

{% set users = pillar['users'] %}
{% set public_ip = grains['ip4_interfaces']['ens3'][0] %}

{{ macros.log('module', 'install_kubic') }}
install_kubic:
  module.run:
    - kubeadm.init:
        - apiserver_advertise_address: {{ public_ip }}
        - pod_network_cidr: '10.244.0.0/16'
    - creates: /etc/kubernetes/admin.conf

{% for user in users %}
  {% set username = user.username %}
{{ macros.log('file', 'create_kubic_directory_' ~ username) }}
create_kubic_directory_{{ username }}:
  file.directory:
    - name: ~{{ username }}/.kube
    - user: {{ username }}
    - group: {{ username if username == 'root' else 'users' }}
    - mode: 700

{{ macros.log('file', 'copy_kubic_configuration_' ~ username) }}
copy_kubic_configuration_{{ username }}:
  file.copy:
    - name: ~{{ username }}/.kube/config
    - source: /etc/kubernetes/admin.conf
    - user: {{ username }}
    - group: {{ username if username == 'root' else 'users' }}
    - mode: 700
{% endfor %}

{{ macros.log('cmd', 'install_network') }}
install_network:
  cmd.run:
    - name: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
    - unless: ip link | grep -q flannel

{{ macros.log('loop', 'wait_interfaces_up') }}
wait_interfaces_up:
  loop.until:
    - name: network.interfaces
    - condition: "'flannel.1' in m_ret"
    - period: 5
    - timeout: 300
0707010000000A000081A40000000000000000000000016130D1CF000002CC000000000000000000000000000000000000004500000000yomi-0.0.1+git.1630589391.4557cfd/docs/examples/kubic/kubic/join.sls{% import 'macros.yml' as macros %}

{% set users = pillar['users'] %}
{% set join_params = salt.mine.get(tgt='00:00:00:11:11:11', fun='join_params')['00:00:00:11:11:11'] %}

{{ macros.log('module', 'join_control_plane') }}
join_control_plane:
  module.run:
    - kubeadm.join:
        - api_server_endpoint: {{ join_params['api-server-endpoint'] }}
        - discovery_token_ca_cert_hash: {{ join_params['discovery-token-ca-cert-hash'] }}
        - token: {{ join_params['token'] }}
    - creates: /etc/kubernetes/kubelet.conf

{{ macros.log('loop', 'wait_interfaces_up') }}
wait_interfaces_up:
  loop.until:
    - name: network.interfaces
    - condition: "'flannel.1' in m_ret"
    - period: 5
    - timeout: 300
0707010000000B000041ED0000000000000000000000026130D1CF00000000000000000000000000000000000000000000003B00000000yomi-0.0.1+git.1630589391.4557cfd/docs/examples/kubic/orch0707010000000C000081A40000000000000000000000016130D1CF00000345000000000000000000000000000000000000004500000000yomi-0.0.1+git.1630589391.4557cfd/docs/examples/kubic/orch/kubic.slssynchronize_all:
  salt.function:
    - name: saltutil.sync_all
    - tgt: '*'

install_microos:
  salt.state:
    - sls:
      - yomi
    - tgt: '*'

wait_for_reboots:
  salt.wait_for_event:
    - name: salt/minion/*/start
    - id_list:
      - '00:00:00:11:11:11'
      - '00:00:00:22:22:22'
    - require:
      - salt: install_microos

install_control_plane:
  salt.state:
    - tgt: '00:00:00:11:11:11'
    - sls:
      - kubic.control_plane

send_mine:
  salt.function:
    - name: mine.send
    - tgt: '00:00:00:11:11:11'
    - arg:
      - join_params
    - kwarg:
        mine_function: kubeadm.join_params
        create_if_needed: yes

join_worker:
  salt.state:
    - tgt: '00:00:00:22:22:22'
    - sls:
      - kubic.join

delete_mine:
  salt.function:
    - name: mine.delete
    - tgt: '*'
    - arg:
      - join_params
0707010000000D000081A40000000000000000000000016130D1CF0000176E000000000000000000000000000000000000004500000000yomi-0.0.1+git.1630589391.4557cfd/docs/use-case-as-a-kubic-worker.md# Use Case: A Kubic worker provisioned with Yomi

We can use [Yomi](https://github.com/openSUSE/yomi) to deploy worker
nodes in an already deployed Kubic cluster.

## Overview and requirements

In this section we are going to describe a way to deploy a two-node
Kubic cluster, and use Yomi to provision a third node.

For this example we can use `libvirt`, `virtualbox`, `vagrant` or
`QEMU`.

We will need to allocate three VMs with:

* 50GB of hard disk
* 2 CPU per node
* 2048MB RAM per system

We will need also connectivity bewteen the different VMs to form a
local network, and also access to Internet for downloading packages.

You can check
[appendix-how-to-use-qemu.md](appendix-how-to-use-qemu.md) to learn
about how to do this with QEMU and how to setup a DNS server with
`dnsmasq` to create a network configuration that will meet the
requirements.

## Installing MicroOS for Kubic

Follow the documentation about how to install a two node Kubic cluster
from the [Kubic
documentation](https://en.opensuse.org/Kubic:kubeadm). In a nutshell
the process is:

* Spin two empty nodes with QEMU / libvirt
* Boot both nodes using the [Kubic
  image](http://download.opensuse.org/tumbleweed/iso/openSUSE-Kubic-DVD-x86_64-Current.iso)
* Deploy one node with the 'Kubic Admin Node' role, this will install
  CRI-O, `kubeadn` and `kubicctl`, together with `salt-master`.
* Deploy the second node with the system role 'Kubic Worker Node'.

We will use `kubicctl` to deploy Kubernetes in the control plane, and
use this same tool to join the already installed worker.

If the control plane node have more that one interface (for example,
if we use QEMU as described in the appendix documentation this will be
the case, but not if we use libvirt), we need to identify the one that
is visible from the worker node. We will pass the IP of this interface
via the `--adv-addr` parameter.

```bash
kubicctl init --adv-addr 10.0.3.101
```

If there are not multiple interfaces and we want to use `flannel` as a
pod network, as simple `kubicctl init` will work on most of the cases.

In the worker node we need to set up `salt-minion` so in can connect
to the `salt-master` in the control plane node. We need to find the
address or IP address that can be used to point to the master,
configure the minion and restart the service.

```bash
echo "master: <MASTER-IP>" > /etc/salt/minion.d/master.conf
systemctl enable --now salt-minion.service
```

The minion now try to connect to the master, but before this can
succeed we need to accept the key in the `master` node.

```bash
salt-key -A
```

We can test from the master that the minion is answering properly:

```bash
salt worker1 test.ping
```

Now we can join the node from the `master` one:

```bash
kubicctl node add worker1
```

Note that `worker1` is refers here to the minion ID that Salt
recognize, not the host name of the worker node.

If the command succeed, we inspect the cluster status:

```bash
kubectl get nodes
```

It will show something like:

```
NAME      STATUS   ROLES    AGE   VERSION
master    Ready    master   11m   v1.15.0
worker1   Ready    <none>   56s   v1.15.0
```

If `kubectl` fails, check that `/etc/kubernetes/admin.conf` is copied
as `~/.kube/config` as documented in `kubeadm`.

## Provisioning a Kubic worker with Yomi

The first worker was allocated via the Kubic DVD image. This is
reasonable for small clusters, but we can simplify the work if we can
install MicroOS on new nodes using SaltStack and later join the node
to the cluster with `kubicctl`.

### Yomi image and Yomi package

Yomi is a set of Salt states that will allows the provisioning of
systems. We will need to boot the new node using a JeOS image that
contains a `salt-minion` service, that later can be controlled from
the `master` node, that is where `salt-master` is installed.

You can find mode information about this Yomi image in the [Booting a
new machine](../README.md#booting-a-new-machine) section if the main
documentation.

Download the ISO image or the PXE Boot one (check the previous link
the learn how to configure the PXE Boot one). Optionally configure the
`salt-master` to enable the auto-sign feature via UUID, as described
in the [Enabling auto-sign](../README.md#enabling-auto-sign) section.

In the `master` node we will need to install the `yomi-formula`
package from Factory.

```bash
transactional-update pkg install yomi-formula
reboot
```

We can now boot a new node in the same network that the Kubic cluster,
using the Yomi image. Be sure (via boot parameter or later
configuration) that the `salt-minion` can find the already present
master, and if needed accept the key.

### Adding the new worker

We need to set the pillar data that Yomi will use to make a new
installation. This data will describe installation details like how
will be the hard disk partition layout, the different packages that
will be installed or the services that will be enabled before booting.

The packages `yomi-formula` already provides and example for a MicroOS
installation, so we can use it as a template.

Read the section [Configuring the
pillar](../README.md#configuring-the-pillar) to learn more about the
pillar examples provided by the package, and how to copy them in a
place that we can edit them.

The `yomi-formula` package do not include an example `top.sls`, but we
can create one easily for this example.

```bash
cat <<EOF > /srv/salt/top.sls
base:
  '*':
    - yomi
EOF
```

Check also that for the pillar we also have a `top.sls`. The one that
we have in the package as an example is:

```yaml
base:
  '*':
    - installer
```

Now we can [get information about the
hardware](../README.md#getting-hardware-information) available in the
new worker node, and adjust the pillar accordingly.

Optionally we can [wipe the disks](../README.md#cleaning-the-disks),
and then apply the `yomi` state.

```bash
salt worker2 state.apply
```

Once the node is back, we can proceed as usual:

```bash
kubicctl node add worker2
```
0707010000000E000081A40000000000000000000000016130D1CF0000132C000000000000000000000000000000000000005000000000yomi-0.0.1+git.1630589391.4557cfd/docs/use-case-deploying-kubic-from-scratch.md# Use Case: Deployment of Kubic from scratch

We can use [Yomi](https://github.com/openSUSE/yomi) to deploy the
control plane and the workers of a new Kubic cluster using SaltStack
to orchestrate the installation.

## Deploying a Kubic control plane node with Yomi

In this section we are going to describe a way to deploy a two-node
Kubic cluster from scratch. One node will be the controller or the
Kubic cluster, and the second node will be the worker.

For this example we can use `libvirt`, `virtualbox`, `vagrant` or
`QEMU`.

We will need to allocate two VMs with:

* 50GB of hard disk
* 2 CPU per node
* 2048MB RAM per system

We will need also connectivity bewteen the different VMs to form a
local network, and also access to Internet for downloading packages.

You can check
[appendix-how-to-use-qemu.md](appendix-how-to-use-qemu.md) to learn
about how to do this with QEMU and how to setup a DNS server with
`dnsmasq` to create a network configuration that will meet the
requirements.

The general process will be to install a local `salt-master`, that
will be used to first install MicroOS in the two VMs. Later we will
use a [Salt
orchestrator](https://docs.saltstack.com/en/latest/topics/orchestrate/orchestrate_runner.html)
to provision the operating system and install the different Kubic
components via `kubeadm`. One node of the cluster will be for the
control plane, and the second one will be a worker.

## Installing salt-master and yomi-formula

We need to install locally the `salt-master` and the `yomi-formula`
packages, as we will control the installation from out laptop or
desktop machine.

```bash
sudo zypper in salt-master salt-standalone-formulas-configuration
sudo zypper in yomi-formula
```

## Configuring salt-master

We are going to use the states from Yomi that are living in
`/usr/share/salt-formulas/yomi`, and some other states are are in
`/usr/share/yomi/kubic`. In order to make both location reachable, we
need to configure `salt-master`.

```bash
sudo cp -a /usr/share/yomi/kubic-file.conf /etc/salt/master.d/
sudo cp -a /usr/share/yomi/pillar.conf /etc/salt/master.d/
```

Optionally, we will configure autosign via UUID, so we can avoid
accept the new `salt-minion` keys during the exercise.

```bash
sudo cp /usr/share/yomi/autosign.conf /etc/salt/master.d/
```

We can now restart the service:

```bash
systemctl restart salt-master.service
```

For a more detailed description read the sections [Looking for the
pillar](../README.md#looking-for-the-pillar) and [Enabling
auto-sign](../README.md#enabling-auto-sign) in the documentation.

## Orchestrating the Kubic installation

Now we can launch two nodes via `libvirt` or `QEMU`. For this last
option read the document [How to use
QEMU](appendix-how-to-use-qemu.md) to take some ideas and make the
proper adjustments on `dnsmasq` to assign correct names for the
different nodes.

You need to boot both nodes with the ISO image or the PXE Boot one,
and check that you can see them locally:

```bash
salt '*' test.ping
```

If something goes wrong check this in order:

1. `master` can be resolved from the nodes
2. `salt-minion` service is running correctly
3. There is no old key in the master (`salt-key '*' -D`)

Adjust the `kubic.sls` from the states to reference properly the
nodes. The provided example is using the MAC address to reference the
nodes:

* `00:00:00:11:11:11`: Control plane node
* `00:00:00:22:22:22`: Worker node

Now we can orchestrate the Kubic installation. So from your host
machine where `salt-master` is running we can fire the orchestrator.

```bash
salt-run state.orchestrate orch.kubic
```

This will execute commands in the `salt-master`, that will:

1. Synchronize all the execution modules, pillars and grains
2. Install MicroOS in both nodes
3. Wait for the reboot of both nodes
4. Install the control plane in `00:00:00:11:11:11`
5. Send a mine to the control plane node, that will collect the
   connection secrets
6. Join the worker (`00:00:00:22:22:22`) using `kubeadm` and those
   secrets
7. Remove the mine

This orchestrator is only an example, and there are elements that can
be improved. The main one is that inside the YAML file there are
references to the minion ID of the control plane and the worker,
something that is better to put in the pillar.

Another problem is that in the current version of Salt, we cannot send
asynchronous commands to the orchestrator. This imply that there is a
race condition in the section that wait for the node reboot. If one
node reboot before than the other, there is a chance that the reboot
event will be lost before the `salt.wait_for_event` is reached. The
next version of Salt, Neon, will add this capability, and the example
will be updated accordingly.

If this race condition happens, you can wait manually to the reboot,
comment the `salt.wait_for_event` entry in `kubic.sls`, and relaunch
the `salt-run` command.
0707010000000F000041ED0000000000000000000000026130D1CF00000000000000000000000000000000000000000000002B00000000yomi-0.0.1+git.1630589391.4557cfd/metadata07070100000010000081A40000000000000000000000016130D1CF00006D28000000000000000000000000000000000000003C00000000yomi-0.0.1+git.1630589391.4557cfd/metadata/01-form-yomi.yml# Uyuni Form for the Yomi pillar data - Main section
#
# Find mode pillar examples in /usr/share/yomi/pillar

config:
  $name: General Configuration
  $type: group
  $help: General Configuration Section for the Yomi Formula
  events:
    $type: boolean
    # Change default once Uyuni can track the events
    $default: no
    $help: If set, the installation can be monitored via Salt events
  reboot:
    $type: select
    $values:
      - "yes"
      - "no"
      - kexec
      - halt
      - shutdown
    $default: "yes"
    $help: Kind of reboot at the end of the installation
  snapper:
    $type: boolean
    $default: no
    $help: For now it only can be used in Btrfs filesystems
  # TODO: How to export the values from the locale formula?
  locale:
    $type: select
    # Output from 'localectl list-locales'
    $values:
      - C.utf8
      - aa_DJ
      - aa_DJ.utf8
      - aa_ER
      - aa_ER@saaho
      - aa_ET
      - af_ZA
      - af_ZA.utf8
      - agr_PE
      - ak_GH
      - am_ET
      - an_ES
      - an_ES.utf8
      - anp_IN
      - ar_AE
      - ar_AE.utf8
      - ar_BH
      - ar_BH.utf8
      - ar_DZ
      - ar_DZ.utf8
      - ar_EG
      - ar_EG.utf8
      - ar_IN
      - ar_IQ
      - ar_IQ.utf8
      - ar_JO
      - ar_JO.utf8
      - ar_KW
      - ar_KW.utf8
      - ar_LB
      - ar_LB.utf8
      - ar_LY
      - ar_LY.utf8
      - ar_MA
      - ar_MA.utf8
      - ar_OM
      - ar_OM.utf8
      - ar_QA
      - ar_QA.utf8
      - ar_SA
      - ar_SA.utf8
      - ar_SD
      - ar_SD.utf8
      - ar_SS
      - ar_SY
      - ar_SY.utf8
      - ar_TN
      - ar_TN.utf8
      - ar_YE
      - ar_YE.utf8
      - as_IN
      - ast_ES
      - ast_ES.utf8
      - ayc_PE
      - az_AZ
      - az_IR
      - be_BY
      - be_BY.utf8
      - be_BY@latin
      - bem_ZM
      - ber_DZ
      - ber_MA
      - bg_BG
      - bg_BG.utf8
      - bhb_IN.utf8
      - bho_IN
      - bi_VU
      - bn_BD
      - bn_IN
      - bo_CN
      - bo_IN
      - br_FR
      - br_FR.utf8
      - br_FR@euro
      - brx_IN
      - bs_BA
      - bs_BA.utf8
      - byn_ER
      - ca_AD
      - ca_AD.utf8
      - ca_ES
      - ca_ES.utf8
      - ca_ES@euro
      - ca_FR
      - ca_FR.utf8
      - ca_IT
      - ca_IT.utf8
      - ce_RU
      - chr_US
      - cmn_TW
      - crh_UA
      - cs_CZ
      - cs_CZ.utf8
      - csb_PL
      - cv_RU
      - cy_GB
      - cy_GB.utf8
      - da_DK
      - da_DK.utf8
      - de_AT
      - de_AT.utf8
      - de_AT@euro
      - de_BE
      - de_BE.utf8
      - de_BE@euro
      - de_CH
      - de_CH.utf8
      - de_DE
      - de_DE.utf8
      - de_DE@euro
      - de_IT
      - de_IT.utf8
      - de_LI.utf8
      - de_LU
      - de_LU.utf8
      - de_LU@euro
      - doi_IN
      - dv_MV
      - dz_BT
      - el_CY
      - el_CY.utf8
      - el_GR
      - el_GR.utf8
      - el_GR@euro
      - en_AG
      - en_AU
      - en_AU.utf8
      - en_BW
      - en_BW.utf8
      - en_CA
      - en_CA.utf8
      - en_DK
      - en_DK.utf8
      - en_GB
      - en_GB.iso885915
      - en_GB.utf8
      - en_HK
      - en_HK.utf8
      - en_IE
      - en_IE.utf8
      - en_IE@euro
      - en_IL
      - en_IN
      - en_NG
      - en_NZ
      - en_NZ.utf8
      - en_PH
      - en_PH.utf8
      - en_SG
      - en_SG.utf8
      - en_US
      - en_US.iso885915
      - en_US.utf8
      - en_ZA
      - en_ZA.utf8
      - en_ZM
      - en_ZW
      - en_ZW.utf8
      - eo
      - es_AR
      - es_AR.utf8
      - es_BO
      - es_BO.utf8
      - es_CL
      - es_CL.utf8
      - es_CO
      - es_CO.utf8
      - es_CR
      - es_CR.utf8
      - es_CU
      - es_DO
      - es_DO.utf8
      - es_EC
      - es_EC.utf8
      - es_ES
      - es_ES.utf8
      - es_ES@euro
      - es_GT
      - es_GT.utf8
      - es_HN
      - es_HN.utf8
      - es_MX
      - es_MX.utf8
      - es_NI
      - es_NI.utf8
      - es_PA
      - es_PA.utf8
      - es_PE
      - es_PE.utf8
      - es_PR
      - es_PR.utf8
      - es_PY
      - es_PY.utf8
      - es_SV
      - es_SV.utf8
      - es_US
      - es_US.utf8
      - es_UY
      - es_UY.utf8
      - es_VE
      - es_VE.utf8
      - et_EE
      - et_EE.iso885915
      - et_EE.utf8
      - eu_ES
      - eu_ES.utf8
      - eu_ES@euro
      - fa_IR
      - ff_SN
      - fi_FI
      - fi_FI.utf8
      - fi_FI@euro
      - fil_PH
      - fo_FO
      - fo_FO.utf8
      - fr_BE
      - fr_BE.utf8
      - fr_BE@euro
      - fr_CA
      - fr_CA.utf8
      - fr_CH
      - fr_CH.utf8
      - fr_FR
      - fr_FR.utf8
      - fr_FR@euro
      - fr_LU
      - fr_LU.utf8
      - fr_LU@euro
      - fur_IT
      - fy_DE
      - fy_NL
      - ga_IE
      - ga_IE.utf8
      - ga_IE@euro
      - gd_GB
      - gd_GB.utf8
      - gez_ER
      - gez_ER@abegede
      - gez_ET
      - gez_ET@abegede
      - gl_ES
      - gl_ES.utf8
      - gl_ES@euro
      - gu_IN
      - gv_GB
      - gv_GB.utf8
      - ha_NG
      - hak_TW
      - he_IL
      - he_IL.utf8
      - hi_IN
      - hif_FJ
      - hne_IN
      - hr_HR
      - hr_HR.utf8
      - hsb_DE
      - hsb_DE.utf8
      - ht_HT
      - hu_HU
      - hu_HU.utf8
      - hy_AM
      - hy_AM.armscii8
      - ia_FR
      - id_ID
      - id_ID.utf8
      - ig_NG
      - ik_CA
      - is_IS
      - is_IS.utf8
      - it_CH
      - it_CH.utf8
      - it_IT
      - it_IT.utf8
      - it_IT@euro
      - iu_CA
      - ja_JP.eucjp
      - ja_JP.shiftjisx0213
      - ja_JP.sjis
      - ja_JP.utf8
      - ka_GE
      - ka_GE.utf8
      - kk_KZ
      - kk_KZ.utf8
      - kl_GL
      - kl_GL.utf8
      - km_KH
      - kn_IN
      - ko_KR.euckr
      - ko_KR.utf8
      - kok_IN
      - ks_IN
      - ks_IN@devanagari
      - ku_TR
      - ku_TR.utf8
      - kw_GB
      - kw_GB.utf8
      - ky_KG
      - lb_LU
      - lg_UG
      - lg_UG.utf8
      - li_BE
      - li_NL
      - lij_IT
      - ln_CD
      - lo_LA
      - lt_LT
      - lt_LT.utf8
      - lv_LV
      - lv_LV.utf8
      - lzh_TW
      - mag_IN
      - mai_IN
      - mai_NP
      - mg_MG
      - mg_MG.utf8
      - mhr_RU
      - mi_NZ
      - mi_NZ.utf8
      - mk_MK
      - mk_MK.utf8
      - ml_IN
      - mn_MN
      - mni_IN
      - mr_IN
      - ms_MY
      - ms_MY.utf8
      - mt_MT
      - mt_MT.utf8
      - my_MM
      - nan_TW
      - nan_TW@latin
      - nb_NO
      - nb_NO.utf8
      - nds_DE
      - nds_NL
      - ne_NP
      - nhn_MX
      - niu_NU
      - niu_NZ
      - nl_AW
      - nl_BE
      - nl_BE.utf8
      - nl_BE@euro
      - nl_NL
      - nl_NL.utf8
      - nl_NL@euro
      - nn_NO
      - nn_NO.utf8
      - no_NO
      - no_NO.utf8
      - nr_ZA
      - nso_ZA
      - oc_FR
      - oc_FR.utf8
      - om_ET
      - om_KE
      - om_KE.utf8
      - or_IN
      - os_RU
      - pa_IN
      - pa_PK
      - pap_AW
      - pap_CW
      - pl_PL
      - pl_PL.utf8
      - ps_AF
      - pt_BR
      - pt_BR.utf8
      - pt_PT
      - pt_PT.utf8
      - pt_PT@euro
      - quz_PE
      - raj_IN
      - ro_RO
      - ro_RO.utf8
      - ru_RU
      - ru_RU.koi8r
      - ru_RU.utf8
      - ru_UA
      - ru_UA.utf8
      - rw_RW
      - sa_IN
      - sat_IN
      - sc_IT
      - sd_IN
      - sd_IN@devanagari
      - se_NO
      - sgs_LT
      - shs_CA
      - si_LK
      - sid_ET
      - sk_SK
      - sk_SK.utf8
      - sl_SI
      - sl_SI.utf8
      - sm_WS
      - so_DJ
      - so_DJ.utf8
      - so_ET
      - so_KE
      - so_KE.utf8
      - so_SO
      - so_SO.utf8
      - sq_AL
      - sq_AL.utf8
      - sq_MK
      - sr_ME
      - sr_RS
      - sr_RS@latin
      - ss_ZA
      - st_ZA
      - st_ZA.utf8
      - sv_FI
      - sv_FI.utf8
      - sv_FI@euro
      - sv_SE
      - sv_SE.utf8
      - sw_KE
      - sw_TZ
      - szl_PL
      - ta_IN
      - ta_LK
      - tcy_IN.utf8
      - te_IN
      - tg_TJ
      - tg_TJ.utf8
      - th_TH
      - th_TH.utf8
      - the_NP
      - ti_ER
      - ti_ET
      - tig_ER
      - tk_TM
      - tl_PH
      - tl_PH.utf8
      - tn_ZA
      - to_TO
      - tpi_PG
      - tr_CY
      - tr_CY.utf8
      - tr_TR
      - tr_TR.utf8
      - ts_ZA
      - tt_RU
      - tt_RU@iqtelif
      - ug_CN
      - uk_UA
      - uk_UA.utf8
      - unm_US
      - ur_IN
      - ur_PK
      - uz_UZ
      - uz_UZ.utf8
      - uz_UZ@cyrillic
      - ve_ZA
      - vi_VN
      - wa_BE
      - wa_BE.utf8
      - wa_BE@euro
      - wae_CH
      - wal_ET
      - wo_SN
      - xh_ZA
      - xh_ZA.utf8
      - yi_US
      - yi_US.utf8
      - yo_NG
      - yue_HK
      - zh_CN
      - zh_CN.gb18030
      - zh_CN.gbk
      - zh_CN.utf8
      - zh_HK
      - zh_HK.utf8
      - zh_SG
      - zh_SG.gbk
      - zh_SG.utf8
      - zh_TW
      - zh_TW.euctw
      - zh_TW.utf8
      - zu_ZA
      - zu_ZA.utf8
    $default: en_US.utf8
    $help: System locale configuration for systemd
  keymap:
    $type: select
    # Output from 'localectl list-keymaps'
    $values:
      - ANSI-dvorak
      - Pl02
      - al
      - al-plisi
      - amiga-de
      - amiga-us
      - applkey
      - at
      - at-mac
      - at-nodeadkeys
      - at-sundeadkeys
      - atari-de
      - atari-se
      - atari-uk-falcon
      - atari-us
      - az
      - azerty
      - ba
      - ba-alternatequotes
      - ba-unicode
      - ba-unicodeus
      - ba-us
      - backspace
      - bashkir
      - be
      - be-iso-alternate
      - be-latin1
      - be-nodeadkeys
      - be-oss
      - be-oss_latin9
      - be-oss_sundeadkeys
      - be-sundeadkeys
      - be-wang
      - bg-cp1251
      - bg-cp855
      - bg_bds-cp1251
      - bg_bds-utf8
      - bg_pho-cp1251
      - bg_pho-utf8
      - br
      - br-abnt
      - br-abnt-alt
      - br-abnt2
      - br-abnt2-old
      - br-dvorak
      - br-latin1-abnt2
      - br-latin1-us
      - br-nativo
      - br-nativo-epo
      - br-nativo-us
      - br-nodeadkeys
      - br-thinkpad
      - by
      - by-cp1251
      - by-latin
      - bywin-cp1251
      - ca
      - ca-eng
      - ca-fr-dvorak
      - ca-fr-legacy
      - ca-multi
      - ca-multix
      - carpalx
      - carpalx-full
      - cf
      - ch
      - ch-de_mac
      - ch-de_nodeadkeys
      - ch-de_sundeadkeys
      - ch-fr
      - ch-fr_mac
      - ch-fr_nodeadkeys
      - ch-fr_sundeadkeys
      - ch-legacy
      - cm
      - cm-azerty
      - cm-dvorak
      - cm-french
      - cm-mmuock
      - cm-qwerty
      - cn
      - cn-latin1
      - croat
      - ctrl
      - cz
      - cz-bksl
      - cz-cp1250
      - cz-dvorak-ucw
      - cz-lat2
      - cz-lat2-prog
      - cz-lat2-us
      - cz-qwerty
      - cz-qwerty_bksl
      - cz-rus
      - cz-us-qwertz
      - de
      - de-T3
      - de-deadacute
      - de-deadgraveacute
      - de-deadtilde
      - de-dsb
      - de-dsb_qwertz
      - de-dvorak
      - de-latin1
      - de-latin1-nodeadkeys
      - de-mac
      - de-mac_nodeadkeys
      - de-mobii
      - de-neo
      - de-nodeadkeys
      - de-qwerty
      - de-ro
      - de-ro_nodeadkeys
      - de-sundeadkeys
      - de-tr
      - de_CH-latin1
      - de_alt_UTF-8
      - defkeymap
      - defkeymap_V1.0
      - dk
      - dk-dvorak
      - dk-latin1
      - dk-mac
      - dk-mac_nodeadkeys
      - dk-nodeadkeys
      - dk-winkeys
      - dvorak
      - dvorak-ca-fr
      - dvorak-es
      - dvorak-fr
      - dvorak-l
      - dvorak-la
      - dvorak-programmer
      - dvorak-r
      - dvorak-ru
      - dvorak-sv-a1
      - dvorak-sv-a5
      - dvorak-uk
      - dz
      - ee
      - ee-dvorak
      - ee-nodeadkeys
      - ee-us
      - emacs
      - emacs2
      - en-latin9
      - epo
      - epo-legacy
      - es
      - es-ast
      - es-cat
      - es-cp850
      - es-deadtilde
      - es-dvorak
      - es-mac
      - es-nodeadkeys
      - es-olpc
      - es-sundeadkeys
      - es-winkeys
      - et
      - et-nodeadkeys
      - euro
      - euro1
      - euro2
      - fi
      - fi-classic
      - fi-kotoistus
      - fi-mac
      - fi-nodeadkeys
      - fi-smi
      - fi-winkeys
      - fo
      - fo-nodeadkeys
      - fr
      - fr-azerty
      - fr-bepo
      - fr-bepo-latin9
      - fr-bepo_latin9
      - fr-bre
      - fr-dvorak
      - fr-latin1
      - fr-latin9
      - fr-latin9_nodeadkeys
      - fr-latin9_sundeadkeys
      - fr-mac
      - fr-nodeadkeys
      - fr-oci
      - fr-oss
      - fr-oss_latin9
      - fr-oss_nodeadkeys
      - fr-oss_sundeadkeys
      - fr-pc
      - fr-sundeadkeys
      - fr_CH
      - fr_CH-latin1
      - gb
      - gb-colemak
      - gb-dvorak
      - gb-dvorakukp
      - gb-extd
      - gb-intl
      - gb-mac
      - gb-mac_intl
      - ge
      - ge-ergonomic
      - ge-mess
      - ge-ru
      - gh
      - gh-akan
      - gh-avn
      - gh-ewe
      - gh-fula
      - gh-ga
      - gh-generic
      - gh-gillbt
      - gh-hausa
      - gr
      - gr-pc
      - hr
      - hr-alternatequotes
      - hr-unicode
      - hr-unicodeus
      - hr-us
      - hu
      - hu-101_qwerty_comma_dead
      - hu-101_qwerty_comma_nodead
      - hu-101_qwerty_dot_dead
      - hu-101_qwerty_dot_nodead
      - hu-101_qwertz_comma_dead
      - hu-101_qwertz_comma_nodead
      - hu-101_qwertz_dot_dead
      - hu-101_qwertz_dot_nodead
      - hu-102_qwerty_comma_dead
      - hu-102_qwerty_comma_nodead
      - hu-102_qwerty_dot_dead
      - hu-102_qwerty_dot_nodead
      - hu-102_qwertz_comma_dead
      - hu-102_qwertz_comma_nodead
      - hu-102_qwertz_dot_dead
      - hu-102_qwertz_dot_nodead
      - hu-nodeadkeys
      - hu-qwerty
      - hu-standard
      - hu101
      - ie
      - ie-CloGaelach
      - ie-UnicodeExpert
      - ie-ogam_is434
      - il
      - il-heb
      - il-phonetic
      - in-eng
      - iq-ku
      - iq-ku_alt
      - iq-ku_ara
      - iq-ku_f
      - ir-ku
      - ir-ku_alt
      - ir-ku_ara
      - ir-ku_f
      - is
      - is-Sundeadkeys
      - is-dvorak
      - is-latin1
      - is-latin1-us
      - is-mac
      - is-mac_legacy
      - is-nodeadkeys
      - it
      - it-geo
      - it-ibm
      - it-intl
      - it-mac
      - it-nodeadkeys
      - it-scn
      - it-us
      - it-winkeys
      - it2
      - jp
      - jp-OADG109A
      - jp-dvorak
      - jp-kana86
      - jp106
      - kazakh
      - ke
      - ke-kik
      - keypad
      - kr
      - kr-kr104
      - ky_alt_sh-UTF-8
      - kyrgyz
      - la-latin1
      - latam
      - latam-deadtilde
      - latam-dvorak
      - latam-nodeadkeys
      - latam-sundeadkeys
      - lk-us
      - lt
      - lt-ibm
      - lt-lekp
      - lt-lekpa
      - lt-std
      - lt-us
      - lt.baltic
      - lt.l4
      - lt.std
      - lv
      - lv-adapted
      - lv-apostrophe
      - lv-ergonomic
      - lv-fkey
      - lv-modern
      - lv-tilde
      - ma-french
      - mac-be
      - mac-de-latin1
      - mac-de-latin1-nodeadkeys
      - mac-de_CH
      - mac-dk-latin1
      - mac-dvorak
      - mac-es
      - mac-euro
      - mac-euro2
      - mac-fi-latin1
      - mac-fr
      - mac-fr_CH-latin1
      - mac-it
      - mac-pl
      - mac-pt-latin1
      - mac-se
      - mac-template
      - mac-uk
      - mac-us
      - md
      - md-gag
      - me
      - me-latinalternatequotes
      - me-latinunicode
      - me-latinunicodeyz
      - me-latinyz
      - mk
      - mk-cp1251
      - mk-utf
      - mk0
      - ml
      - ml-fr-oss
      - ml-us-intl
      - ml-us-mac
      - mt
      - mt-us
      - ng
      - ng-hausa
      - ng-igbo
      - ng-yoruba
      - nl
      - nl-mac
      - nl-std
      - nl-sundeadkeys
      - nl2
      - "no"
      - no-colemak
      - no-dvorak
      - no-latin1
      - no-mac
      - no-mac_nodeadkeys
      - no-nodeadkeys
      - no-smi
      - no-smi_nodeadkeys
      - no-winkeys
      - pc110
      - ph
      - ph-capewell-dvorak
      - ph-capewell-qwerf2k6
      - ph-colemak
      - ph-dvorak
      - pl
      - pl-csb
      - pl-dvorak
      - pl-dvorak_altquotes
      - pl-dvorak_quotes
      - pl-dvp
      - pl-legacy
      - pl-qwertz
      - pl-szl
      - pl1
      - pl2
      - pl3
      - pl4
      - pt
      - pt-latin1
      - pt-latin9
      - pt-mac
      - pt-mac_nodeadkeys
      - pt-mac_sundeadkeys
      - pt-nativo
      - pt-nativo-epo
      - pt-nativo-us
      - pt-nodeadkeys
      - pt-sundeadkeys
      - ro
      - ro-cedilla
      - ro-latin2
      - ro-std
      - ro-std_cedilla
      - ro-winkeys
      - ro_std
      - ro_win
      - rs-latin
      - rs-latinalternatequotes
      - rs-latinunicode
      - rs-latinunicodeyz
      - rs-latinyz
      - ru
      - ru-cp1251
      - ru-cv_latin
      - ru-ms
      - ru-yawerty
      - ru1
      - ru1_win-utf
      - ru2
      - ru3
      - ru4
      - ru_win
      - ruwin_alt-CP1251
      - ruwin_alt-KOI8-R
      - ruwin_alt-UTF-8
      - ruwin_alt_sh-UTF-8
      - ruwin_cplk-CP1251
      - ruwin_cplk-KOI8-R
      - ruwin_cplk-UTF-8
      - ruwin_ct_sh-CP1251
      - ruwin_ct_sh-KOI8-R
      - ruwin_ct_sh-UTF-8
      - ruwin_ctrl-CP1251
      - ruwin_ctrl-KOI8-R
      - ruwin_ctrl-UTF-8
      - se
      - se-dvorak
      - se-fi-ir209
      - se-fi-lat6
      - se-ir209
      - se-lat6
      - se-latin1
      - se-mac
      - se-nodeadkeys
      - se-smi
      - se-svdvorak
      - se-us_dvorak
      - sg
      - sg-latin1
      - sg-latin1-lk450
      - si
      - si-alternatequotes
      - si-us
      - sk
      - sk-bksl
      - sk-prog-qwerty
      - sk-prog-qwertz
      - sk-qwerty
      - sk-qwerty_bksl
      - sk-qwertz
      - slovene
      - sr-cy
      - sun-pl
      - sun-pl-altgraph
      - sundvorak
      - sunkeymap
      - sunt4-es
      - sunt4-fi-latin1
      - sunt4-no-latin1
      - sunt5-cz-us
      - sunt5-de-latin1
      - sunt5-es
      - sunt5-fi-latin1
      - sunt5-fr-latin1
      - sunt5-ru
      - sunt5-uk
      - sunt5-us-cz
      - sunt6-uk
      - sv-latin1
      - sy-ku
      - sy-ku_alt
      - sy-ku_f
      - tj_alt-UTF8
      - tm
      - tm-alt
      - tr
      - tr-alt
      - tr-crh
      - tr-crh_alt
      - tr-crh_f
      - tr-f
      - tr-intl
      - tr-ku
      - tr-ku_alt
      - tr-ku_f
      - tr-sundeadkeys
      - tr_f-latin5
      - tr_q-latin5
      - tralt
      - trf
      - trq
      - ttwin_alt-UTF-8
      - ttwin_cplk-UTF-8
      - ttwin_ct_sh-UTF-8
      - ttwin_ctrl-UTF-8
      - tw
      - tw-indigenous
      - tw-saisiyat
      - ua
      - ua-cp1251
      - ua-utf
      - ua-utf-ws
      - ua-ws
      - uk
      - unicode
      - us
      - us-acentos
      - us-acentos-old
      - us-alt-intl
      - us-altgr-intl
      - us-colemak
      - us-dvorak
      - us-dvorak-alt-intl
      - us-dvorak-classic
      - us-dvorak-intl
      - us-dvorak-l
      - us-dvorak-r
      - us-dvp
      - us-euro
      - us-hbs
      - us-intl
      - us-mac
      - us-olpc2
      - us-workman
      - us-workman-intl
      - uz-latin
      - wangbe
      - wangbe2
      - windowkeys
    $default: us
    $help: System keyboard configuration for systemd
  timezone:
    $type: select
    # Output from 'timedatectl list-timezones'
    $values:
      - Africa/Abidjan
      - Africa/Accra
      - Africa/Algiers
      - Africa/Bissau
      - Africa/Cairo
      - Africa/Casablanca
      - Africa/Ceuta
      - Africa/El_Aaiun
      - Africa/Johannesburg
      - Africa/Juba
      - Africa/Khartoum
      - Africa/Lagos
      - Africa/Maputo
      - Africa/Monrovia
      - Africa/Nairobi
      - Africa/Ndjamena
      - Africa/Sao_Tome
      - Africa/Tripoli
      - Africa/Tunis
      - Africa/Windhoek
      - America/Adak
      - America/Anchorage
      - America/Araguaina
      - America/Argentina/Buenos_Aires
      - America/Argentina/Catamarca
      - America/Argentina/Cordoba
      - America/Argentina/Jujuy
      - America/Argentina/La_Rioja
      - America/Argentina/Mendoza
      - America/Argentina/Rio_Gallegos
      - America/Argentina/Salta
      - America/Argentina/San_Juan
      - America/Argentina/San_Luis
      - America/Argentina/Tucuman
      - America/Argentina/Ushuaia
      - America/Asuncion
      - America/Atikokan
      - America/Bahia
      - America/Bahia_Banderas
      - America/Barbados
      - America/Belem
      - America/Belize
      - America/Blanc-Sablon
      - America/Boa_Vista
      - America/Bogota
      - America/Boise
      - America/Cambridge_Bay
      - America/Campo_Grande
      - America/Cancun
      - America/Caracas
      - America/Cayenne
      - America/Chicago
      - America/Chihuahua
      - America/Costa_Rica
      - America/Creston
      - America/Cuiaba
      - America/Curacao
      - America/Danmarkshavn
      - America/Dawson
      - America/Dawson_Creek
      - America/Denver
      - America/Detroit
      - America/Edmonton
      - America/Eirunepe
      - America/El_Salvador
      - America/Fort_Nelson
      - America/Fortaleza
      - America/Glace_Bay
      - America/Godthab
      - America/Goose_Bay
      - America/Grand_Turk
      - America/Guatemala
      - America/Guayaquil
      - America/Guyana
      - America/Halifax
      - America/Havana
      - America/Hermosillo
      - America/Indiana/Indianapolis
      - America/Indiana/Knox
      - America/Indiana/Marengo
      - America/Indiana/Petersburg
      - America/Indiana/Tell_City
      - America/Indiana/Vevay
      - America/Indiana/Vincennes
      - America/Indiana/Winamac
      - America/Inuvik
      - America/Iqaluit
      - America/Jamaica
      - America/Juneau
      - America/Kentucky/Louisville
      - America/Kentucky/Monticello
      - America/La_Paz
      - America/Lima
      - America/Los_Angeles
      - America/Maceio
      - America/Managua
      - America/Manaus
      - America/Martinique
      - America/Matamoros
      - America/Mazatlan
      - America/Menominee
      - America/Merida
      - America/Metlakatla
      - America/Mexico_City
      - America/Miquelon
      - America/Moncton
      - America/Monterrey
      - America/Montevideo
      - America/Nassau
      - America/New_York
      - America/Nipigon
      - America/Nome
      - America/Noronha
      - America/North_Dakota/Beulah
      - America/North_Dakota/Center
      - America/North_Dakota/New_Salem
      - America/Ojinaga
      - America/Panama
      - America/Pangnirtung
      - America/Paramaribo
      - America/Phoenix
      - America/Port-au-Prince
      - America/Port_of_Spain
      - America/Porto_Velho
      - America/Puerto_Rico
      - America/Punta_Arenas
      - America/Rainy_River
      - America/Rankin_Inlet
      - America/Recife
      - America/Regina
      - America/Resolute
      - America/Rio_Branco
      - America/Santarem
      - America/Santiago
      - America/Santo_Domingo
      - America/Sao_Paulo
      - America/Scoresbysund
      - America/Sitka
      - America/St_Johns
      - America/Swift_Current
      - America/Tegucigalpa
      - America/Thule
      - America/Thunder_Bay
      - America/Tijuana
      - America/Toronto
      - America/Vancouver
      - America/Whitehorse
      - America/Winnipeg
      - America/Yakutat
      - America/Yellowknife
      - Antarctica/Casey
      - Antarctica/Davis
      - Antarctica/DumontDUrville
      - Antarctica/Macquarie
      - Antarctica/Mawson
      - Antarctica/Palmer
      - Antarctica/Rothera
      - Antarctica/Syowa
      - Antarctica/Troll
      - Antarctica/Vostok
      - Asia/Almaty
      - Asia/Amman
      - Asia/Anadyr
      - Asia/Aqtau
      - Asia/Aqtobe
      - Asia/Ashgabat
      - Asia/Atyrau
      - Asia/Baghdad
      - Asia/Baku
      - Asia/Bangkok
      - Asia/Barnaul
      - Asia/Beirut
      - Asia/Bishkek
      - Asia/Brunei
      - Asia/Chita
      - Asia/Choibalsan
      - Asia/Colombo
      - Asia/Damascus
      - Asia/Dhaka
      - Asia/Dili
      - Asia/Dubai
      - Asia/Dushanbe
      - Asia/Famagusta
      - Asia/Gaza
      - Asia/Hebron
      - Asia/Ho_Chi_Minh
      - Asia/Hong_Kong
      - Asia/Hovd
      - Asia/Irkutsk
      - Asia/Jakarta
      - Asia/Jayapura
      - Asia/Jerusalem
      - Asia/Kabul
      - Asia/Kamchatka
      - Asia/Karachi
      - Asia/Kathmandu
      - Asia/Khandyga
      - Asia/Kolkata
      - Asia/Krasnoyarsk
      - Asia/Kuala_Lumpur
      - Asia/Kuching
      - Asia/Macau
      - Asia/Magadan
      - Asia/Makassar
      - Asia/Manila
      - Asia/Nicosia
      - Asia/Novokuznetsk
      - Asia/Novosibirsk
      - Asia/Omsk
      - Asia/Oral
      - Asia/Pontianak
      - Asia/Pyongyang
      - Asia/Qatar
      - Asia/Qostanay
      - Asia/Qyzylorda
      - Asia/Riyadh
      - Asia/Sakhalin
      - Asia/Samarkand
      - Asia/Seoul
      - Asia/Shanghai
      - Asia/Singapore
      - Asia/Srednekolymsk
      - Asia/Taipei
      - Asia/Tashkent
      - Asia/Tbilisi
      - Asia/Tehran
      - Asia/Thimphu
      - Asia/Tokyo
      - Asia/Tomsk
      - Asia/Ulaanbaatar
      - Asia/Urumqi
      - Asia/Ust-Nera
      - Asia/Vladivostok
      - Asia/Yakutsk
      - Asia/Yangon
      - Asia/Yekaterinburg
      - Asia/Yerevan
      - Atlantic/Azores
      - Atlantic/Bermuda
      - Atlantic/Canary
      - Atlantic/Cape_Verde
      - Atlantic/Faroe
      - Atlantic/Madeira
      - Atlantic/Reykjavik
      - Atlantic/South_Georgia
      - Atlantic/Stanley
      - Australia/Adelaide
      - Australia/Brisbane
      - Australia/Broken_Hill
      - Australia/Currie
      - Australia/Darwin
      - Australia/Eucla
      - Australia/Hobart
      - Australia/Lindeman
      - Australia/Lord_Howe
      - Australia/Melbourne
      - Australia/Perth
      - Australia/Sydney
      - Europe/Amsterdam
      - Europe/Andorra
      - Europe/Astrakhan
      - Europe/Athens
      - Europe/Belgrade
      - Europe/Berlin
      - Europe/Brussels
      - Europe/Bucharest
      - Europe/Budapest
      - Europe/Chisinau
      - Europe/Copenhagen
      - Europe/Dublin
      - Europe/Gibraltar
      - Europe/Helsinki
      - Europe/Istanbul
      - Europe/Kaliningrad
      - Europe/Kiev
      - Europe/Kirov
      - Europe/Lisbon
      - Europe/London
      - Europe/Luxembourg
      - Europe/Madrid
      - Europe/Malta
      - Europe/Minsk
      - Europe/Monaco
      - Europe/Moscow
      - Europe/Oslo
      - Europe/Paris
      - Europe/Prague
      - Europe/Riga
      - Europe/Rome
      - Europe/Samara
      - Europe/Saratov
      - Europe/Simferopol
      - Europe/Sofia
      - Europe/Stockholm
      - Europe/Tallinn
      - Europe/Tirane
      - Europe/Ulyanovsk
      - Europe/Uzhgorod
      - Europe/Vienna
      - Europe/Vilnius
      - Europe/Volgograd
      - Europe/Warsaw
      - Europe/Zaporozhye
      - Europe/Zurich
      - Indian/Chagos
      - Indian/Christmas
      - Indian/Cocos
      - Indian/Kerguelen
      - Indian/Mahe
      - Indian/Maldives
      - Indian/Mauritius
      - Indian/Reunion
      - Pacific/Apia
      - Pacific/Auckland
      - Pacific/Bougainville
      - Pacific/Chatham
      - Pacific/Chuuk
      - Pacific/Easter
      - Pacific/Efate
      - Pacific/Enderbury
      - Pacific/Fakaofo
      - Pacific/Fiji
      - Pacific/Funafuti
      - Pacific/Galapagos
      - Pacific/Gambier
      - Pacific/Guadalcanal
      - Pacific/Guam
      - Pacific/Honolulu
      - Pacific/Kiritimati
      - Pacific/Kosrae
      - Pacific/Kwajalein
      - Pacific/Majuro
      - Pacific/Marquesas
      - Pacific/Nauru
      - Pacific/Niue
      - Pacific/Norfolk
      - Pacific/Noumea
      - Pacific/Pago_Pago
      - Pacific/Palau
      - Pacific/Pitcairn
      - Pacific/Pohnpei
      - Pacific/Port_Moresby
      - Pacific/Rarotonga
      - Pacific/Tahiti
      - Pacific/Tarawa
      - Pacific/Tongatapu
      - Pacific/Wake
      - Pacific/Wallis
      - UTC
    $default: UTC
    $help: System timezone configuration for systemd
  hostname:
    $type: text
    $optional: yes
    $help: Leave it empty when DHCP provides a hostname
  machine_id:
    $type: text
    $optional: yes
    $help: If empty, systemd will generate one
  target:
    $type: text
    $optional: yes
    $default: multi-user.target
    $ifEmpty: multi-user.target
    $help: Valid systemd target unit
07070100000011000081A40000000000000000000000016130D1CF000020CC000000000000000000000000000000000000004400000000yomi-0.0.1+git.1630589391.4557cfd/metadata/02-form-yomi-storage.yml# Uyuni Form for the Yomi pillar data - Storage and filesystem
#
# Find mode pillar examples in /usr/share/yomi/pillar

partitions:
  $type: group
  $help: Partiton (Storage) Subsection for the Yomi Formula
  config:
    $type: group
    $help: Configuration options for the partitioner
    label:
      $type: select
      $values:
        # - aix
        # - amiga
        # - bsd
        # - dvh
        - gpt
        # - mac
        - msdos
        # - pc98
        # - sun
        # - loop
      $default: gpt
      $help: Default type of partition table for the device
    initial_gap:
      $type: text
      $optional: yes
      $default: 0
      $help: Initial gap (empty space) leaved before the first partition. Valid units are s, B, kB, MB, GB, TB, compact, cyl, chs, %, kiB, MiB, GiB, TiB
  devices:
    $type: edit-group
    $minItems: 1
    $itemName: Device ${i}
    $help: List of (physical or logical) devices
    $prototype:
      $type: group
      $key:
        $name: Device
        $type: text
        $placeholder: /dev/sda
        $help: Device name. Names like /dev/disk/by-id/... or /dev/disk/by-label/... can be used
      label:
        $type: select
        $values:
          # - aix
          # - amiga
          # - bsd
          # - dvh
          - gpt
          # - mac
          - msdos
          # - pc98
          # - sun
          # - loop
        $default: gpt
        $help: Type of partition table for the device
      initial_gap:
        $type: text
        $optional: yes
        $default: 1MB
        $help: Initial gap (empty space) leaved before the first partition. Valid units are s, B, kB, MB, GB, TB, compact, cyl, chs, %, kiB, MiB, GiB, TiB
      partitions:
        $type: edit-group
        $minItems: 0
        $itemName: Partition ${i}
        $help: List of partitions for the device
        $prototype:
          number:
            $name: Partition Number
            $type: number
            $optional: yes
            # $default: ${i}
            $help: Will be appended to the device name (E.g. /dev/sda1 for devide /dev/sda and partition number 1)
          id:
            $name: Partition Name
            $type: text
            $optional: yes
            $placeholder: /dev/sda1
            $help: "Full name of the partition. For example, valid ids can be /dev/sda1, /dev/md0p1, etc. Is optional, as the name can be deduced from 'Partition Number'"
          size:
            $name: Partition Size
            $type: text
            $placeholder: "Parted units or 'rest': 500MB"
            $help: "Valid units are s, B, kB, MB, GB, TB, compact, cyl, chs, %, kiB, MiB, GiB, TiB. Use 'rest' to indicate the rest of the free space"
          type:
            $name: Partition Type
            $type: select
            $values:
              - swap
              - linux
              - boot
              - efi
              - lvm
              - raid
            $default: linux
            $help: Indicate the expected use of the partition

lvm:
  $type: edit-group
  $minItems: 0
  $itemName: Volume Group ${i}
  $help: LVM (Storage) Subsection for the Yomi Formula
  $prototype:
    $type: group
    $key:
      $name: Volume Group Name
      $type: text
      $help: Name of the logical volume
    devices:
      $type: edit-group
      $minItems: 1
      $itemName: Device or Partition ${i}
      $help: List of devices or partitions that belong to the volume
      $prototype:
        name:
          $name: Device or Partition
          $type: text
          $placeholder: /dev/sda1
          $help: Device or Partition with type LVM
        bootloaderareasize:
          $name: Boot Loader Area Size
          $type: text
          $optional: yes
          $help: "Directly passed to 'pvcreate'"
        dataaligment:
          $name: Data Aligment
          $type: text
          $optional: yes
          $help: "Directly passed to 'pvcreate'"
        dataalignmentoffset:
          $name: Data Aligment Offset
          $type: text
          $optional: yes
          $help: "Directly passed to 'pvcreate'"
    volumes:
      $type: edit-group
      $minItems: 1
      $itemName: Logical Volume ${i}
      $help: List of logical volumes
      $prototype:
        name:
          $name: Logical Volume Name
          $type: text
          $placeholder: root
          $help: Name of the logical volume
        extents:
          $type: text
          $optional: yes
          $placeholder: 100%FREE
          $help: "Directly passed to 'lvcreate'"
        size:
          $type: text
          $optional: yes
          $placeholder: 1024M
          $help: "Directly passed to 'lvcreate'"
        stripes:
          $type: number
          $optional: yes
          $help: "Directly passed to 'lvcreate'"
        stripesize:
          $name: Stripe Size
          $type: number
          $optional: yes
          $help: "Directly passed to 'lvcreate'"
        # There are more options that we can implement for LVM
    clustered:
      $type: select
      $optional: yes
      $values:
        - "y"
        - "n"
      $default: "n"
      $help: "Directly passed to 'vgcreate'"
    maxlogicalvolumes:
      $name: Max Logical Volumes
      $type: number
      $optional: yes
      $help: "Directly passed to 'vgcreate'"
    maxphysicalvolumes:
      $name: Max Physical Volumes
      $type: number
      $optional: yes
      $help: "Directly passed to 'vgcreate'"
    physicalextentsize:
      $name: Physical Extent Size
      $type: text
      $optional: yes
      $help: "Directly passed to 'vgcreate'"

raid:
  $type: edit-group
  $minItems: 0
  $itemName: RAID ${i}
  $help: RAID (Storage) Subsection for the Yomi Formula
  $prototype:
    $type: group
    $key:
      $name: RAID Device Name
      $type: text
      $placeholder: /dev/md0
      $help: Name of the RAID device
    level:
      $type: select
      $values:
        - linear
        - raid0
        - raid1
        - mirror
        - raid4
        - raid5
        - raid6
        - raid10
        - multipath
        - faulty
        - container
      $default: raid1
      $help: RAID type
    devices:
      $type: edit-group
      $minItems: 1
      $itemName: Device or Partition ${i}
      $help: List of devices or partitions that belong to the RAID
      $prototype:
        name:
          $name: Device or Partition
          $type: text
          $placeholder: /dev/sda1
          $help: Device or partition with type RAID
    metadata:
      $type: select
      $values:
        - 0
        - 0.9
        - 1
        - 1.1
        - 1.2
        - default
        - ddm
        - imsm
      $default: default
      $help: RAID metadata version
    raid-devices:
      $type: number
      $optional: yes
      $help: Number of active devices in array
    spare-devices:
      $type: number
      $optional: yes
      $help: Number of spare (eXtra) devices in initial array

filesystems:
  $type: edit-group
  $minItems: 1
  $itemName: Filesystem ${i}
  $help: File System (Storage) Subsection for the Yomi Formula
  $prototype:
    $type: group
    $key:
      $name: Partition
      $type: text
      $placeholder: /dev/sda1
      $help: Partition for the filesystem
    filesystem:
      $type: select
      $values:
        - swap
        - btrfs
        - xfs
        - ext2
        - ext3
        - ext4
        - vfat
      $default: ext4
      $help: Filesystem for the device
    mountpoint:
      $type: text
      $placeholder: /
      $visibleIf: .filesystem != swap
      $help: Mount point of the partition
    fat:
      $name: FAT Type
      $type: select
      $values:
        - 12
        - 16
        - 32
      $visibleIf: .filesystem == vfat
      $help: Type of FAT
    subvolumes:
      $name: BtrFS Subvolumes
      $type: group
      $visibleIf: .filesystem == btrfs
      $help: List of Btrfs subvolumes
      prefix:
        $type: text
        $placeholder: '@'
        $help: Btrfs subvolume prefix
      subvolume:
        $type: edit-group
        $minItems: 0
        $itemName: Subvolume ${i}
        $visibleIf: .prefix != ""
        $help: Subvolume description
        $prototype:
          path:
            $type: text
            $placeholder: /root
            $help: Path for the subvolume
          copy_on_write:
            $type: boolean
            $default: yes
            $help: CoW flag
07070100000012000081A40000000000000000000000016130D1CF00000480000000000000000000000000000000000000004700000000yomi-0.0.1+git.1630589391.4557cfd/metadata/03-form-yomi-bootloader.yml# Uyuni Form for the Yomi pillar data - Bootloader
#
# Find mode pillar examples in /usr/share/yomi/pillar

bootloader:
  $type: group
  $help: Bootloader Section for the Yomi Formula
  device:
    $type: text
    $placeholder: /dev/sda
    $required: yes
    $help: Device where the GRUB2 will be installed
  timeout:
    $type: number
    $optional: yes
    $default: 8
    $help: Value for the GRUB_TIMEOUT parameter
  kernel:
    $type: text
    $optional: yes
    $default: splash=silent quiet
    $help: Line assigned to the GRUB_CMDLINE_LINUX_DEFAULT parameter
  terminal:
    $type: text
    $optional: yes
    $default: gfxterm
    $help: Value for the GRUB_TERMINAL parameter
  serial_command:
    $type: text
    $optional: yes
    $help: Value for the GRUB_SERIAL_COMMAND parameter
  gfxmode:
    $type: text
    $optional: yes
    $default: auto
    $help: Value for the GRUB_GFXMODE parameter
  theme:
    $type: boolean
    $default: no
    $help: Install and configure grub2-branding package
  disable_os_prober:
    $name: Disable OS Prober
    $type: boolean
    $default: no
    $help: Value for the GRUB_DISABLE_OS_PROBER parameter
07070100000013000081A40000000000000000000000016130D1CF0000108F000000000000000000000000000000000000004500000000yomi-0.0.1+git.1630589391.4557cfd/metadata/04-form-yomi-software.yml# Uyuni Form for the Yomi pillar data - Software
#
# Find mode pillar examples in /usr/share/yomi/pillar

software:
  $type: group
  $help: Software Section for the Yomi Formula
  config:
    $name: Configuration
    $type: group
    $help: Local configuration for the software section
    minimal:
      $type: boolean
      $default: no
      $help: Exclude recommended, documentation and multi-version packages
    transfer:
      $type: boolean
      $default: no
      $help: Transfer the current repositories from the media
    verify:
      $type: boolean
      $default: yes
      $help: Verify the package key when installing
    enabled:
      $type: boolean
      $default: yes
      $help: Enable the repository
    refresh:
      $type: boolean
      $default: yes
      $help: Enable auto-refresh of the repository
    gpgcheck:
      $type: boolean
      $default: yes
      $help: Enable the GPG check for the repositories
    # gpgautoimport:
    #   $type: boolean
    #   $default: yes
    #   $help: Automatically trust and import public GPG key
    cache:
      $type: boolean
      $default: no
      $help: Keep the RPM packages in the system
  repositories:
    $type: edit-group
    $minItems: 0
    $itemName: Repository ${i}
    $help: List of registered repositories
    $prototype:
      $type: group
      $key:
        $name: Alias
        $type: text
        $placeholder: repo-oss
        $help: Short name or alias of the repository
      url:
        $type: url
        $placeholder: http://download.opensuse.org/tumbleweed/repo/oss
        $required: yes
        $help: URL of the repository
      name:
        $type: text
        $optional: yes
        $help: Descriptive name for the repository
      enabled:
        $type: boolean
        $default: yes
        $help: Enable the repository
      refresh:
        $type: boolean
        $default: yes
        $help: Enable auto-refresh of the repository
      priority:
        $type: number
        $help: Set priority of the repository
      gpgcheck:
        $type: boolean
        $default: yes
        $help: Enable the GPG check for the repositories
      # gpgautoimport:
      #   $type: boolean
      #   $default: yes
      #   $help: Automatically trust and import public GPG key
      cache:
        $type: boolean
        $default: no
        $help: Keep the RPM packages in the system
  packages:
    $type: edit-group
    $minItems: 0
    $itemName: Package ${i}
    $help: List of patterns or packages to install
    $prototype:
        $name: Package
        $type: text
        $help: "You can install patterns using the 'pattern:' prefix"
  image:
    $type: group
    $optional: yes
    $help: Image ISO used to dump in the hard disk
    url:
      $name: Image URL
      $type: url
      $help: URL from where download the image
    md5:
      $type: text
      $optional: yes
      $help: MD5 of the image, used for validation

suseconnect:
  $name: SUSEConnect
  $type: group
  $help: SUSEConnect Section for the Yomi Formula
  config:
    $type: group
    $help: Local configuration for the section
    regcode:
      $name: Registration Code
      $type: text
      $help: Subscription registration code for the product
    email:
      $type: text
      $optional: yes
      $help: Email address for product registration
    url:
      $type: url
      $optional: yes
      $placeholder: https://scc.suse.com
      $help: URL of registration server
    version:
      $type: text
      $optional: yes
      $help: Version part of the product name
    arch:
      $name: Architecture
      $type: text
      $optional: yes
      $help: Architecture part of the product name
  products:
    $type: edit-group
    $minItems: 0
    $itemName: Product ${i}
    $help: List of products to register
    $prototype:
      $type: text
      $placeholder: <name>/<version>/<architecture>
      $help: The expected format is <name>/<version>/<architecture>
  packages:
    $type: edit-group
    $minItems: 0
    $itemName: Package ${i}
    $help: List of patterns or packages to install from the products
    $prototype:
        $name: Package
        $type: text
        $help: "You can install patterns using the 'pattern:' prefix"
07070100000014000081A40000000000000000000000016130D1CF00000362000000000000000000000000000000000000004500000000yomi-0.0.1+git.1630589391.4557cfd/metadata/05-form-yomi-services.yml# Uyuni Form for the Yomi pillar data - Services
#
# Find mode pillar examples in /usr/share/yomi/pillar

salt-minion:
  $type: group
  $help: Salt Minion Section for the Yomi Formula
  config:
    $name: Install salt-minion
    $type: boolean
    $default: yes
    $help: (Provisional) Install and configure a salt-minion service

services:
  $type: group
  $help: Service Section for the Yomi Formula
  enabled:
    $type: edit-group
    $minItems: 0
    $itemName: Service ${i}
    $help: List of enabled services
    $prototype:
      $key:
        $type: text
        $name: Service
        $help: Name of the service to enable
  disabled:
    $type: edit-group
    $minItems: 0
    $itemName: Service ${i}
    $help: List of disabled services
    $prototype:
      $key:
        $type: text
        $name: Service
        $help: Name of the service to disable
07070100000015000081A40000000000000000000000016130D1CF000002CC000000000000000000000000000000000000004200000000yomi-0.0.1+git.1630589391.4557cfd/metadata/06-form-yomi-users.yml# Uyuni Form for the Yomi pillar data - Users
#
# Find mode pillar examples in /usr/share/yomi/pillar

users:
  $type: edit-group
  $minItems: 1
  $itemName: User ${i}
  $help: List of users of the system
  $prototype:
    username:
      $type: text
    password:
      $name: Password Hash
      $type: text
      $help: "You can generate a hash with 'openssl passwd -1 -salt <salt> <password>'"
    certificates:
      $type: edit-group
      $minItems: 0
      $itemName: Certificate ${i}
      $prototype:
        $key:
          $name: Certificate
          $type: text
          $help: "Will be added to .ssh/authorized_keys. Use only the encoded key (remove the 'ssh-rsa' prefix and the 'user@host' suffix)"
07070100000016000081A40000000000000000000000016130D1CF0000003E000000000000000000000000000000000000003800000000yomi-0.0.1+git.1630589391.4557cfd/metadata/metadata.ymldescription:
  Yet one more installer
group: installer
#AFTER
07070100000017000041ED0000000000000000000000026130D1CF00000000000000000000000000000000000000000000002900000000yomi-0.0.1+git.1630589391.4557cfd/pillar070701000000180000A1FF000000000000000000000001611CDAFF00000013000000000000000000000000000000000000003C00000000yomi-0.0.1+git.1630589391.4557cfd/pillar/_storage.sls.image_storage.sls.single070701000000190000A1FF000000000000000000000001611CDAFF00000014000000000000000000000000000000000000003C00000000yomi-0.0.1+git.1630589391.4557cfd/pillar/_storage.sls.kubic_storage.sls.microos0707010000001A000081A40000000000000000000000016130D1CF000009C6000000000000000000000000000000000000003A00000000yomi-0.0.1+git.1630589391.4557cfd/pillar/_storage.sls.lvm#
# Storage section for a LVM with three devices deployment
#

partitions:
  config:
    label: {{ partition }}
    # Same gap for all devices
    initial_gap: 1MB
  devices:
    /dev/{{ device_type }}a:
      partitions:
{% set next_partition = 1 %}
{% if not efi and partition == 'gpt' %}
        - number: {{ next_partition }}
          size: 1MB
          type: boot
  {% set next_partition = next_partition + 1 %}
{% endif %}
{% if efi and partition == 'gpt' %}
        - number: {{ next_partition }}
          size: 256MB
          type: efi
  {% set next_partition = next_partition + 1 %}
{% endif %}
        - number: {{ next_partition }}
          size: rest
          type: lvm
    /dev/{{ device_type }}b:
      partitions:
        - number: 1
          size: rest
          type: lvm
    /dev/{{ device_type }}c:
      partitions:
        - number: 1
          size: rest
          type: lvm

lvm:
  system:
    devices:
      - /dev/{{ device_type }}a{{ 2 if efi else 1 }}
      - /dev/{{ device_type }}b1
      - name: /dev/{{ device_type }}c1
        dataalignmentoffset: 7s
    clustered: 'n'
    volumes:
{% if swap %}
      - name: swap
        size: 1024M
{% endif %}
      - name: root
{% if home_filesystem %}
        size: 16384M
{% else %}
        extents: 100%FREE
{% endif %}
{% if home_filesystem %}
      - name: home
        extents: 100%FREE
{% endif %}

filesystems:
{% set next_partition = 1 %}
{% if not efi and partition == 'gpt' %}
  {% set next_partition = next_partition + 1 %}
{% endif %}
{% if efi and partition == 'gpt' %}
  /dev/{{ device_type }}a{{ next_partition }}:
    filesystem: vfat
    mountpoint: /boot/efi
  {% set next_partition = next_partition + 1 %}
{% endif %}
  /dev/system/swap:
    filesystem: swap
  /dev/system/root:
    filesystem: {{ root_filesystem }}
    mountpoint: /
{% if root_filesystem == 'btrfs' %}
    subvolumes:
      prefix: '@'
      subvolume:
  {% if not home_filesystem %}
        - path: home
  {% endif %}
        - path: opt
        - path: root
        - path: srv
        - path: tmp
        - path: usr/local
        - path: var
          copy_on_write: no
    {% if arch == 'aarch64' %}
        - path: boot/grub2/arm64-efi
    {% else %}
        - path: boot/grub2/i386-pc
        - path: boot/grub2/x86_64-efi
    {% endif %}
{% endif %}
{% if home_filesystem %}
  /dev/system/home:
    filesystem: {{ home_filesystem }}
    mountpoint: /home
{% endif %}

bootloader:
  device: /dev/{{ device_type }}a
  theme: yes
0707010000001B000081A40000000000000000000000016130D1CF00000926000000000000000000000000000000000000003E00000000yomi-0.0.1+git.1630589391.4557cfd/pillar/_storage.sls.microos#
# Storage section for a microos deployment in a single device
#

{% if swap %}
  {{ raise ('Do not define a SWAP partition for MicoOS') }}
{% endif %}
{% if home_filesystem %}
  {{ raise ('Do not define a separate home partition for MicoOS') }}
{% endif %}
{% if root_filesystem != 'btrfs' %}
  {{ raise ('File system must be BtrFS for MicoOS') }}
{% endif %}
{% if not snapper %}
  {{ raise ('Snapper is required for MicoOS') }}
{% endif %}

partitions:
  config:
    label: {{ partition }}
  devices:
    /dev/{{ device_type }}a:
      initial_gap: 1MB
      partitions:
{% set next_partition = 1 %}
{% if not efi and partition == 'gpt' %}
        - number: {{ next_partition }}
          size: 1MB
          type: boot
  {% set next_partition = next_partition + 1 %}
{% endif %}
{% if efi and partition == 'gpt' %}
        - number: {{ next_partition }}
          size: 256MB
          type: efi
  {% set next_partition = next_partition + 1 %}
{% endif %}
        - number: {{ next_partition }}
          size: 16384MB
          type: linux
{% set next_partition = next_partition + 1 %}
        - number: {{ next_partition }}
          size: rest
          type: linux
{% set next_partition = next_partition + 1 %}

filesystems:
{% set next_partition = 1 %}
{% if not efi and partition == 'gpt' %}
  {% set next_partition = next_partition + 1 %}
{% endif %}
{% if efi and partition == 'gpt' %}
  /dev/{{ device_type }}a{{ next_partition }}:
    filesystem: vfat
    mountpoint: /boot/efi
  {% set next_partition = next_partition + 1 %}
{% endif %}
  /dev/{{ device_type }}a{{ next_partition }}:
    filesystem: {{ root_filesystem }}
    mountpoint: /
    options: [ro]
    subvolumes:
      prefix: '@'
      subvolume:
        - path: root
        - path: home
        - path: opt
        - path: srv
        - path: boot/writable
        - path: usr/local
    {% if arch == 'aarch64' %}
        - path: boot/grub2/arm64-efi
    {% else %}
        - path: boot/grub2/i386-pc
        - path: boot/grub2/x86_64-efi
    {% endif %}
{% set next_partition = next_partition + 1 %}
  /dev/{{ device_type }}a{{ next_partition }}:
    filesystem: {{ root_filesystem }}
    mountpoint: /var
{% set next_partition = next_partition + 1 %}

bootloader:
  device: /dev/{{ device_type }}a
  kernel: swapaccount=1
  disable_os_prober: yes
  theme: yes
0707010000001C000081A40000000000000000000000016130D1CF00000B49000000000000000000000000000000000000003C00000000yomi-0.0.1+git.1630589391.4557cfd/pillar/_storage.sls.raid1#
# Storage section for a RAID 1 with three devices deployment
#

partitions:
  config:
    label: gpt
    # Same gap for all devices
    initial_gap: 1MB
  devices:
    /dev/{{ device_type }}a:
      partitions:
        - number: 1
          size: rest
          type: raid
    /dev/{{ device_type }}b:
      partitions:
        - number: 1
          size: rest
          type: raid
    /dev/{{ device_type }}c:
      partitions:
        - number: 1
          size: rest
          type: raid
    /dev/md0:
      partitions:
{% set next_partition = 1 %}
{% if not efi and partition == 'gpt' %}
        - number: {{ next_partition }}
          size: 1MB
          type: boot
  {% set next_partition = next_partition + 1 %}
{% endif %}
{% if efi and partition == 'gpt' %}
        - number: {{ next_partition }}
          size: 256MB
          type: efi
  {% set next_partition = next_partition + 1 %}
{% endif %}
{% if swap %}
        - number: {{ next_partition }}
          size: 1024MB
          type: swap
  {% set next_partition = next_partition + 1 %}
{% endif %}
        - number: {{ next_partition }}
          size: {{ 'rest' if not home_filesystem else '16384MB' }}
          type: linux
{% set next_partition = next_partition + 1 %}
{% if home_filesystem %}
        - number: {{ next_partition }}
          size: rest
          type: linux
  {% set next_partition = next_partition + 1 %}
{% endif %}

raid:
  /dev/md0:
    level: 1
    devices:
      - /dev/{{ device_type }}a1
      - /dev/{{ device_type }}b1
      - /dev/{{ device_type }}c1
    spare-devices: 1
    metadata: 1.0

filesystems:
{% set next_partition = 1 %}
{% if not efi and partition == 'gpt' %}
  {% set next_partition = next_partition + 1 %}
{% endif %}
{% if efi and partition == 'gpt' %}
  /dev/md0p{{ next_partition }}:
    filesystem: vfat
    mountpoint: /boot/efi
  {% set next_partition = next_partition + 1 %}
{% endif %}
{% if swap %}
  /dev/md0p{{ next_partition }}:
    filesystem: swap
  {% set next_partition = next_partition + 1 %}
{% endif %}
  /dev/md0p{{ next_partition }}:
    filesystem: {{ root_filesystem }}
    mountpoint: /
{% if root_filesystem == 'btrfs' %}
    subvolumes:
      prefix: '@'
      subvolume:
  {% if not home_filesystem %}
        - path: home
  {% endif %}
        - path: opt
        - path: root
        - path: srv
        - path: tmp
        - path: usr/local
        - path: var
          copy_on_write: no
    {% if arch == 'aarch64' %}
        - path: boot/grub2/arm64-efi
    {% else %}
        - path: boot/grub2/i386-pc
        - path: boot/grub2/x86_64-efi
    {% endif %}
{% endif %}
{% set next_partition = next_partition + 1 %}
{% if home_filesystem %}
  /dev/md0p{{ next_partition }}:
    filesystem: {{ home_filesystem }}
    mountpoint: /home
  {% set next_partition = next_partition + 1 %}
{% endif %}

bootloader:
  device: /dev/md0
  theme: yes
0707010000001D000081A40000000000000000000000016130D1CF00000984000000000000000000000000000000000000003D00000000yomi-0.0.1+git.1630589391.4557cfd/pillar/_storage.sls.single#
# Storage section for a single device deployment
#

partitions:
  config:
    label: {{ partition }}
  devices:
    /dev/{{ device_type }}a:
      initial_gap: 1MB
      partitions:
{% set next_partition = 1 %}
{% if not efi and partition == 'gpt' %}
        - number: {{ next_partition }}
          size: 1MB
          type: boot
  {% set next_partition = next_partition + 1 %}
{% endif %}
{% if efi and partition == 'gpt' %}
        - number: {{ next_partition }}
          size: 256MB
          type: efi
  {% set next_partition = next_partition + 1 %}
{% endif %}
{% if swap %}
        - number: {{ next_partition }}
          size: 1024MB
          type: swap
  {% set next_partition = next_partition + 1 %}
{% endif %}
        - number: {{ next_partition }}
          size: {{ 'rest' if not home_filesystem else '16384MB' }}
          type: linux
{% set next_partition = next_partition + 1 %}
{% if home_filesystem %}
        - number: {{ next_partition }}
          size: rest
          type: linux
  {% set next_partition = next_partition + 1 %}
{% endif %}

filesystems:
{% set next_partition = 1 %}
{% if not efi and partition == 'gpt' %}
  {% set next_partition = next_partition + 1 %}
{% endif %}
{% if efi and partition == 'gpt' %}
  /dev/{{ device_type }}a{{ next_partition }}:
    filesystem: vfat
    mountpoint: /boot/efi
  {% set next_partition = next_partition + 1 %}
{% endif %}
{% if swap %}
  /dev/{{ device_type }}a{{ next_partition }}:
    filesystem: swap
  {% set next_partition = next_partition + 1 %}
{% endif %}
  /dev/{{ device_type }}a{{ next_partition }}:
    filesystem: {{ root_filesystem }}
    mountpoint: /
{% if root_filesystem == 'btrfs' %}
    subvolumes:
      prefix: '@'
      subvolume:
  {% if not home_filesystem %}
        - path: home
  {% endif %}
        - path: opt
        - path: root
        - path: srv
        - path: tmp
        - path: usr/local
        - path: var
          copy_on_write: no
    {% if arch == 'aarch64' %}
        - path: boot/grub2/arm64-efi
    {% else %}
        - path: boot/grub2/i386-pc
        - path: boot/grub2/x86_64-efi
    {% endif %}
{% endif %}
{% set next_partition = next_partition + 1 %}
{% if home_filesystem %}
  /dev/{{ device_type }}a{{ next_partition }}:
    filesystem: {{ home_filesystem }}
    mountpoint: /home
  {% set next_partition = next_partition + 1 %}
{% endif %}

bootloader:
  device: /dev/{{ device_type }}a
  theme: yes
0707010000001E0000A1FF000000000000000000000001611CDAFF00000013000000000000000000000000000000000000003B00000000yomi-0.0.1+git.1630589391.4557cfd/pillar/_storage.sls.sles_storage.sls.single0707010000001F000081A40000000000000000000000016130D1CF00001006000000000000000000000000000000000000003700000000yomi-0.0.1+git.1630589391.4557cfd/pillar/installer.sls# Meta pillar for testing Yomi
#
# There are some parameters that can be configured and adapted to
# launch a basic Yomi installation:
#
#   * efi = {True, False}
#   * partition = {'msdos', 'gpt'}
#   * device_type = {'sd', 'hd', 'vd'}
#   * root_filesystem = {'ext{2, 3, 4}', 'btrfs'}
#   * home_filesystem = {'ext{2, 3, 4}', 'xfs', False}
#   * snapper = {True, False}
#   * swap = {True, False}
#   * mode = {'single', 'lvm', 'raid{0, 1, 4, 5, 6, 10}', 'microos',
#             'kubic', 'image', 'sles'}
#   * network = {'auto', 'eth0', 'ens3', ... }
#
# This meta-pillar can be used as a template for new installers. This
# template is expected to be adapted for production systems, as was
# designed for CI / QA and development.

# We cannot access to grains['efi'] from the pillar, as this is not
# yet synchronized
{% set efi = True %}
{% set partition = 'gpt' %}
{% set device_type = 'sd' %}
{% set root_filesystem = 'btrfs' %}
{% set home_filesystem = False %}
{% set snapper = True %}
{% set swap = False %}
{% set mode = 'microos' %}
{% set network = 'auto' %}

{% set arch = grains['cpuarch'] %}

config:
  events: no
  reboot: no
{% if snapper and root_filesystem == 'btrfs' %}
  snapper: yes
{% endif %}
  locale: en_US.UTF-8
  keymap: us
  timezone: UTC
  hostname: node

{% include "_storage.sls.%s" % mode %}

{% if mode == 'sles' %}
suseconnect:
  config:
    regcode: INTERNAL-USE-ONLY-f7fe-e9d9
    version: '15.2'
    arch: {{ arch }}
  products:
    - sle-module-basesystem
    - sle-module-server-applications
{% endif %}

software:
  config:
    minimal: {{ 'yes' if mode in ('microos', 'kubic') else 'no' }}
    enabled: yes
    autorefresh: yes
    gpgcheck: yes
  repositories:
{% if mode == 'sles' %}
    SUSE_SLE-15_GA: "http://download.suse.de/ibs/SUSE:/SLE-15:/GA/standard/"
    SUSE_SLE-15_Update: "http://download.suse.de/ibs/SUSE:/SLE-15:/Update/standard/"
    SUSE_SLE-15-SP1_GA: "http://download.suse.de/ibs/SUSE:/SLE-15-SP1:/GA/standard/"
    SUSE_SLE-15-SP1_Update: "http://download.suse.de/ibs/SUSE:/SLE-15-SP1:/Update/standard/"
    SUSE_SLE-15-SP2_GA: "http://download.suse.de/ibs/SUSE:/SLE-15-SP2:/GA/standard/"
    SUSE_SLE-15-SP2_Update: "http://download.suse.de/ibs/SUSE:/SLE-15-SP2:/Update/standard/"
{% elif arch == 'aarch64' %}
    repo-oss: "http://download.opensuse.org/ports/aarch64/tumbleweed/repo/oss/"
{% else %}
    repo-oss:
      url: "http://download.opensuse.org/tumbleweed/repo/oss/"
      name: openSUSE-Tumbleweed
{% endif %}
{% if mode == 'image' %}
  image:
    url: tftp://10.0.3.1/openSUSE-Tumbleweed-Yomi{{ arch }}-1.0.0.xz
    md5:
{% else %}
  packages:
  {% if mode == 'microos' %}
    - pattern:microos_base
    - pattern:microos_defaults
    - pattern:microos_hardware
  {% elif mode == 'kubic' %}
    - pattern:microos_base
    - pattern:microos_defaults
    - pattern:microos_hardware
    - pattern:microos_apparmor
    - pattern:kubic_worker
  {% elif mode == 'sles' %}
    - product:SLES
    - pattern:base
    - pattern:enhanced_base
    - pattern:yast2_basis
    - pattern:x11_yast
    - pattern:x11
    - pattern:gnome_basic
  {% else %}
    - pattern:enhanced_base
    - glibc-locale
  {% endif %}
    - kernel-default
{% endif %}

salt-minion:
  config: yes

services:
  enabled:
{% if mode == 'kubic' %}
    - crio
    - kubelet
{% endif %}
    - salt-minion

{% if network != 'auto' %}
networks:
  - interface: {{ network }}
{% endif %}

users:
  - username: root
    # Set the password as 'linux'. Do not do that in production
    password: "$1$wYJUgpM5$RXMMeASDc035eX.NbYWFl0"
    # Personal certificate, without the type prefix nor the host
    # suffix
    certificates:
      - "AAAAB3NzaC1yc2EAAAADAQABAAABAQDdP6oez825gnOLVZu70KqJXpqL4fGf\
        aFNk87GSk3xLRjixGtr013+hcN03ZRKU0/2S7J0T/dICc2dhG9xAqa/A31Qac\
        hQeg2RhPxM2SL+wgzx0geDmf6XDhhe8reos5jgzw6Pq59gyWfurlZaMEZAoOY\
        kfNb5OG4vQQN8Z7hldx+DBANPbylApurVz6h5vvRrkPfuRVN5ZxOkI+LeWhpo\
        vX5XK3eTjetAwWEro6AAXpGoQQQDjSOoYHCUmXzcZkmIWEubCZvAI4RZ+XCZs\
        +wTeO2RIRsunqP8J+XW4cZ28RZBc9K4I1BV8C6wBxN328LRQcilzw+Me+Lfre\
        eDPglqx"
07070100000020000081A40000000000000000000000016130D1CF0000001D000000000000000000000000000000000000003100000000yomi-0.0.1+git.1630589391.4557cfd/pillar/top.slsbase:
  '*':
    - installer
07070100000021000041ED0000000000000000000000066130D1CF00000000000000000000000000000000000000000000002700000000yomi-0.0.1+git.1630589391.4557cfd/salt07070100000022000041ED0000000000000000000000026130D1CF00000000000000000000000000000000000000000000003000000000yomi-0.0.1+git.1630589391.4557cfd/salt/_modules07070100000023000081A40000000000000000000000016130D1CF00002BE7000000000000000000000000000000000000003B00000000yomi-0.0.1+git.1630589391.4557cfd/salt/_modules/devices.py# -*- coding: utf-8 -*-
#
# Author: Alberto Planas <aplanas@suse.com>
#
# Copyright 2019 SUSE LLC.
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.

"""
:maintainer:    Alberto Planas <aplanas@suse.com>
:maturity:      new
:depends:       None
:platform:      Linux
"""
from __future__ import absolute_import, print_function, unicode_literals
import logging


LOG = logging.getLogger(__name__)

__virtualname__ = "devices"

__func_alias__ = {
    "filter_": "filter",
}

# Define not exported variables from Salt, so this can be imported as
# a normal module
try:
    __grains__
    __salt__
except NameError:
    __grains__ = {}
    __salt__ = {}


def _udev(udev_info, key):
    """
    Return the value for a udev key.

    The `key` parameter is a lower case text joined by dots. For
    example, 'e.id_bus' will represent the key for
    `udev_info['E']['ID_BUS']`.

    """
    k, _, r = key.partition(".")
    if not k:
        return udev_info
    if not isinstance(udev_info, dict):
        return "n/a"
    if not r:
        return udev_info.get(k.upper(), "n/a")
    return _udev(udev_info.get(k.upper(), {}), r)


def _match(udev_info, match_info):
    """
    Check if `udev_info` match the information from `match_info`.
    """
    res = True
    for key, value in match_info.items():
        udev_value = _udev(udev_info, key)
        if isinstance(udev_value, dict):
            # If is a dict we probably make a mistake in key from
            # match_info, as is not accessing a final value
            LOG.warning(
                "The key %s for the udev information "
                "dictionary is not a leaf element",
                key,
            )
            continue

        # Converting both values to sets make easy to see if there is
        # a coincidence between both values
        value = set(value) if isinstance(value, list) else set([value])
        udev_value = (
            set(udev_value) if isinstance(udev_value, list) else set([udev_value])
        )
        res = res and (value & udev_value)
    return res


def filter_(udev_in=None, udev_ex=None):
    """
    Returns a list of devices, filtered under udev keys.

    udev_in
        A dictionary of key:values that are expected in the device
        udev information

    udev_ex
        A dictionary of key:values that are not expected in the device
        udev information (excluded)

    The key is a lower case string, joined by dots, that represent a
    path in the udev information dictionary. For example, 'e.id_bus'
    will represent the udev entry `udev['E']['ID_BUS']

    If the udev entry is a list, the algorithm will check that at
    least one item match one item of the value of the parameters.

    Returns list of devices that match `udev_in` and do not match
    `udev_ex`.

    CLI Example:

    .. code-block:: bash

       salt '*' devices.filter udev_in='{"e.id_bus": "ata"}'

    """

    udev_in = udev_in if udev_in else {}
    udev_ex = udev_ex if udev_ex else {}

    all_devices = __grains__["disks"]

    # Get the udev information only one time
    udev_info = {d: __salt__["udev.info"](d) for d in all_devices}

    devices_udev_key_in = {d for d in all_devices if _match(udev_info[d], udev_in)}
    devices_udev_key_ex = {
        d for d in all_devices if _match(udev_info[d], udev_ex) if udev_ex
    }

    return sorted(devices_udev_key_in - devices_udev_key_ex)


def wipe(device):
    """
    Remove all the partitions in the device.

    device
        Device name, for example /dev/sda

    Remove all the partitions, labels and flags from the device.

    CLI Example:

    .. code-block:: bash

       salt '*' devices.wipe /dev/sda

    """

    partitions = __salt__["partition.list"](device).get("partitions", [])
    for partition in partitions:
        # Remove filesystem information the the partition
        __salt__["disk.wipe"]("{}{}".format(device, partition))
        __salt__["partition.rm"](device, partition)

    # Remove the MBR information
    __salt__["disk.wipe"]("{}".format(device))
    __salt__["cmd.run"]("dd bs=512 count=1 if=/dev/zero of={}".format(device))

    return True


def _hwinfo_parse_short(report):
    """Parse the output of hwinfo and return a dictionary"""
    result = {}
    current_result = {}
    key_counter = 0
    for line in report.strip().splitlines():
        if line.startswith("    "):
            key = key_counter
            key_counter += 1
            current_result[key] = line.strip()
        elif line.startswith("  "):
            key, value = line.strip().split(" ", 1)
            current_result[key] = value.strip()
        elif line.endswith(":"):
            key = line[:-1]
            value = {}
            result[key] = value
            current_result = value
            key_counter = 0
        else:
            LOG.error("Error parsing hwinfo short output: {}".format(line))

    return result


def _hwinfo_parse_full(report):
    """Parse the output of hwinfo and return a dictionary"""
    result = {}
    result_stack = []
    level = 0
    for line in report.strip().splitlines():
        current_level = line.count("  ")
        if level != current_level or len(result_stack) != result_stack:
            result_stack = result_stack[:current_level]
            level = current_level
        line = line.strip()

        # Ignore empty lines
        if not line:
            continue

        # Initial line of a segment
        if level == 0:
            key, value = line.split(":", 1)
            sub_result = {}
            result[key] = sub_result
            # The first line contains also a sub-element
            key, value = value.strip().split(": ", 1)
            sub_result[key] = value
            result_stack.append(sub_result)
            level += 1
            continue

        # Line is a note
        if line.startswith("[") or ":" not in line:
            sub_result = result_stack[-1]
            sub_result["Note"] = line if not line.startswith("[") else line[1:-1]
            continue

        key, value = line.split(":", 1)
        key, value = key.strip(), value.strip()
        sub_result = result_stack[-1]
        # If there is a value and it not starts with hash, this is a
        # (key, value) entry. But there are exception on the rule,
        # like when is about 'El Torito info', that is the begining of
        # a new dictorionart.
        if value and not value.startswith("#") and key != "El Torito info":
            if key == "I/O Port":
                key = "I/O Ports"
            elif key == "Config Status":
                value = dict(item.split("=") for item in value.split(", "))
            elif key in ("Driver", "Driver Modules"):
                value = value.replace('"', "").split(", ")
            elif key in ("Tags", "Device Files", "Features"):
                # We cannot split by ', ', as using spaces in
                # inconsisten in some fields
                value = [v.strip() for v in value.split(",")]
            else:
                if value.startswith('"'):
                    value = value[1:-1]

            # If there is a collision, we store it as a list
            if key in sub_result:
                current_value = sub_result[key]
                if type(current_value) is not list:
                    current_value = [current_value]
                if value not in current_value:
                    current_value.append(value)
                if len(current_value) == 1:
                    value = current_value[0]
                else:
                    value = current_value
            sub_result[key] = value
        else:
            if value.startswith("#"):
                value = {"Handle": value}
            elif key == "El Torito info":
                value = value.split(", ")
                value = {
                    "platform": value[0].split()[-1],
                    "bootable": "no" if "not" in value[1] else "yes",
                }
            else:
                value = {}

            sub_result[key] = value
            result_stack.append(value)
            level += 1

    return result


def _hwinfo_parse(report, short):
    """Parse the output of hwinfo and return a dictionary"""
    if short:
        return _hwinfo_parse_short(report)
    else:
        return _hwinfo_parse_full(report)


def _hwinfo_efi():
    """Return information about EFI"""
    return {
        "efi": __grains__["efi"],
        "efi-secure-boot": __grains__["efi-secure-boot"],
    }


def _hwinfo_memory():
    """Return information about the memory"""
    return {
        "mem_total": __grains__["mem_total"],
    }


def _hwinfo_network(short):
    """Return network information"""
    info = {
        "fqdn": __grains__["fqdn"],
        "ip_interfaces": __grains__["ip_interfaces"],
    }

    if not short:
        info["dns"] = __grains__["dns"]

    return info


def hwinfo(items=None, short=True, listmd=False, devices=None):
    """
    Probe for hardware

    items
        List of hardware items to inspect. Default ['bios', 'cpu', 'disk',
        'memory', 'network', 'partition']

    short
        Show only a summary. Default True.

    listmd
        Report RAID devices. Default False.

    devices
        List of devices to show information from. Default None.

    CLI Example:

    .. code-block:: bash

       salt '*' devices.hwinfo
       salt '*' devices.hwinfo items='["disk"]' short=no
       salt '*' devices.hwinfo items='["disk"]' short=no devices='["/dev/sda"]'
       salt '*' devices.hwinfo devices=/dev/sda

    """
    result = {}

    if not items:
        items = ["bios", "cpu", "disk", "memory", "network", "partition"]
    if not isinstance(items, (list, tuple)):
        items = [items]

    if not devices:
        devices = []
    if devices and not isinstance(devices, (list, tuple)):
        devices = [devices]

    cmd = ["hwinfo"]
    for item in items:
        cmd.append("--{}".format(item))

    if short:
        cmd.append("--short")

    if listmd:
        cmd.append("--listmd")

    for device in devices:
        cmd.append("--only {}".format(device))

    out = __salt__["cmd.run_stdout"](cmd)
    result["hwinfo"] = _hwinfo_parse(out, short)

    if "bios" in items:
        result["bios grains"] = _hwinfo_efi()

    if "memory" in items:
        result["memory grains"] = _hwinfo_memory()

    if "network" in items:
        result["network grains"] = _hwinfo_network(short)

    return result
07070100000024000081A40000000000000000000000016130D1CF00000703000000000000000000000000000000000000003B00000000yomi-0.0.1+git.1630589391.4557cfd/salt/_modules/filters.py# -*- coding: utf-8 -*-
#
# Author: Alberto Planas <aplanas@suse.com>
#
# Copyright 2019 SUSE LLC.
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.

"""
:maintainer:    Alberto Planas <aplanas@suse.com>
:maturity:      new
:depends:       None
:platform:      Linux
"""
from __future__ import absolute_import, print_function, unicode_literals
import logging


LOG = logging.getLogger(__name__)

__virtualname__ = "filters"


# Define not exported variables from Salt, so this can be imported as
# a normal module
try:
    __pillar__
except NameError:
    __pillar__ = {}


def is_lvm(device):
    """Detect if a device name comes from a LVM volume."""
    devices = ["/dev/{}/".format(i) for i in __pillar__.get("lvm", {})]
    devices.extend(("/dev/mapper/", "/dev/dm-"))
    return device.startswith(tuple(devices))


def is_raid(device):
    """Detect if a device name comes from a RAID array."""
    return device.startswith("/dev/md")


def is_not_raid(device):
    """Detect if a device name comes from a RAID array."""
    return not is_raid(device)
07070100000025000081A40000000000000000000000016130D1CF00001F5D000000000000000000000000000000000000003A00000000yomi-0.0.1+git.1630589391.4557cfd/salt/_modules/images.py# -*- coding: utf-8 -*-
#
# Author: Alberto Planas <aplanas@suse.com>
#
# Copyright 2019 SUSE LLC.
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.

"""
:maintainer:    Alberto Planas <aplanas@suse.com>
:maturity:      new
:depends:       None
:platform:      Linux
"""
from __future__ import absolute_import, print_function, unicode_literals
import logging
import pathlib
import urllib.parse

from salt.exceptions import SaltInvocationError, CommandExecutionError
import salt.utils.args

LOG = logging.getLogger(__name__)

__virtualname__ = "images"

# Define not exported variables from Salt, so this can be imported as
# a normal module
try:
    __salt__
except NameError:
    __salt__ = {}


VALID_SCHEME = (
    "dict",
    "file",
    "ftp",
    "ftps",
    "gopher",
    "http",
    "https",
    "imap",
    "imaps",
    "ldap",
    "ldaps",
    "pop3",
    "pop3s",
    "rtmp",
    "rtsp",
    "scp",
    "sftp",
    "smb",
    "smbs",
    "smtp",
    "smtps",
    "telnet",
    "tftp",
)
VALID_COMPRESSIONS = ("gz", "bz2", "xz")
VALID_CHECKSUMS = ("md5", "sha1", "sha224", "sha256", "sha384", "sha512")


def _checksum_url(url, checksum_type):
    """Generate the URL for the checksum"""
    url_elements = urllib.parse.urlparse(url)
    path = url_elements.path
    suffix = pathlib.Path(path).suffix
    new_suffix = ".{}".format(checksum_type)
    if suffix[1:] in VALID_COMPRESSIONS:
        path = pathlib.Path(path).with_suffix(new_suffix)
    else:
        path = pathlib.Path(path).with_suffix(suffix + new_suffix)
    return urllib.parse.urlunparse(url_elements._replace(path=str(path)))


def _curl_cmd(url, **kwargs):
    """Return curl commmand line"""
    cmd = ["curl"]
    for key, value in salt.utils.args.clean_kwargs(**kwargs).items():
        if len(key) == 1:
            cmd.append("-{}".format(key))
        else:
            cmd.append("--{}".format(key))
        if value is not None:
            cmd.append(value)
    cmd.append(url)
    return cmd


def _fetch_file(url, **kwargs):
    """Get a file and return the content"""
    params = {
        "silent": None,
        "location": None,
    }
    params.update(kwargs)
    return __salt__["cmd.run_stdout"](_curl_cmd(url, **params))


def _find_filesystem(device):
    """Use lsblk to find the filesystem of a partition."""
    cmd = ["lsblk", "--noheadings", "--output", "FSTYPE", device]
    return __salt__["cmd.run_stdout"](cmd)


def fetch_checksum(url, checksum_type, **kwargs):
    """
    Fecht the checksum from an image URL

    url
        URL of the image. The protocol scheme needs to be available in
        curl. For example: http, https, scp, sftp, tftp or ftp.

        The image can be compressed, and the supported extensions are:
        gz, bz2 and xz

    checksum_type
        The type of checksum used to validate the image, possible
        values are 'md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512'.

    Other paramaters send via kwargs will be used during the call for
    curl.

    CLI Example:

    .. code-block:: bash

        salt '*' images.fetch_checksum https://my.url/JeOS.xz checksum_type=md5

    """

    checksum_url = _checksum_url(url, checksum_type)
    checksum = _fetch_file(checksum_url, **kwargs)
    if not checksum:
        raise CommandExecutionError(
            "Checksum file not found in {}".format(checksum_url)
        )
    checksum = checksum.split()[0]
    LOG.info("Checksum for the image {}".format(checksum))
    return checksum


def dump(url, device, checksum_type=None, checksum=None, **kwargs):
    """Download an image and copy it into a device

    url
        URL of the image. The protocol scheme needs to be available in
        curl. For example: http, https, scp, sftp, tftp or ftp.

        The image can be compressed, and the supported extensions are:
        gz, bz2 and xz

    device
        The device or partition where the image will be copied.

    checksum_type
        The type of checksum used to validate the image, possible
        values are 'md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512'.

    checksum
        The checksum value. If omitted but a `checksum_type` was set,
        it will try to download the checksum file from the same URL,
        replacing the extension with the `checksum_type`

    Other paramaters send via kwargs will be used during the call for
    curl.

    If succeed it will return the real checksum of the image. If
    checksum_type is not specified, MD5 will be used.

    CLI Example:

    .. code-block:: bash

        salt '*' images.dump https://my.url/JeOS-btrfs.xz /dev/sda1
        salt '*' images.dump tftp://my.url/JeOS.xz /dev/sda1 checksum_type=md5

    """

    scheme, _, path, *_ = urllib.parse.urlparse(url)
    if scheme not in VALID_SCHEME:
        raise SaltInvocationError("Protocol not valid for URL")

    # We cannot validate the compression extension, as we can have
    # non-restricted file names, like '/my-image.ext3' or
    # 'other-image.raw'.

    if checksum_type and checksum_type not in VALID_CHECKSUMS:
        raise SaltInvocationError("Checksum type not valid")

    if not checksum_type and checksum:
        raise SaltInvocationError("Checksum type not provided")

    if checksum_type and not checksum:
        checksum = fetch_checksum(url, checksum_type, **kwargs)

    params = {
        "fail": None,
        "location": None,
        "silent": None,
    }
    params.update(kwargs)

    # If any element in the pipe fail, exit early
    cmd = ["set -eo pipefail", ";"]
    cmd.extend(_curl_cmd(url, **params))

    suffix = pathlib.Path(path).suffix[1:]
    if suffix in VALID_COMPRESSIONS:
        cmd.append("|")
        cmd.extend(
            {"gz": ["gunzip"], "bz2": ["bzip2", "-d"], "xz": ["xz", "-d"]}[suffix]
        )

    checksum_prg = "{}sum".format(checksum_type) if checksum_type else "md5sum"
    cmd.extend(["|", "tee", device, "|", checksum_prg])
    ret = __salt__["cmd.run_all"](" ".join(cmd), python_shell=True)
    if ret["retcode"]:
        raise CommandExecutionError(
            "Error while fetching image {}: {}".format(url, ret["stderr"])
        )

    new_checksum = ret["stdout"].split()[0]

    if checksum_type and checksum != new_checksum:
        raise CommandExecutionError(
            "Checksum mismatch. "
            "Expected {}, calculated {}".format(checksum, new_checksum)
        )

    filesystem = _find_filesystem(device)

    resize_cmd = {
        "ext2": "e2fsck -f -y {0}; resize2fs {0}".format(device),
        "ext3": "e2fsck -f -y {0}; resize2fs {0}".format(device),
        "ext4": "e2fsck -f -y {0}; resize2fs {0}".format(device),
        "btrfs": "mount {} /mnt; btrfs filesystem resize max /mnt;"
        " umount /mnt".format(device),
        "xfs": "mount {} /mnt; xfs_growfs /mnt; umount /mnt".format(device),
    }
    if filesystem not in resize_cmd:
        raise CommandExecutionError(
            "Filesystem {} cannot be resized.".format(filesystem)
        )

    ret = __salt__["cmd.run_all"](resize_cmd[filesystem], python_shell=True)
    if ret["retcode"]:
        raise CommandExecutionError(
            "Error while resizing the partition {}: {}".format(device, ret["stderr"])
        )

    __salt__["cmd.run"]("sync")

    return new_checksum
07070100000026000081A40000000000000000000000016130D1CF00002BAF000000000000000000000000000000000000003B00000000yomi-0.0.1+git.1630589391.4557cfd/salt/_modules/partmod.py# -*- coding: utf-8 -*-
#
# Author: Alberto Planas <aplanas@suse.com>
#
# Copyright 2019 SUSE LLC.
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.

"""
:maintainer:    Alberto Planas <aplanas@suse.com>
:maturity:      new
:depends:       None
:platform:      Linux
"""
from __future__ import absolute_import, print_function, unicode_literals
import logging

from salt.exceptions import SaltInvocationError

import lp
import disk


LOG = logging.getLogger(__name__)

__virtualname__ = "partmod"


# Define not exported variables from Salt, so this can be imported as
# a normal module
try:
    __grains__
    __salt__
except NameError:
    __grains__ = {}
    __salt__ = {}


PENALIZATION = {
    # Penalization for wasted space remaining in the device
    "free": 1,
    # Default penalizations
    "minimum_recommendation_size": 5,
    "maximum_recommendation_size": 2,
    "decrement_current_partition_size": 10,
    "increment_current_partition_size": 10,
    # /
    "root_minimum_recommendation_size": 5,
    "root_maximum_recommendation_size": 2,
    "root_decrement_current_partition_size": 10,
    "root_increment_current_partition_size": 10,
    # /home
    "home_minimum_recommendation_size": 5,
    "home_maximum_recommendation_size": 2,
    "home_decrement_current_partition_size": 10,
    "home_increment_current_partition_size": 10,
    # /var
    "var_minimum_recommendation_size": 5,
    "var_maximum_recommendation_size": 2,
    "var_decrement_current_partition_size": 10,
    "var_increment_current_partition_size": 10,
}

FREE = "free"
MIN = "minimum_recommendation_size"
MAX = "maximum_recommendation_size"
INC = "decrement_current_partition_size"
DEC = "increment_current_partition_size"

# Default values for some partition parameters
LABEL = "msdos"
INITIAL_GAP = 0
UNITS = "MB"

VALID_PART_TYPE = ("swap", "linux", "boot", "efi", "lvm", "raid")


def _penalization(partition=None, section=FREE):
    """Penalization for a partition."""
    kind = "{}_{}".format(partition, section)
    if kind in PENALIZATION:
        return PENALIZATION[kind]
    return PENALIZATION[section]


def plan(name, constraints, unit="MB", export=False):
    """Analyze the current hardware and make a partition proposal.

    name
        Name of the root element of the dictionary

    constraints
        List of constraints for the partitions. Each element of the
        list will be a tuple with a name of partition, aminimum size
        (None if not required), and a maximum size (None if not
        required).

        Example: "[['swap', null, null], ['home', 524288, null]]"

    unit
        Unit where the sizes are expressed. Are the same valid units
        for the parted module

    export
        Export the partition proposal as a grains under the given name

    CLI Example:

    .. code-block:: bash

        salt '*' pplan.plan proposal "[['swap', null, null], ...]"

    """
    if not constraints:
        raise SaltInvocationError("contraints parameter is required")

    hd_size = __salt__["status.diskusage"]("/dev/sda")["/dev/sda"]["total"]
    # TODO(aplanas) We only work on MB
    hd_size /= 1024

    # TODO(aplanas) Fix the situation with swap.
    # Replace the None in the max position in the constraints with
    # hd_size.
    constraints = [(c[0], c[1], c[2] if c[2] else hd_size) for c in constraints]

    # Generate the variables of our model:
    #   <part>_size, <part>_to_min_size, <part>_from_max_size
    variables = [
        "{}_{}".format(constraint[0], suffix)
        for constraint in constraints
        for suffix in ("size", "to_min_size", "from_max_size")
    ]
    model = lp.Model(variables)

    for constraint in constraints:
        part_size = "{}_size".format(constraint[0])
        part_to_min_size = "{}_to_min_size".format(constraint[0])
        part_from_max_size = "{}_from_max_size".format(constraint[0])
        model_constraints = (
            # <part>_size >= MINIMUM_RECOMMENDATION_SIZE - <part>_to_min_size
            ({part_size: 1, part_to_min_size: 1}, lp.GTE, constraint[1]),
            # <part>_size <= MAXIMUM_RECOMMENDATION_SIZE + <part>_from_max_size
            ({part_size: 1, part_from_max_size: 1}, lp.LTE, constraint[2]),
        )
        for model_constraint in model_constraints:
            model.add_constraint_named(*model_constraint)

    # sum(<part>_size) <= HD_SIZE
    model_constraint = (
        {"{}_size".format(c[0]): 1 for c in constraints},
        lp.LTE,
        hd_size,
    )
    model.add_constraint_named(*model_constraint)

    # Minimize: PENALIZATION_FREE * (HD_SIZE - Sum(<part>_size))
    #   + PENALIZATION_MINIMUM_RECOMMENDATION_SIZE * <part>_to_min_size
    #   + PENALIZATION_MAXIMUM_RECOMMENDATIOM_SIZE * <part>_from_max_size
    coefficients = {
        "{}_{}".format(constraint[0], suffix): _penalization(
            partition=constraint[0],
            section={"to_min_size": MIN, "from_max_size": MAX}[suffix],
        )
        for constraint in constraints
        for suffix in ("to_min_size", "from_max_size")
    }
    coefficients.update(
        {
            "{}_size".format(constraint[0]): -_penalization(section=FREE)
            for constraint in constraints
        }
    )
    model.add_cost_function_named(
        lp.MINIMIZE, coefficients, _penalization(section=FREE) * hd_size
    )

    plan = {name: model.simplex()}
    if export:
        __salt__["grains.setvals"](plan)

    return plan


def prepare_partition_data(partitions):
    """Helper function to prepare the patition data from the pillar."""

    # Validate and normalize the `partitions` pillar. The state will
    # expect a dictionary with this schema:
    #
    # partitions_normalized = {
    #     '/dev/sda': {
    #         'label': 'gpt',
    #         'pmbr_boot': False,
    #         'partitions': [
    #             {
    #                 'part_id': '/dev/sda1',
    #                 'part_type': 'primary'
    #                 'fs_type': 'ext2',
    #                 'flags': ['esp'],
    #                 'start': '0MB',
    #                 'end': '100%',
    #             },
    #         ],
    #     },
    # }

    is_uefi = __grains__["efi"]

    # Get the fallback values for label and initial_gap
    config = partitions.get("config", {})
    global_label = config.get("label", LABEL)
    global_initial_gap = config.get("initial_gap", INITIAL_GAP)

    partitions_normalized = {}
    for device, device_info in partitions["devices"].items():
        label = device_info.get("label", global_label)
        initial_gap = device_info.get("initial_gap", global_initial_gap)
        if initial_gap:
            initial_gap_num, units = disk.units(initial_gap, default=None)
        else:
            initial_gap_num, units = 0, None

        device_normalized = {
            "label": label,
            "pmbr_boot": label == "gpt" and not is_uefi,
            "partitions": [],
        }
        partitions_normalized[device] = device_normalized

        # Control the start of the next partition
        start_size = initial_gap_num
        # Flag to detect if `rest` size was used before
        rest = False

        for index, partition in enumerate(device_info.get("partitions", [])):
            # Detect if there is another partition after we create one
            # that complete the free space
            if rest:
                raise SaltInvocationError(
                    "Partition defined after one filled all the rest free "
                    "space. Use `rest` only on the last partition."
                )

            # Validate the partition type
            part_type = partition.get("type")
            if part_type not in VALID_PART_TYPE:
                raise SaltInvocationError(
                    "Partition type {} not recognized".format(part_type)
                )

            # If part_id is not given, we can create a partition name
            # based on the position of the partition and the name of
            # the device
            #
            # TODO(aplanas) The partition number will be deduced, so
            # the require section in mkfs_partition will fail
            part_id = "{}{}{}".format(
                device,
                "p" if __salt__["filters.is_raid"](device) else "",
                partitions.get("number", index + 1),
            )
            part_id = partition.get("id", part_id)

            # For parted we usually need to set a ext2 filesystem
            # type, except for SWAP or UEFI
            fs_type = {"swap": "linux-swap", "efi": "fat16"}.get(part_type, "ext2")

            # Check if we are changing units inside the device
            if partition["size"] == "rest":
                rest = True
                # If units is not set, we default to '%'
                units = units or "%"
                start = "{}{}".format(start_size, units)
                end = "100%"
            else:
                size, size_units = disk.units(partition["size"])
                if units and size_units and units != size_units:
                    raise SaltInvocationError(
                        "Units needs to be the same for the partitions inside "
                        "a device. Found {} but expected {}. Note that "
                        "`initial_gap` is also considered.".format(size_units, units)
                    )
                # If units and size_units is not set, we default to UNITS
                units = units or size_units or UNITS
                start = "{}{}".format(start_size, units)
                end = "{}{}".format(start_size + size, units)
                start_size += size

            flags = None
            if part_type in ("raid", "lvm"):
                flags = [part_type]
            elif part_type == "boot" and label == "gpt" and not is_uefi:
                flags = ["bios_grub"]
            elif part_type == "efi" and label == "gpt" and is_uefi:
                flags = ["esp"]

            device_normalized["partitions"].append(
                {
                    "part_id": part_id,
                    # TODO(aplanas) If msdos we need to create extended
                    # and logical
                    "part_type": "primary",
                    "fs_type": fs_type,
                    "start": start,
                    "end": end,
                    "flags": flags,
                }
            )

    return partitions_normalized
07070100000027000081A40000000000000000000000016130D1CF00001C66000000000000000000000000000000000000003F00000000yomi-0.0.1+git.1630589391.4557cfd/salt/_modules/suseconnect.py# -*- coding: utf-8 -*-
#
# Author: Alberto Planas <aplanas@suse.com>
#
# Copyright 2019 SUSE LLC.
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.

"""
:maintainer:    Alberto Planas <aplanas@suse.com>
:maturity:      new
:depends:       None
:platform:      Linux
"""
from __future__ import absolute_import, print_function, unicode_literals

import json
import logging
import re

import salt.utils.path
from salt.exceptions import CommandExecutionError

LOG = logging.getLogger(__name__)

__virtualname__ = "suseconnect"


def __virtual__():
    """
    Only load the module if SUSEConnect is installed
    """
    if not salt.utils.path.which("SUSEConnect"):
        return (False, "SUSEConnect is not installed.")
    return __virtualname__


# Define not exported variables from Salt, so this can be imported as
# a normal module
try:
    __salt__
except NameError:
    __salt__ = {}


def _cmd(cmd):
    """Utility function to run commands."""
    result = __salt__["cmd.run_all"](cmd)
    if result["retcode"]:
        raise CommandExecutionError(result["stdout"] + result["stderr"])
    return result["stdout"]


def register(regcode=None, product=None, email=None, url=None, root=None):
    """
    .. versionadded:: TBD

    Register SUSE Linux Enterprise installation with the SUSE Customer
    Center

    regcode
       Subscription registration code for the product to be
       registered. Relates that product to the specified subscription,
       and enables software repositories for that product.

    product
       Specify a product for activation/deactivation. Only one product
       can be processed at a time. Defaults to the base SUSE Linux
       Enterprose product on this system.
       Format: <name>/<version>/<architecture>

    email
       Email address for product registration

    url
       URL for the registration server (will be saved for the next
       use) (e.g. https://scc.suse.com)

    root
       Path to the root folder, uses the same parameter for zypper

    CLI Example:

    .. code-block:: bash

       salt '*' suseconnect.register regcode='xxxx-yyy-zzzz'
       salt '*' suseconnect.register product='sle-ha/15.2/x86_64'

    """
    cmd = ["SUSEConnect"]

    parameters = [
        ("regcode", regcode),
        ("product", product),
        ("email", email),
        ("url", url),
        ("root", root),
    ]

    for parameter, value in parameters:
        if value:
            cmd.extend(["--{}".format(parameter), str(value)])

    return _cmd(cmd)


def deregister(product=None, url=None, root=None):
    """
    .. versionadded:: TBD

    De-register the system and base product, or in cojuntion with
    'product', a single extension, and removes all its services
    installed by SUSEConnect. After de-registration the system no
    longer consumes a subscription slot in SCC.

    product
       Specify a product for activation/deactivation. Only one product
       can be processed at a time. Defaults to the base SUSE Linux
       Enterprose product on this system.
       Format: <name>/<version>/<architecture>

    url
       URL for the registration server (will be saved for the next
       use) (e.g. https://scc.suse.com)

    root
       Path to the root folder, uses the same parameter for zypper

    CLI Example:

    .. code-block:: bash

       salt '*' suseconnect.deregister
       salt '*' suseconnect.deregister product='sle-ha/15.2/x86_64'

    """
    cmd = ["SUSEConnect", "--de-register"]

    parameters = [("product", product), ("url", url), ("root", root)]

    for parameter, value in parameters:
        if value:
            cmd.extend(["--{}".format(parameter), str(value)])

    return _cmd(cmd)


def status(root=None):
    """
    .. versionadded:: TBD

    Get current system registation status.

    root
       Path to the root folder, uses the same parameter for zypper

    CLI Example:

    .. code-block:: bash

       salt '*' suseconnect.status

    """
    cmd = ["SUSEConnect", "--status"]

    parameters = [("root", root)]

    for parameter, value in parameters:
        if value:
            cmd.extend(["--{}".format(parameter), str(value)])

    return json.loads(_cmd(cmd))


def _parse_list_extensions(output):
    """Parse the output of list-extensions result"""
    # We can extract the indentation using this regex:
    #   r'( {4,}).*\s([-\w]+/[-\w\.]+/[-\w]+).*'
    return re.findall(r"\s([-\w]+/[-\w\.]+/[-\w]+)", output)


def list_extensions(url=None, root=None):
    """
    .. versionadded:: TBD

    List all extensions and modules avaiable for installation on this
    system.

    url
       URL for the registration server (will be saved for the next
       use) (e.g. https://scc.suse.com)

    root
       Path to the root folder, uses the same parameter for zypper

    CLI Example:

    .. code-block:: bash

       salt '*' suseconnect.list-extensions
       salt '*' suseconnect.list-extensions url=https://scc.suse.com

    """
    cmd = ["SUSEConnect", "--list-extensions"]

    parameters = [("url", url), ("root", root)]

    for parameter, value in parameters:
        if value:
            cmd.extend(["--{}".format(parameter), str(value)])

    # TODO(aplanas) Implement a better parser
    return _parse_list_extensions(_cmd(cmd))


def cleanup(root=None):
    """
    .. versionadded:: TBD

    Remove olf system credential and all zypper services installed by
    SUSEConnect

    root
       Path to the root folder, uses the same parameter for zypper

    CLI Example:

    .. code-block:: bash

       salt '*' suseconnect.cleanup

    """
    cmd = ["SUSEConnect", "--cleanup"]

    parameters = [("root", root)]

    for parameter, value in parameters:
        if value:
            cmd.extend(["--{}".format(parameter), str(value)])

    return _cmd(cmd)


def rollback(url=None, root=None):
    """
    .. versionadded:: TBD

    Revert the registration state in case of a failed migration.

    url
       URL for the registration server (will be saved for the next
       use) (e.g. https://scc.suse.com)

    root
       Path to the root folder, uses the same parameter for zypper

    CLI Example:

    .. code-block:: bash

       salt '*' suseconnect.rollback
       salt '*' suseconnect.rollback url=https://scc.suse.com

    """
    cmd = ["SUSEConnect", "--rollback"]

    parameters = [("url", url), ("root", root)]

    for parameter, value in parameters:
        if value:
            cmd.extend(["--{}".format(parameter), str(value)])

    return _cmd(cmd)
07070100000028000041ED0000000000000000000000026130D1CF00000000000000000000000000000000000000000000002F00000000yomi-0.0.1+git.1630589391.4557cfd/salt/_states07070100000029000081A40000000000000000000000016130D1CF00000E8D000000000000000000000000000000000000003C00000000yomi-0.0.1+git.1630589391.4557cfd/salt/_states/formatted.py# -*- coding: utf-8 -*-
#
# Author: Alberto Planas <aplanas@suse.com>
#
# Copyright 2019 SUSE LLC.
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.

"""
:maintainer:    Alberto Planas <aplanas@suse.com>
:maturity:      new
:depends:       None
:platform:      Linux
"""
from __future__ import absolute_import, print_function, unicode_literals
import logging
import os.path

LOG = logging.getLogger(__name__)

__virtualname__ = "formatted"


# Define not exported variables from Salt, so this can be imported as
# a normal module
try:
    __opts__
    __salt__
    __states__
except NameError:
    __opts__ = {}
    __salt__ = {}
    __states__ = {}


def __virtual__():
    """
    Formatted can be considered as an extension to blockdev

    """
    return "blockdev.formatted" in __states__


def formatted(name, fs_type="ext4", force=False, **kwargs):
    """
    Manage filesystems of partitions.

    name
        The name of the block device

    fs_type
        The filesystem it should be formatted as

    force
        Force mke2fs to create a filesystem, even if the specified
        device is not a partition on a block special device. This
        option is only enabled for ext and xfs filesystems

        This option is dangerous, use it with caution.

    """
    ret = {
        "name": name,
        "result": False,
        "changes": {},
        "comment": [],
    }

    fs_type = "swap" if fs_type == "linux-swap" else fs_type
    if fs_type != "swap":
        ret = __states__["blockdev.formatted"](name, fs_type, force, **kwargs)
        return ret

    if not os.path.exists(name):
        ret["comment"].append("{} does not exist".format(name))
        return ret

    current_fs = _checkblk(name)

    if current_fs == "swap":
        ret["result"] = True
        return ret
    elif __opts__["test"]:
        ret["comment"].append("Changes to {} will be applied ".format(name))
        ret["result"] = None
        return ret

    cmd = ["mkswap"]
    if force:
        cmd.append("-f")
    if kwargs.pop("check", False):
        cmd.append("-c")
    for parameter, argument in (
        ("-p", "pagesize"),
        ("-L", "label"),
        ("-v", "swapversion"),
        ("-U", "uuid"),
    ):
        if argument in kwargs:
            cmd.extend([parameter, kwargs.pop(argument)])
    cmd.append(name)

    __salt__["cmd.run"](cmd)

    current_fs = _checkblk(name)

    if current_fs == "swap":
        ret["comment"].append(
            ("{} has been formatted with {}").format(name, fs_type)
        )
        ret["changes"] = {"new": fs_type, "old": current_fs}
        ret["result"] = True
    else:
        ret["comment"].append("Failed to format {}".format(name))
        ret["result"] = False
    return ret


def _checkblk(name):
    """
    Check if the blk exists and return its fstype if ok
    """

    blk = __salt__["cmd.run"](
        "blkid -o value -s TYPE {0}".format(name), ignore_retcode=True
    )
    return "" if not blk else blk
0707010000002A000081A40000000000000000000000016130D1CF00001A72000000000000000000000000000000000000003900000000yomi-0.0.1+git.1630589391.4557cfd/salt/_states/images.py# -*- coding: utf-8 -*-
#
# Author: Alberto Planas <aplanas@suse.com>
#
# Copyright 2019 SUSE LLC.
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.

"""
:maintainer:    Alberto Planas <aplanas@suse.com>
:maturity:      new
:depends:       None
:platform:      Linux
"""
from __future__ import absolute_import, print_function, unicode_literals
import logging
import os
import os.path
import tempfile
import urllib.parse

LOG = logging.getLogger(__name__)

__virtualname__ = "images"

# Define not exported variables from Salt, so this can be imported as
# a normal module
try:
    __opts__
    __salt__
    __utils__
except NameError:
    __opts__ = {}
    __salt__ = {}
    __utils__ = {}


# Copied from `images` execution module, as we cannot easly import it
VALID_SCHEME = (
    "dict",
    "file",
    "ftp",
    "ftps",
    "gopher",
    "http",
    "https",
    "imap",
    "imaps",
    "ldap",
    "ldaps",
    "pop3",
    "pop3s",
    "rtmp",
    "rtsp",
    "scp",
    "sftp",
    "smb",
    "smbs",
    "smtp",
    "smtps",
    "telnet",
    "tftp",
)
VALID_COMPRESSIONS = ("gz", "bz2", "xz")
VALID_CHECKSUMS = ("md5", "sha1", "sha224", "sha256", "sha384", "sha512")


def __virtual__():
    """Images depends on images.dump module"""
    return "images.dump" in __salt__


def _mount(device):
    """Mount the device in a temporary place"""
    dest = tempfile.mkdtemp()
    res = __salt__["mount.mount"](name=dest, device=device)
    if res is not True:
        return None
    return dest


def _umount(path):
    """Umount and clean the temporary place"""
    __salt__["mount.umount"](path)
    __utils__["files.rm_rf"](path)


def _checksum_path(root):
    """Return the path where we will store the last checksum"""
    return os.path.join(root, __opts__["cachedir"][1:], "images")


def _read_current_checksum(device, checksum_type):
    """Return the checksum of the current image, if any"""
    checksum = None
    mnt = _mount(device)
    if not mnt:
        return None

    checksum_file = os.path.join(
        _checksum_path(mnt), "checksum.{}".format(checksum_type)
    )
    try:
        checksum = open(checksum_file).read()
        LOG.info("Checksum file %s content: %s", checksum_file, checksum)
    except Exception:
        # If the file cannot be read, we expect that the image needs
        # to be re-applied eventually
        LOG.info("Checksum file %s not found", checksum_file)

    _umount(mnt)
    return checksum


def _save_current_checksum(device, checksum_type, checksum):
    """Save the checksum of the current image"""
    result = False
    mnt = _mount(device)
    if not mnt:
        return result

    checksum_path = _checksum_path(mnt)
    os.makedirs(checksum_path, exist_ok=True)
    checksum_file = os.path.join(checksum_path, "checksum.{}".format(checksum_type))
    try:
        checksum_file = open(checksum_file, "w")
        checksum_file.write(checksum)
        checksum_file.close()
        result = True
        LOG.info("Created checksum file %s content: %s", checksum_file, checksum)
    except Exception:
        LOG.error("Error writing checksum file %s", checksum_file)

    _umount(mnt)
    return result


def _is_dump_needed(device, checksum_type, checksum):
    return True


def dumped(name, device, checksum_type=None, checksum=None, **kwargs):
    """
    Copy an image in the device.

    name
        URL of the image. The protocol scheme needs to be available in
        curl. For example: http, https, scp, sftp, tftp or ftp.

        The image can be compressed, and the supported extensions are:
        gz, bz2 and xz

    device
        The device or partition where the image will be copied.

    checksum_type
        The type of checksum used to validate the image, possible
        values are 'md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512'.

    checksum
        The checksum value. If omitted but a `checksum_type` was set,
        it will try to download the checksum file from the same URL,
        replacing the extension with the `checksum_type`

    Other paramaters send via kwargs will be used during the call for
    curl.

    """
    ret = {
        "name": name,
        "result": False,
        "changes": {},
        "comment": [],
    }

    scheme, _, path, *_ = urllib.parse.urlparse(name)
    if scheme not in VALID_SCHEME:
        ret["comment"].append("Protocol not valid for URL")
        return ret

    # We cannot validate the compression extension, as we can have
    # non-restricted file names, like '/my-image.ext3' or
    # 'other-image.raw'.

    if checksum_type and checksum_type not in VALID_CHECKSUMS:
        ret["comment"].append("Checksum type not valid")
        return ret

    if not checksum_type and checksum:
        ret["comment"].append("Checksum type not provided")
        return ret

    if checksum_type and not checksum:
        checksum = __salt__["images.fetch_checksum"](name, checksum_type, **kwargs)
        if not checksum:
            ret["comment"].append("Checksum no found")
            return ret

    if checksum_type:
        current_checksum = _read_current_checksum(device, checksum_type)

    if __opts__["test"]:
        ret["result"] = None
        if checksum_type:
            ret["changes"]["image"] = current_checksum != checksum
            ret["changes"]["checksum cache"] = ret["changes"]["image"]
        return ret

    if checksum_type and current_checksum != checksum:
        result = __salt__["images.dump"](
            name, device, checksum_type, checksum, **kwargs
        )
        if result != checksum:
            ret["comment"].append("Failed writing the image")
            return ret
        else:
            ret["changes"]["image"] = True

        saved = _save_current_checksum(device, checksum_type, checksum)
        if not saved:
            ret["comment"].append("Checksum failed to be saved in the cache")
            return ret
        else:
            ret["changes"]["checksum cache"] = True

    ret["result"] = True
    return ret
0707010000002B000081A40000000000000000000000016130D1CF00005AB7000000000000000000000000000000000000003E00000000yomi-0.0.1+git.1630589391.4557cfd/salt/_states/partitioned.py#
# Author: Alberto Planas <aplanas@suse.com>
#
# Copyright 2019 SUSE LLC.
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.

"""
:maintainer:    Alberto Planas <aplanas@suse.com>
:maturity:      new
:depends:       None
:platform:      Linux
"""
import logging
import re

import disk
from salt.exceptions import CommandExecutionError

log = logging.getLogger(__name__)

__virtualname__ = "partitioned"

# Define not exported variables from Salt, so this can be imported as
# a normal module
try:
    __grains__
    __opts__
    __salt__
except NameError:
    __grains__ = {}
    __opts__ = {}
    __salt__ = {}


class EnumerateException(Exception):
    pass


def __virtual__():
    """
    Partitioned depends on partition.mkpart module

    """

    return "partition.mkpart" in __salt__


def _check_label(device, label):
    """
    Check if the label match with the device

    """
    label = {"dos": "msdos"}.get(label, label)
    res = __salt__["cmd.run"](["parted", "--list", "--machine", "--script"])
    line = "".join((line for line in res.splitlines() if line.startswith(device)))
    return ":{}:".format(label) in line


def labeled(name, label):
    """
    Make sure that the label of the partition is properly set.

    name
        Device name (/dev/sda, /dev/disk/by-id/scsi-...)

    label
        Label of the partition (usually 'gpt' or 'msdos')

    """
    ret = {
        "name": name,
        "result": False,
        "changes": {},
        "comment": [],
    }

    if not label:
        ret["comment"].append("Label parameter is not optional")
        return ret

    if _check_label(name, label):
        ret["result"] = True
        ret["comment"].append("Label already set to {}".format(label))
        return ret

    if __opts__["test"]:
        ret["result"] = None
        ret["comment"].append("Label will be set to {} in {}".format(label, name))
        ret["changes"]["label"] = "Will be set to {}".format(label)
        return ret

    __salt__["partition.mklabel"](name, label)

    if _check_label(name, label):
        ret["result"] = True
        msg = "Label set to {} in {}".format(label, name)
        ret["comment"].append(msg)
        ret["changes"]["label"] = msg
    else:
        ret["comment"].append("Failed to set label to {}".format(label))

    return ret


def _get_partition_type(device):
    """
    Get partition type of each partition

    Return dictionary: {number: type, ...}

    """
    cmd = "parted -s {0} print".format(device)
    out = __salt__["cmd.run_stdout"](cmd)
    types = re.findall(r"\s*(\d+).*(primary|extended|logical).*", out)
    return dict(types)


def _get_cached_info(device):
    """
    Get the information of a device as a dictionary

    """
    if not hasattr(_get_cached_info, "info"):
        _get_cached_info.info = {}
    info = _get_cached_info.info

    if device not in info:
        info[device] = __salt__["partition.list"](device)["info"]
    return info[device]


def _invalidate_cached_info():
    """
    Invalidate the cached information about devices

    """
    if hasattr(_get_cached_info, "info"):
        delattr(_get_cached_info, "info")


def _get_cached_partitions(device, unit="s"):
    """
    Get the partitions as a dictionary

    """
    # `partitions` will be used as a local cache, to avoid multiple
    # request of the same partition with the same units. Is a
    # dictionary where the key is the `unit`, as we will make request
    # of all partitions under this unit. This potentially can low the
    # complexity algorithm to amortized O(1).
    if not hasattr(_get_cached_partitions, "partitions"):
        _get_cached_partitions.partitions = {}
        # There is a bug in `partition.list`, where `type` is storing
        # the file system information, to workaround this we get the
        # partition type using parted and attach it here.
        _get_cached_partitions.types = _get_partition_type(device)

    if device not in _get_cached_partitions.partitions:
        _get_cached_partitions.partitions[device] = {}
    partitions = _get_cached_partitions.partitions[device]

    if unit not in partitions:
        partitions[unit] = __salt__["partition.list"](device, unit=unit)
        # If the partition comes from a gpt disk, we assign the type
        # as 'primary'
        types = _get_cached_partitions.types
        for number, partition in partitions[unit]["partitions"].items():
            partition["type"] = types.get(number, "primary")

    return partitions[unit]["partitions"]


def _invalidate_cached_partitions():
    """
    Invalidate the cached information about partitions

    """
    if hasattr(_get_cached_partitions, "partitions"):
        delattr(_get_cached_partitions, "partitions")
        delattr(_get_cached_partitions, "types")


OVERLAPPING_ERROR = 0.75


def _check_partition(device, number, part_type, start, end):
    """
    Check if the proposed partition match the current one.

    Returns a tri-state value:
      - `True`: the proposed partition match
      - `False`: the proposed partition do not match
      - `None`: the proposed partition is a new partition
    """
    # The `start` and `end` fields are expressed with units (the same
    # kind of units that `parted` allows). To make a fair comparison
    # we need to normalize each field to the same units that we can
    # use to read the current partitions. A good candidate is sector
    # ('s'). The problem is that we need to reimplement the same
    # conversion logic from `parted` here [1], as we need the same
    # round logic when we convert from 'MiB' to 's', for example.
    #
    # To avoid this duplicity of code we can do a trick: for each
    # field in the proposed partition we request a `partition.list`
    # with the same unit. We make `parted` to make the conversion for
    # us, in exchange for an slower algorithm.
    #
    # We can change it once we decide to take care of alignment.
    #
    # [1] Check libparted/unit.c

    number = str(number)
    partitions = _get_cached_partitions(device)
    if number not in partitions:
        return None

    if part_type != partitions[number]["type"]:
        return False

    for value, name in ((start, "start"), (end, "end")):
        value, unit = disk.units(value)
        p_value = _get_cached_partitions(device, unit)[number][name]
        p_value = disk.units(p_value)[0]
        min_value = value - OVERLAPPING_ERROR
        max_value = value + OVERLAPPING_ERROR
        if not min_value <= p_value <= max_value:
            return False

    return True


def _get_first_overlapping_partition(device, start):
    """
    Return the first partition that contains the start point.

    """
    # Check if there is a partition in the system that start at
    # specified point.
    value, unit = disk.units(start)
    value += OVERLAPPING_ERROR

    partitions = _get_cached_partitions(device, unit)
    partition_number = None
    partition_start = 0
    for number, partition in partitions.items():
        p_start = disk.units(partition["start"])[0]
        p_end = disk.units(partition["end"])[0]
        if p_start <= value <= p_end:
            if partition_number is None or partition_start < p_start:
                partition_number = number
                partition_start = p_start
    return partition_number


def _get_partition_number(device, part_type, start, end):
    """
    Return a partition number for a [start, end] range and a partition
    type.

    If the range is allocated and the partition type match, return the
    partition number. If the type do not match but is a logical
    partition inside an extended one, return the next partition
    number.

    If the range is not allocated, return the next partition number.

    """

    unit = disk.units(start)[1]
    partitions = _get_cached_partitions(device, unit)

    # Check if there is a partition in the system that start or
    # containst the start point
    number = _get_first_overlapping_partition(device, start)
    if number:
        if partitions[number]["type"] == part_type:
            return number
        elif not (partitions[number]["type"] == "extended" and part_type == "logical"):
            raise EnumerateException("Do not overlap partitions")

    def __primary_partition_free_slot(partitions, label):
        if label == "msdos":
            max_primary = 4
        else:
            max_primary = 1024
        for i in range(1, max_primary + 1):
            i = str(i)
            if i not in partitions:
                return i

    # The partition is not already there, we guess the next number
    label = _get_cached_info(device)["partition table"]
    if part_type == "primary":
        candidate = __primary_partition_free_slot(partitions, label)
        if not candidate:
            raise EnumerateException("No free slot for primary partition")
        return candidate
    elif part_type == "extended":
        if label == "gpt":
            raise EnumerateException("Extended partitions not allowed in gpt")
        if "extended" in (info["type"] for info in partitions.values()):
            raise EnumerateException("Already found a extended partition")
        candidate = __primary_partition_free_slot(partitions, label)
        if not candidate:
            raise EnumerateException("No free slot for extended partition")
        return candidate
    elif part_type == "logical":
        if label == "gpt":
            raise EnumerateException("Extended partitions not allowed in gpt")
        if "extended" not in (part["type"] for part in partitions.values()):
            raise EnumerateException("Missing extended partition")
        candidate = max(
            (
                int(part["number"])
                for part in partitions.values()
                if part["type"] == "logical"
            ),
            default=4,
        )
        return str(candidate + 1)


def _get_partition_flags(device, number):
    """
    Return the current list of flags for a partition.
    """

    def _is_valid(flag):
        """Return True if is a valid flag"""
        if flag == "swap" or flag.startswith("type="):
            return False
        return True

    result = []
    number = str(number)
    partitions = __salt__["partition.list"](device)["partitions"]
    if number in partitions:
        # In parted the field for flags is reused to mark other
        # situations, so we need to remove values that do not
        # represent flags
        flags = partitions[number]["flags"].split(", ")
        result = [flag for flag in flags if flag and _is_valid(flag)]
    return result


def mkparted(name, part_type, fs_type=None, start=None, end=None, flags=None):
    """
    Make sure that a partition is allocated in the disk.

    name
        Device or partition name. If the name is like /dev/sda, parted
        will take care of creating the partition on the next slot. If
        the name is like /dev/sda1, we will consider partition 1 as a
        reference for the match.

    part_type
        Type of partition, should be one of "primary", "logical", or
        "extended".

    fs_type
        Expected filesystem, following the parted names.

    start
        Start of the partition (in parted units)

    end
        End of the partition (in parted units)

    flags
        List of flags present in the partition

    """
    ret = {
        "name": name,
        "result": False,
        "changes": {},
        "comment": [],
    }

    if part_type not in ("primary", "extended", "logical"):
        ret["comment"].append("Partition type not recognized")
    if not start or not end:
        ret["comment"].append("Parameters start and end are not optional")

    # Normalize fs_type. Some versions of salt contains a bug were
    # only a subset of file systems are valid for mkpart, even if are
    # supported by parted. As mkpart do not format the partition, is
    # safe to make a normalization here. Eventually this is only used
    # to set the type in the flag section (partition id).
    #
    # We can drop this check in the next version of salt.
    if fs_type and fs_type not in set(
        [
            "ext2",
            "fat32",
            "fat16",
            "linux-swap",
            "reiserfs",
            "hfs",
            "hfs+",
            "hfsx",
            "NTFS",
            "ufs",
            "xfs",
            "zfs",
        ]
    ):
        fs_type = "ext2"

    flags = flags if flags else []

    # If the user do not provide any partition number we get generate
    # the next available for the partition type
    device_md, device_no_md, number = re.search(
        r"(?:(/dev/md[^p]+)p?|(\D+))(\d*)", name
    ).groups()
    device = device_md if device_md else device_no_md
    if not number:
        try:
            number = _get_partition_number(device, part_type, start, end)
        except EnumerateException as e:
            ret["comment"].append(str(e))

    # If at this point we have some comments, we return with a fail
    if ret["comment"]:
        return ret

    # Check if the partition is already there or we need to create a
    # new one
    partition_match = _check_partition(device, number, part_type, start, end)

    if partition_match:
        ret["result"] = True
        ret["comment"].append("Partition {}{} already in place".format(device, number))
        return ret
    elif partition_match is None:
        ret["changes"]["new"] = "Partition {}{} will be created".format(device, number)
    elif partition_match is False:
        ret["comment"].append(
            "Partition {}{} cannot be replaced".format(device, number)
        )
        return ret

    if __opts__["test"]:
        ret["result"] = None
        return ret

    if partition_match is None:
        # TODO(aplanas) with parted we cannot force a partition number
        res = __salt__["partition.mkpart"](device, part_type, fs_type, start, end)
        ret["changes"]["output"] = res

        # Wipe the filesystem information from the partition to remove
        # old data that was on the disk.  As a side effect, this will
        # force the mkfs state to happend.
        __salt__["disk.wipe"]("{}{}".format(device, number))

        _invalidate_cached_info()
        _invalidate_cached_partitions()

    # The first time that we create a partition we do not have a
    # partition number for it
    if not number:
        number = _get_partition_number(device, part_type, start, end)

    partition_match = _check_partition(device, number, part_type, start, end)
    if partition_match:
        ret["result"] = True
    elif not partition_match:
        ret["comment"].append(
            "Partition {}{} fail to be created".format(device, number)
        )
        ret["result"] = False

    # We set the correct flags for the partition
    current_flags = _get_partition_flags(device, number)
    flags_to_set = set(flags) - set(current_flags)
    flags_to_unset = set(current_flags) - set(flags)

    for flag in flags_to_set:
        try:
            out = __salt__["partition.set"](device, number, flag, "on")
        except CommandExecutionError as e:
            out = e
        if out:
            ret["comment"].append(
                "Error setting flag {} in {}{}: {}".format(flag, device, number, out)
            )
            ret["result"] = False
        else:
            ret["changes"][flag] = True

    for flag in flags_to_unset:
        try:
            out = __salt__["partition.set"](device, number, flag, "off")
        except CommandExecutionError as e:
            out = e
        if out:
            ret["comment"].append(
                "Error unsetting flag {} in {}{}: {}".format(flag, device, number, out)
            )
            ret["result"] = False
        else:
            ret["changes"][flag] = False

    return ret


def _check_partition_name(device, number, name):
    """
    Check if the partition have this name.

    Returns a tri-state value:
      - `True`: the partition already have this label
      - `False`: the partition do not have this label
      - `None`: there is not such partition
    """
    number = str(number)
    partitions = _get_cached_partitions(device)
    if number in partitions:
        return partitions[number]["name"] == name


def named(name, device, partition=None):
    """
    Make sure that a gpt partition have set a name.

    name
        Name or label for the partition

    device
        Device name (/dev/sda, /dev/disk/by-id/scsi-...) or partition

    partition
        Partition number (can be in the device)

    """
    ret = {
        "name": name,
        "result": False,
        "changes": {},
        "comment": [],
    }

    if not partition:
        device, partition = re.search(r"(\D+)(\d*)", device).groups()
    if not partition:
        ret["comment"].append("Partition number not provided")

    if not _check_label(device, "gpt"):
        ret["comment"].append("Only gpt partitions can be named")

    name_match = _check_partition_name(device, partition, name)
    if name_match:
        ret["result"] = True
        ret["comment"].append(
            "Name of the partition {}{} is "
            'already "{}"'.format(device, partition, name)
        )
    elif name_match is None:
        ret["comment"].append("Partition {}{} not found".format(device, partition))

    if ret["comment"]:
        return ret

    if __opts__["test"]:
        ret["comment"].append(
            "Partition {}{} will be named " '"{}"'.format(device, partition, name)
        )
        ret["changes"]["name"] = "Name will be set to {}".format(name)
        return ret

    changes = __salt__["partition.name"](device, partition, name)
    _invalidate_cached_info()
    _invalidate_cached_partitions()

    if _check_partition_name(device, partition, name):
        ret["result"] = True
        ret["comment"].append("Name set to {} in {}{}".format(name, device, partition))
        ret["changes"]["name"] = changes
    else:
        ret["comment"].append("Failed to set name to {}".format(name))

    return ret


def _check_disk_flags(device, flag):
    """
    Return True if the flag for a device is already set.
    """
    flags = __salt__["partition.list"](device)["info"]["disk flags"]
    return flag in flags


def disk_set(name, flag, enabled=True):
    """
    Make sure that a disk flag is set or unset.

    name
        Device name (/dev/sda, /dev/disk/by-id/scsi-...)

    flag
        A valid parted disk flag (see ``parted.disk_set``)

    enabled
        Boolean value

    CLI Example:

    .. code-block:: bash

        salt '*' partitioned.disk_set /dev/sda pmbr_boot
        salt '*' partitioned.disk_set /dev/sda pmbr_boot False

    """
    ret = {
        "name": name,
        "result": False,
        "changes": {},
        "comment": [],
    }

    is_flag = _check_disk_flags(name, flag)
    if enabled == is_flag:
        ret["result"] = True
        ret["comment"].append(
            "Flag {} in {} already {}".format(flag, name, "set" if enabled else "unset")
        )
        return ret

    if __opts__["test"]:
        ret["comment"].append(
            "Flag {} in {} will be {}".format(flag, name, "set" if enabled else "unset")
        )
        ret["changes"][flag] = enabled
        return ret

    __salt__["partition.disk_set"](name, flag, "on" if enabled else "off")

    is_flag = _check_disk_flags(name, flag)
    if enabled == is_flag:
        ret["result"] = True
        ret["comment"].append(
            "Flag {} {} in {}".format(flag, "set" if enabled else "unset", name)
        )
        ret["changes"][flag] = enabled
    else:
        ret["comment"].append(
            "Failed to {} {} in {}".format("set" if enabled else "unset", flag, name)
        )

    return ret


def _check_partition_flags(device, number, flag):
    """
    Return True if the flag for a partition is already set.

    Returns a tri-state value:
      - `True`: the partition already have this flag
      - `False`: the partition do not have this flag
      - `None`: there is not such partition
    """
    number = str(number)
    partitions = __salt__["partition.list"](device)["partitions"]
    if number in partitions:
        return flag in partitions[number]["flags"]


def partition_set(name, flag, partition=None, enabled=True):
    """
    Make sure that a partition flag is set or unset.

    name
        Device name (/dev/sda, /dev/disk/by-id/scsi-...) or partition

    flag
        A valid parted disk flag (see ``parted.disk_set``)

    partition
        Partition number (can be in the device name)

    enabled
        Boolean value

    CLI Example:

    .. code-block:: bash

        salt '*' partitioned.partition_set /dev/sda1 bios_grub
        salt '*' partitioned.partition_set /dev/sda bios_grub 1 False

    """
    ret = {
        "name": name,
        "result": False,
        "changes": {},
        "comment": [],
    }

    if not partition:
        name, partition = re.search(r"(\D+)(\d*)", name).groups()
    if not partition:
        ret["comment"].append("Partition number not provided")

    is_flag = _check_partition_flags(name, partition, flag)
    if enabled == is_flag:
        ret["result"] = True
        ret["comment"].append(
            "Flag {} in {}{} already {}".format(
                flag, name, partition, "set" if enabled else "unset"
            )
        )
    elif is_flag is None:
        ret["comment"].append("Partition {}{} not found".format(name, partition))

    if ret["comment"]:
        return ret

    if __opts__["test"]:
        ret["comment"].append(
            "Flag {} in {}{} will be {}".format(
                flag, name, partition, "set" if enabled else "unset"
            )
        )
        ret["changes"][flag] = enabled
        return ret

    __salt__["partition.set"](name, partition, flag, "on" if enabled else "off")

    is_flag = _check_partition_flags(name, partition, flag)
    if enabled == is_flag:
        ret["result"] = True
        ret["comment"].append(
            "Flag {} {} in {}{}".format(
                flag, "set" if enabled else "unset", name, partition
            )
        )
        ret["changes"][flag] = enabled
    else:
        ret["comment"].append(
            "Failed to {} {} in {}{}".format(
                "set" if enabled else "unset", flag, name, partition
            )
        )
    return ret
0707010000002C000081A40000000000000000000000016130D1CF0000284A000000000000000000000000000000000000004200000000yomi-0.0.1+git.1630589391.4557cfd/salt/_states/snapper_install.py# -*- coding: utf-8 -*-
#
# Author: Alberto Planas <aplanas@suse.com>
#
# Copyright 2019 SUSE LLC.
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.

"""
:maintainer:    Alberto Planas <aplanas@suse.com>
:maturity:      new
:depends:       None
:platform:      Linux
"""
from __future__ import absolute_import, print_function, unicode_literals
import functools
import logging
import os.path
import tempfile
import traceback

log = logging.getLogger(__name__)

INSTALLATION_HELPER = "/usr/lib/snapper/installation-helper"
SNAPPER = "/usr/bin/snapper"

__virtualname__ = "snapper_install"


# Define not exported variables from Salt, so this can be imported as
# a normal module
try:
    __grains__
    __opts__
    __salt__
    __utils__
except NameError:
    __grains__ = {}
    __opts__ = {}
    __salt__ = {}
    __utils__ = {}


def __virtual__():
    """
    snapper_install requires the installation helper binary.

    """
    if not os.path.exists(INSTALLATION_HELPER):
        return (False, "{} binary not found".format(INSTALLATION_HELPER))
    return True


def _mount(device):
    """
    Mount the device in a temporary place.
    """
    dest = tempfile.mkdtemp()
    res = __salt__["mount.mount"](name=dest, device=device)
    if res is not True:
        log.error("Cannot mount device %s in %s", device, dest)
        _umount(dest)
        return None
    return dest


def _umount(path):
    """
    Umount and clean the temporary place.
    """
    __salt__["mount.umount"](path)
    __utils__["files.rm_rf"](path)


def __mount_device(action):
    """
    Small decorator to makes sure that the mount and umount happends in
    a transactional way.
    """

    @functools.wraps(action)
    def wrapper(*args, **kwargs):
        device = kwargs.get("device", args[1] if len(args) > 1 else None)

        ret = {
            "name": device,
            "result": False,
            "changes": {},
            "comment": ["Some error happends during the operation."],
        }
        try:
            dest = _mount(device)
            if not dest:
                msg = "Device {} cannot be mounted".format(device)
                ret["comment"].append(msg)
            kwargs["__dest"] = dest
            ret = action(*args, **kwargs)
        except Exception as e:
            log.error("""Traceback: {}""".format(traceback.format_exc()))
            ret["comment"].append(e)
        finally:
            _umount(dest)
        return ret

    return wrapper


def step_one(name, device, description):
    """
    Step one of the installation-helper tool

    name
        Name of the state

    device
        Device where to install snapper

    description
        Description for the fist snapshot

    """
    ret = {
        "name": name,
        "result": False,
        "changes": {},
        "comment": [],
    }

    # Mount the device and check if /etc/snapper/configs is present
    dest = _mount(device)
    if not dest:
        ret["comment"].append(
            "Fail mounting {} in temporal directory {}".format(device, dest)
        )
        return ret

    is_configs = os.path.exists(os.path.join(dest, "etc/snapper/configs"))
    _umount(dest)

    if is_configs:
        ret["result"] = None if __opts__["test"] else True
        ret["comment"].append("Step one already applied to {}".format(device))
        return ret

    if __opts__["test"]:
        ret["comment"].append("Step one will be applied to {}".format(device))
        return ret

    cmd = [
        INSTALLATION_HELPER,
        "--step",
        "1",
        "--device",
        device,
        "--description",
        description,
    ]
    res = __salt__["cmd.run_all"](cmd)

    if res["retcode"] or res["stderr"]:
        ret["comment"].append("Failed to execute step one {}".format(res["stderr"]))
    else:
        ret["result"] = True
        ret["changes"]["step one"] = True
    return ret


@__mount_device
def step_two(name, device, prefix=None, __dest=None):
    """
    Step two of the installation-helper tool

    name
       Name of the state

    device
        Device where to install snapper

    prefix
        Default root prefix for the subvolumes

    """
    ret = {
        "name": name,
        "result": False,
        "changes": {},
        "comment": [],
    }

    snapshots = os.path.join(__dest, ".snapshots")
    if os.path.exists(snapshots):
        ret["result"] = None if __opts__["test"] else True
        ret["comment"].append("Step two aleady applied to {}".format(device))
        return ret

    if __opts__["test"]:
        ret["comment"].append("Step two will be applied to {}".format(device))
        return ret

    cmd = [
        INSTALLATION_HELPER,
        "--step",
        "2",
        "--device",
        device,
        "--root-prefix",
        __dest,
    ]

    if prefix:
        cmd.extend(["--default-subvolume-name", prefix])

    res = __salt__["cmd.run_all"](cmd)

    if res["retcode"] or res["stderr"]:
        ret["comment"].append("Failed to execute step two {}".format(res["stderr"]))
    else:
        ret["result"] = True
        ret["changes"]["step two"] = True

    # Internally step two mounts a new subvolume called .snapshots
    for i in range(5):
        res = __salt__["mount.umount"](snapshots)
        if res is not True:
            log.warning("Retry %s: Failed to umount %s: %s", i, snapshots, res)
        else:
            break
    else:
        # We fail to umount .snapshots directory, bit the installation
        # step was properly executed, so we still return True
        ret["comment"].append("Failed to umount {}: {}".format(snapshots, res))

    return ret


def step_four(name, root):
    """
    Step four of the installation-helper tool

    name
        Name of the state

    root
        Target directory where to chroot

    """
    ret = {
        "name": name,
        "result": False,
        "changes": {},
        "comment": [],
    }

    if os.path.exists(os.path.join(root, ".snapshots/grub-snapshot.cfg")):
        ret["result"] = None if __opts__["test"] else True
        ret["comment"].append("Step four already applied to {}".format(root))
        return ret

    if __opts__["test"]:
        ret["comment"].append("Step four will be applied to {}".format(root))
        return ret

    cmd = [INSTALLATION_HELPER, "--step", "4"]
    res = __salt__["cmd.run_chroot"](root, cmd)

    if res["retcode"] or res["stderr"]:
        ret["comment"].append("Failed to execute step four {}".format(res["stderr"]))
        return ret

    # Set the initial configuration and quota as YaST is doing
    cmd = [
        SNAPPER,
        "--no-dbus",
        "set-config",
        "NUMBER_CLEANUP=yes",
        "NUMBER_LIMIT=2-10",
        "NUMBER_LIMIT_IMPORTANT=4-10",
        "TIMELINE_CREATE=no",
    ]
    res = __salt__["cmd.run_chroot"](root, cmd)

    if res["retcode"] or res["stderr"]:
        ret["comment"].append(
            "Failed to set configuration in step four {}".format(res["stderr"])
        )
        return ret

    cmd = [SNAPPER, "--no-dbus", "setup-quota"]
    res = __salt__["cmd.run_chroot"](root, cmd)

    if res["retcode"] or res["stderr"]:
        ret["comment"].append(
            "Failed to set quota in step four {}".format(res["stderr"])
        )
        return ret

    ret["result"] = True
    ret["changes"]["step four"] = True
    return ret


def step_five(name, root, snapshot_type, description, important, cleanup):
    """
    Step five of the installation-helper tool

    name
        Name of the state

    root
        Target directory where to chroot

    snapshot_type
        Type of snapshot: {single, pre, post}

    description
        Description for the snapshot

    important
        Is the snapshot important

    cleanup
        Type or snapper cleanup angorithm: {number, timeline,
        empty-pre-post}

    """
    ret = {
        "name": name,
        "result": False,
        "changes": {},
        "comment": [],
    }

    if snapshot_type not in ("single", "pre", "post"):
        ret["comment"].append("Value for snapshot_type not recognized")
        return ret

    if not description:
        ret["comment"].append("Value for description is empty")
        return ret

    if cleanup not in ("number", "timeline", " empty-pre-post "):
        ret["comment"].append("Value for cleanup not recognized")
        return ret

    cmd = [SNAPPER, "--no-dbus", "list"]
    res = __salt__["cmd.run_chroot"](root, cmd)

    if res["retcode"] or res["stderr"]:
        ret["comment"].append(
            "Failed to list snapshots in step five {}".format(res["stderr"])
        )
        return ret

    if description in res["stdout"]:
        ret["result"] = None if __opts__["test"] else True
        ret["comment"].append("Step five already applied to {}".format(root))
        return ret

    if __opts__["test"]:
        ret["comment"].append("Step five will be applied to {}".format(root))
        return ret

    cmd = [
        INSTALLATION_HELPER,
        "--step",
        "5",
        "--snapshot-type",
        snapshot_type,
        "--description",
        '"{}"'.format(description),
        "--userdata",
        "important={}".format("yes" if important else "no"),
        "--cleanup",
        cleanup,
    ]
    res = __salt__["cmd.run_chroot"](root, cmd)

    if res["retcode"] or res["stderr"]:
        ret["comment"].append("Failed to execute step five {}".format(res["stderr"]))
    else:
        ret["result"] = True
        ret["changes"]["step five"] = True
    return ret
0707010000002D000081A40000000000000000000000016130D1CF00001A3F000000000000000000000000000000000000003E00000000yomi-0.0.1+git.1630589391.4557cfd/salt/_states/suseconnect.py# -*- coding: utf-8 -*-
#
# Author: Alberto Planas <aplanas@suse.com>
#
# Copyright 2019 SUSE LLC.
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.

"""
:maintainer:    Alberto Planas <aplanas@suse.com>
:maturity:      new
:depends:       None
:platform:      Linux
"""
from __future__ import absolute_import, print_function, unicode_literals

import logging
import re

from salt.exceptions import CommandExecutionError

LOG = logging.getLogger(__name__)

__virtualname__ = "suseconnect"

# Define not exported variables from Salt, so this can be imported as
# a normal module
try:
    __opts__
    __salt__
    __states__
except NameError:
    __opts__ = {}
    __salt__ = {}
    __states__ = {}


def __virtual__():
    """
    SUSEConnect module is required
    """
    return "suseconnect.register" in __salt__


def _status(root):
    """
    Return the list of resitered modules and subscriptions
    """
    status = __salt__["suseconnect.status"](root=root)
    registered = [
        "{}/{}/{}".format(i["identifier"], i["version"], i["arch"])
        for i in status
        if i["status"] == "Registered"
    ]
    subscriptions = [
        "{}/{}/{}".format(i["identifier"], i["version"], i["arch"])
        for i in status
        if i.get("subscription_status") == "ACTIVE"
    ]
    return registered, subscriptions


def _is_registered(product, root):
    """
    Check if a product is registered
    """
    # If the user provides a product, and the product is registered,
    # or if the user do not provide a product name, but some
    # subscription is active, we consider that there is nothing else
    # to do.
    registered, subscriptions = _status(root)
    if (product and product in registered) or (not product and subscriptions):
        return True
    return False


def registered(name, regcode=None, product=None, email=None, url=None, root=None):
    """
    .. versionadded:: TBD

    Register SUSE Linux Enterprise installation with the SUSE Customer
    Center

    name
       If follows the product name rule, will be the name of the
       product.

    regcode
       Subscription registration code for the product to be
       registered. Relates that product to the specified subscription,
       and enables software repositories for that product.

    product
       Specify a product for activation/deactivation. Only one product
       can be processed at a time. Defaults to the base SUSE Linux
       Enterprose product on this system.
       Format: <name>/<version>/<architecture>

    email
       Email address for product registration

    url
       URL for the registration server (will be saved for the next
       use) (e.g. https://scc.suse.com)

    root
       Path to the root folder, uses the same parameter for zypper

    """
    ret = {
        "name": name,
        "result": False,
        "changes": {},
        "comment": [],
    }

    if not product and re.match(r"[-\w]+/[-\w\.]+/[-\w]+", name):
        product = name
    name = product if product else "default"

    if _is_registered(product, root):
        ret["result"] = True
        ret["comment"].append("Product or module {} already registered".format(name))
        return ret

    if __opts__["test"]:
        ret["result"] = None
        ret["comment"].append("Product or module {} would be registered".format(name))
        ret["changes"][name] = True
        return ret

    try:
        __salt__["suseconnect.register"](
            regcode, product=product, email=email, url=url, root=root
        )
    except CommandExecutionError as e:
        ret["comment"].append("Error registering {}: {}".format(name, e))
        return ret

    ret["changes"][name] = True

    if _is_registered(product, root):
        ret["result"] = True
        ret["comment"].append("Product or module {} registered".format(name))
    else:
        ret["comment"].append("Product or module {} failed to register".format(name))

    return ret


def deregistered(name, product=None, url=None, root=None):
    """
    .. versionadded:: TBD

    De-register the system and base product, or in cojuntion with
    'product', a single extension, and removes all its services
    installed by SUSEConnect. After de-registration the system no
    longer consumes a subscription slot in SCC.

    name
       If follows the product name rule, will be the name of the
       product.

    product
       Specify a product for activation/deactivation. Only one product
       can be processed at a time. Defaults to the base SUSE Linux
       Enterprose product on this system.
       Format: <name>/<version>/<architecture>

    url
       URL for the registration server (will be saved for the next
       use) (e.g. https://scc.suse.com)

    root
       Path to the root folder, uses the same parameter for zypper

    """
    ret = {
        "name": name,
        "result": False,
        "changes": {},
        "comment": [],
    }

    if not product and re.match(r"[-\w]+/[-\w\.]+/[-\w]+", name):
        product = name
    name = product if product else "default"

    if not _is_registered(product, root):
        ret["result"] = True
        ret["comment"].append("Product or module {} already deregistered".format(name))
        return ret

    if __opts__["test"]:
        ret["result"] = None
        ret["comment"].append("Product or module {} would be deregistered".format(name))
        ret["changes"][name] = True
        return ret

    try:
        __salt__["suseconnect.deregister"](product=product, url=url, root=root)
    except CommandExecutionError as e:
        ret["comment"].append("Error deregistering {}: {}".format(name, e))
        return ret

    ret["changes"][name] = True

    if not _is_registered(product, root):
        ret["result"] = True
        ret["comment"].append("Product or module {} deregistered".format(name))
    else:
        ret["comment"].append("Product or module {} failed to deregister".format(name))

    return ret
0707010000002E000041ED0000000000000000000000026130D1CF00000000000000000000000000000000000000000000002E00000000yomi-0.0.1+git.1630589391.4557cfd/salt/_utils0707010000002F000081A40000000000000000000000016130D1CF00000689000000000000000000000000000000000000003600000000yomi-0.0.1+git.1630589391.4557cfd/salt/_utils/disk.py# -*- coding: utf-8 -*-
#
# Author: Alberto Planas <aplanas@suse.com>
#
# Copyright 2019 SUSE LLC.
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.

import re


class ParseException(Exception):
    pass


def units(value, default="MB"):
    """
    Split a value expressed (optionally) with units.

    Returns the tuple (value, unit)
    """
    valid_units = (
        "s",
        "B",
        "kB",
        "MB",
        "MiB",
        "GB",
        "GiB",
        "TB",
        "TiB",
        "%",
        "cyl",
        "chs",
        "compact",
    )
    match = re.search(r"^([\d.]+)(\D*)$", str(value))
    if match:
        value, unit = match.groups()
        unit = unit if unit else default
        if unit in valid_units:
            return (float(value), unit)
        else:
            raise ParseException("{} not recognized as a valid unit".format(unit))
    raise ParseException("{} cannot be parsed".format(value))
07070100000030000081A40000000000000000000000016130D1CF00004145000000000000000000000000000000000000003400000000yomi-0.0.1+git.1630589391.4557cfd/salt/_utils/lp.py# -*- coding: utf-8 -*-
#
# Author: Alberto Planas <aplanas@suse.com>
#
# Copyright 2019 SUSE LLC.
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.

EQ = "="
LTE = "<="
GTE = ">="

MINIMIZE = "-"
MAXIMIZE = "+"


def _vec_scalar(vector, scalar):
    """Multiply a vector by an scalar."""
    return [v * scalar for v in vector]


def _vec_vec_scalar(vector_a, vector_b, scalar):
    """Linear combination of two vectors and a scalar."""
    return [a * scalar + b for a, b in zip(vector_a, vector_b)]


def _vec_plus_vec(vector_a, vector_b):
    """Sum of two vectors."""
    return [a + b for a, b in zip(vector_a, vector_b)]


class Model:
    """Class that represent a linear programming problem."""

    def __init__(self, variables):
        """Create a model with named variables."""
        # All variables are bound and >= 0. We do not support
        # unbounded variables.
        self.variables = variables

        self._constraints = []
        self._cost_function = None

        self._slack_variables = []
        self._standard_constraints = []
        self._standard_cost_function = None

        self._canonical_constraints = []
        self._canonical_cost_function = None
        self._canonical_artificial_function = None

    def add_constraint(self, coefficients, operator, free_term):
        """Add a constraint in non-standard form."""
        # We can express constraints in a general form as:
        #
        #   a_1 x_1 + a_2 x_2 + ... + a_n x_n <= b
        #
        # For this case the values are:
        #   * coefficients = [a_1, a_2, ..., a_n]
        #   * operator = '<='
        #   * free_term = b
        #
        assert len(coefficients) == len(self.variables), (
            "Coefficients length must match the number of variables"
        )
        assert operator in (EQ, LTE, GTE), "Operator not valid"
        self._constraints.append((coefficients, operator, free_term))

    def add_cost_function(self, action, coefficients, free_term):
        """Add a cost function in non-standard form."""
        # We can express a cost function as:
        #
        #   Miminize: z = c_1 x_1 + c_2 x_2 + ... + c_n x_n + z_0
        #
        # For this case the values are:
        #   * action = '-'
        #   * coefficients = [c_1, c_2, ..., c_n]
        #   * free_term = z_0
        #
        assert action in (MINIMIZE, MAXIMIZE), "Action not valid"
        assert len(coefficients) == len(self.variables), (
            "Coefficients length must match the number of variables"
        )
        self._cost_function = (action, coefficients, free_term)

    def _coeff(self, coefficients):
        """Translate a coefficients dictionary into a list."""
        coeff = [0] * len(self.variables)
        for idx, variable in enumerate(self.variables):
            coeff[idx] = coefficients.get(variable, 0)
        return coeff

    def add_constraint_named(self, coefficients, operator, free_term):
        """Add a constraint in non-standard form."""
        self.add_constraint(self._coeff(coefficients), operator, free_term)

    def add_cost_function_named(self, action, coefficients, free_term):
        """Add a cost function in non-standard form."""
        self.add_cost_function(action, self._coeff(coefficients), free_term)

    def simplex(self):
        """Resolve a linear programing model."""
        self._convert_to_standard_form()
        self._convert_to_canonical_form()
        tableau = self._build_tableau_canonical_form()
        tableau.simplex()
        tableau.drop_artificial()
        tableau.simplex()

        constraints = tableau.constraints()
        solution = {i: 0 for i in self.variables}
        for idx_cons, idx_var in enumerate(tableau._basic_variables):
            try:
                variable = self.variables[idx_var]
                solution[variable] = constraints[idx_cons][-1]
            except IndexError:
                pass
        return solution

    def _convert_to_standard_form(self):
        """Convert constraints and cost function to standard form."""
        slack_vars = len([c for c in self._constraints if c[1] != EQ])

        self._standard_constraints = []
        slack_var_idx = 0
        base_slack_var_idx = len(self.variables)
        for coefficients, operator, free_term in self._constraints:
            slack_coeff = [0] * slack_vars
            if operator in (LTE, GTE):
                slack_coeff[slack_var_idx] = 1 if operator == LTE else -1
                self._slack_variables.append(base_slack_var_idx + slack_var_idx)
                slack_var_idx += 1
            self._standard_constraints.append((coefficients + slack_coeff, free_term))

        # Adjust the cost function
        action, coefficients, free_term = self._cost_function
        slack_coeff = [0] * slack_vars
        if action == MAXIMIZE:
            coefficients = _vec_scalar(coefficients, -1)
        self._standard_cost_function = (coefficients + slack_coeff, -free_term)

    def _convert_to_canonical_form(self):
        """Convert the model into canonical form."""
        artificial_vars = len(self._constraints)

        self._canonical_constraints = []
        artificial_var_idx = 0

        slack_vars = len([c for c in self._constraints if c[1] != EQ])
        coeff_acc = [0] * (len(self.variables) + slack_vars)

        free_term_acc = 0
        for coefficients, free_term in self._standard_constraints:
            if free_term < 0:
                coefficients = _vec_scalar(coefficients, -1)
                free_term *= -1
            artificial_coeff = [0] * artificial_vars
            artificial_coeff[artificial_var_idx] = 1
            artificial_var_idx += 1
            self._canonical_constraints.append(
                (coefficients + artificial_coeff, free_term)
            )

            coeff_acc = _vec_plus_vec(coeff_acc, coefficients)
            free_term_acc += free_term

        coefficients, free_term = self._standard_cost_function
        artificial_coeff = [0] * artificial_vars
        self._canonical_cost_function = (coefficients + artificial_coeff, free_term)

        coeff_acc = _vec_scalar(coeff_acc, -1)
        self._canonical_artificial_function = (
            coeff_acc + artificial_coeff,
            -free_term_acc,
        )

    def _build_tableau_canonical_form(self):
        """Build the tableau related with the canonical form."""
        # Total number of variables
        n = len(self._canonical_artificial_function[0])
        # Basic variables (in canonical form there is one per constraint)
        m = len(self._constraints)
        tableau = Tableau(n, m)
        canonical_constraints = enumerate(self._canonical_constraints)
        for (idx, (coefficients, free_term)) in canonical_constraints:
            tableau.add_constraint(coefficients + [free_term], n - m + idx)

        coefficients, free_term = self._canonical_cost_function
        tableau.add_cost_function(coefficients + [free_term])

        coefficients, free_term = self._canonical_artificial_function
        tableau.add_artificial_function(coefficients + [free_term])
        return tableau

    def _str_coeff(self, coefficients):
        """Transform a coefficient array into a string."""
        result = []
        for coefficient, variable in zip(coefficients, self.variables):
            if result:
                result.append("+" if coefficient >= 0 else "-")
                coefficient = abs(coefficient)
            result.append("{} {}".format(coefficient, variable))
        return " ".join(result)

    def __str__(self):
        result = []
        """String representation of a model."""
        if self._cost_function:
            result.append(
                {MINIMIZE: "Minimize:", MAXIMIZE: "Maximize:"}[self._cost_function[0]]
            )
            free_term = self._cost_function[2]
            free_term_sign = "+" if free_term >= 0 else "-"
            z = " ".join(
                (
                    self._str_coeff(self._cost_function[1]),
                    free_term_sign,
                    str(abs(free_term)),
                )
            )
            result.append("  " + z)
            result.append("")

        result.append("Subject to:")
        for constraint in self._constraints:
            c = " ".join(
                (self._str_coeff(constraint[0]), constraint[1], str(constraint[2]))
            )
            result.append("  " + c)

        c = ", ".join(self.variables) + " >= 0"
        result.append("  " + c)

        return "\n".join(result)


class Tableau:
    # To sumarize the steps of the simplex method, starting with the
    # problem in canonical form.
    #
    # 1. if all c_j >= 0, the minimum value of the objective function
    #    has been achieved.
    #
    # 2. If there exists an s such that c_s < 0 and a_{is} <= 0 for
    #    all i, the objective function is not bounded below.
    #
    # 3. Otherwise, pivot. To determine the pivot term:
    #
    #    (a) Pivot in any column with a negative c_j term. If there
    #    are several negative c_j's, pivoting in the column with the
    #    smallest c_j may reduce the total number of steps necessary
    #    to complete the problem. Assume that we pivot column s.
    #
    #    (b) To determine the row of the pivot of the pivot term, find
    #    that row, say row r, such that
    #
    #      b_r / a_{rs} = Min { b_i / a_{is}: a_{is} > 0 }
    #
    #    Notice that here only those ratios b_i / a_{is} with a_{is} >
    #    0 are considered. If the minimum of there ratios is attained
    #    in several rows, a simple rule such as choosing the row with
    #    the smallest index can be used to determine the pivoting row.
    #
    # 4. After pivoting, the problem remains in canonical form at a
    #    different basic feasible solution. Now return to step 1.
    #
    # If the problem contains a degenerate b.f.s., proceed as above.

    def __init__(self, n, m):
        self.n = n
        self.m = m

        self._basic_variables = []
        self._tableau = []

        self._artificial = False

    def add_constraint(self, constraint, basic_variable):
        """Add a contraint into the tableau."""
        assert len(constraint) == self.n + 1, "Wrong size for the constraint"
        assert (
            basic_variable not in self._basic_variables
        ), "Basic variable is already registered"
        assert (
            len(self._basic_variables) == len(self._tableau)
            and len(self._tableau) < self.m
        ), "Too many constraints registered"

        self._basic_variables.append(basic_variable)
        self._tableau.append(constraint)

    def add_cost_function(self, cost_function):
        """Add the const function in the tableau."""
        assert len(cost_function) == self.n + 1, "Wrong size for the cost function"
        assert (
            len(self._basic_variables) == len(self._tableau)
            and len(self._tableau) == self.m
        ), "Too few constraints registered"

        self._tableau.append(cost_function)

    def add_artificial_function(self, artificial_function):
        """Add the artificial function in the tableau."""
        assert (
            len(artificial_function) == self.n + 1
        ), "Wrong size for the cost function"
        assert (
            len(self._basic_variables) == len(self._tableau) - 1
            and len(self._tableau) == self.m + 1
        ), ("Too few constraints or not cost function registered")

        self._artificial = True
        self._tableau.append(artificial_function)

    def constraints(self):
        """Return the constraints in the tableau."""
        last = -1 if not self._artificial else -2
        return self._tableau[:last]

    def cost_function(self):
        """Return the cost function in the tableau."""
        # If we use the artificial cost function, is still in the last
        # position.
        return self._tableau[-1]

    def drop_artificial(self):
        """Transform the tableau in one without artificial variables."""
        assert self._artificial, "Tableau already without artificial variables"
        assert self.is_minimum(), "Tableau is not in minimum state"

        # Check that the basic variables are not artificial variables
        artificial_variables = range(self.n - self.m, self.n)
        assert not any(
            i in self._basic_variables for i in artificial_variables
        ), "At least one artificial variable is a basic variable"

        # Remove the artificial cost function
        self._tableau.pop()

        # Drop all artificial variable coefficients
        tableau = []
        for line in self._tableau:
            tableau.append(line[: -self.m - 1] + [line[-1]])
        self._tableau = tableau

        self._artificial = False

    def simplex(self):
        """Resolve the constraints via the simplex algorithm."""
        while not self.is_minimum():
            column = self._get_pivoting_column()
            row = self._get_pivoting_row(column)
            self._pivote(row, column)
            self._basic_variables[row] = column

    def is_canonical(self):
        """Check if is in canonical form."""
        result = True

        # The system of constraints is in canonical form
        for idx, constraint in zip(self._basic_variables, self.constraints()):
            result = result and all(
                constraint[i] == (1 if idx == i else 0) for i in self._basic_variables
            )

        # We need to check that the associated basic solution is
        # feasible. But we separate this check in a different method.
        # result = result and self.is_basic_feasible_solution()

        # The objective function is expressed in therms of only the
        # nonbasic variables
        cost_function = self.cost_function()
        result = result and all(cost_function[i] == 0 for i in self._basic_variables)
        return result

    def is_minimum(self):
        """Check if the cost function is already minimized."""
        return all(c >= 0 for c in self.cost_function()[:-1])

    def is_basic_feasible_solution(self):
        """Check if there is a basic feasible solution."""
        assert self.is_canonical(), "Tableau is not in canonical form"

        if self._artificial:
            assert self.is_minimum(), (
                "If there are artificial variables, we need to be minimized."
            )
            return self.cost_functions()[-1] == 0
        else:
            return all(c[-1] >= 0 for c in self.constraints())

    def is_bound(self):
        """Check if the cost function is bounded."""
        candidates_idx = [i for i, c in enumerate(self.cost_function()[:-1]) if c < 0]
        return all(
            all(row[i] >= 0 for row in self.constraints()) for i in candidates_idx
        )

    def _get_pivoting_column(self):
        """Returm the column number where we can pivot."""
        candidates = [(i, c) for i, c in enumerate(self.cost_function()[:-1]) if c < 0]
        assert candidates, "Cost function already minimal."
        return min(candidates, key=lambda x: x[1])[0]

    def _get_pivoting_row(self, column):
        """Return the row number where we can pivot."""
        candidates = [
            (i, row[-1] / row[column])
            for i, row in enumerate(self.constraints())
            if row[column] > 0
        ]
        # NOTE(aplanas): Not sure that this is the case
        assert candidates, "Not basic feasible solution found."
        return min(candidates, key=lambda x: x[1])[0]

    def _pivote(self, row, column):
        """Pivote the tableau in (row, column)."""
        # Normalize the row
        vec = _vec_scalar(self._tableau[row], 1 / self._tableau[row][column])
        self._tableau[row] = vec
        for row_b, vec_b in enumerate(self._tableau):
            if row_b != row:
                self._tableau[row_b] = _vec_vec_scalar(vec, vec_b, -vec_b[column])
07070100000031000081A40000000000000000000000016130D1CF000006CF000000000000000000000000000000000000003200000000yomi-0.0.1+git.1630589391.4557cfd/salt/macros.yml# Author: Alberto Planas <aplanas@suse.com>
#
# Copyright 2019 SUSE LLC.
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.
{% set config = pillar['config'] %}

{% macro send_enter(name) -%}
event_{{ name }}_enter:
  module.run:
    - event.send:
      - tag: yomi/{{ name }}/enter
      # - with_grains: [id, hwaddr_interfaces]
{%- endmacro %}

{% macro send_success(state, name) -%}
event_{{ name }}_success:
  module.run:
    - event.send:
      - tag: yomi/{{ name }}/success
      # - with_grains: [id, hwaddr_interfaces]
    - onchanges:
      - {{ state }}: {{ name }}
{%- endmacro %}

{% macro send_fail(state, name) -%}
event_{{ name }}_fail:
  module.run:
    - event.send:
      - tag: yomi/{{ name }}/fail
      # - with_grains: [id, hwaddr_interfaces]
    - onfail:
      - {{ state }}: {{ name }}
{%- endmacro %}

{% macro log(state, name) -%}
{% if config.get('events', True) %}
{{ send_enter(name) }}
{{ send_success(state, name) }}
{{ send_fail(state, name) }}
{% endif %}
{%- endmacro %}

07070100000032000081A40000000000000000000000016130D1CF00000018000000000000000000000000000000000000002F00000000yomi-0.0.1+git.1630589391.4557cfd/salt/top.slsbase:
  '*':
    - yomi
07070100000033000041ED0000000000000000000000076130D1CF00000000000000000000000000000000000000000000002C00000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi07070100000034000081A40000000000000000000000016130D1CF0000016C000000000000000000000000000000000000004000000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/_default_target.sls{% import 'macros.yml' as macros %}

{% set config = pillar['config'] %}
{% set target = config.get('target', 'multi-user.target') %}

{{ macros.log('cmd', 'systemd_set_target') }}
systemd_set_target:
  cmd.run:
    - name: systemctl set-default {{ target }}
    - unless: readlink -f /mnt/etc/systemd/system/default.target | grep -q {{ target }}
    - root: /mnt
07070100000035000081A40000000000000000000000016130D1CF000004A0000000000000000000000000000000000000003B00000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/_firstboot.sls{% import 'macros.yml' as macros %}

{% set config = pillar['config'] %}

# We execute the systemctl call inside the chroot, so we can guarantee
# that will work on containers
{{ macros.log('module', 'systemd_firstboot') }}
systemd_firstboot:
  module.run:
    - chroot.call:
      - root: /mnt
      - function: service.firstboot
      - locale: {{ config.get('locale', 'en_US.utf8') }}
{% if config.get('locale_messages') %}
      - locale_message: {{ config['locale_messages'] }}
{% endif %}
      - keymap: {{ config.get('keymap', 'us') }}
      - timezone: {{ config.get('timezone', 'UTC') }}
{% if config.get('hostname') %}
      - hostname: {{ config['hostname'] }}
{% endif %}
{% if config.get('machine_id') %}
      - machine_id: {{ config['machine_id'] }}
{% endif %}
    - creates:
        - /mnt/etc/hostname
        - /mnt/etc/locale.conf
        - /mnt/etc/localtime
        - /mnt/etc/machine-id
        - /mnt/etc/vconsole.conf

{% if not config.get('machine_id') %}
{{ macros.log('module', 'create_machine-id') }}
create_machine-id:
  module.run:
    - file.copy:
      - src: /etc/machine-id
      - dst: /mnt/etc/machine-id
      - remove_existing: yes
{% endif %}
07070100000036000041ED0000000000000000000000026130D1CF00000000000000000000000000000000000000000000003700000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/bootloader07070100000037000081A40000000000000000000000016130D1CF00000327000000000000000000000000000000000000004900000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/bootloader/grub2_install.sls{% import 'macros.yml' as macros %}

{% set bootloader = pillar['bootloader'] %}
{% set arch = {'aarch64': 'arm64'}.get(grains['cpuarch'], grains['cpuarch'])%}

{{ macros.log('cmd', 'grub2_install') }}
grub2_install:
  cmd.run:
{% if grains['efi'] %}
  {% if grains['efi-secure-boot'] %}
    - name: shim-install --config-file=/boot/grub2/grub.cfg
  {% else %}
    - name: grub2-install --target={{ arch }}-efi --efi-directory=/boot/efi --bootloader-id=GRUB
  {% endif %}
    - creates: /mnt/boot/efi/EFI/GRUB
{% else %}
    - name: grub2-install --force {{ bootloader.device }}
    - creates: /mnt/boot/grub2/i386-pc/normal.mod
{% endif %}
{% if pillar.get('lvm') %}
    - binds: [/run]
    - env:
      - LVM_SUPPRESS_FD_WARNINGS: 1
{% endif %}
    - root: /mnt
    - require:
      - cmd: grub2_mkconfig
07070100000038000081A40000000000000000000000016130D1CF0000086B000000000000000000000000000000000000004A00000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/bootloader/grub2_mkconfig.sls{% import 'macros.yml' as macros %}

{% set config = pillar['config'] %}
{% set bootloader = pillar['bootloader'] %}

{% if config.get('snapper') %}
include:
  {% if config.get('snapper') %}
  - ..storage.snapper.grub2_mkconfig
  {% endif %}
{% endif %}

{% if grains['efi'] and grains['cpuarch'] != 'aarch64' %}
{{ macros.log('file', 'config_grub2_efi') }}
config_grub2_efi:
  file.append:
    - name: /mnt/etc/default/grub
    - text: GRUB_USE_LINUXEFI="true"
{% endif %}

{% if bootloader.get('theme') %}
{{ macros.log('file', 'config_grub2_theme') }}
config_grub2_theme:
  file.append:
    - name: /mnt/etc/default/grub
    - text:
      - GRUB_TERMINAL="{{ bootloader.get('terminal', 'gfxterm') }}"
      - GRUB_GFXMODE="{{ bootloader.get('gfxmode', 'auto') }}"
      - GRUB_BACKGROUND=
      # - GRUB_THEME="/boot/grub2/themes/openSUSE/theme.txt"
{% endif %}

{{ macros.log('file', 'config_grub2_resume') }}
config_grub2_resume:
  file.append:
    - name: /mnt/etc/default/grub
    - text:
      - GRUB_TIMEOUT={{ bootloader.get('timeout', 8) }}
{% if not pillar.get('lvm') %}
      - GRUB_DEFAULT="saved"
      # - GRUB_SAVEDEFAULT="true"
{% endif %}

{% set serial_command = bootloader.get('serial_command')%}
{{ macros.log('file', 'config_grub2_config') }}
config_grub2_config:
  file.append:
    - name: /mnt/etc/default/grub
    - text:
      - GRUB_CMDLINE_LINUX_DEFAULT="{{ bootloader.get('kernel', 'splash=silent quiet') }}"
      - GRUB_DISABLE_OS_PROBER="{{ true if bootloader.get('disable_os_prober') else false }}"
{% if serial_command %}
      - GRUB_TERMINAL="serial"
      - GRUB_SERIAL_COMMAND="{{ serial_command }}"
{% endif %}

{{ macros.log('cmd', 'grub2_set_default') }}
grub2_set_default:
  cmd.run:
    - name: (source /etc/os-release; grub2-set-default "${PRETTY_NAME}")
    - root: /mnt
    - onlyif: "[ -e /mnt/etc/os-release ]"
    - watch:
      - file: /mnt/etc/default/grub

{{ macros.log('cmd', 'grub2_mkconfig') }}
grub2_mkconfig:
  cmd.run:
    - name: grub2-mkconfig -o /boot/grub2/grub.cfg
    - root: /mnt
{% if pillar.get('lvm') %}
    - binds: [/run]
{% endif %}
    - watch:
      - file: /mnt/etc/default/grub
07070100000039000081A40000000000000000000000016130D1CF00000055000000000000000000000000000000000000004000000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/bootloader/init.sls{% set config = pillar['config'] %}

include:
  - .grub2_mkconfig
  - .grub2_install
0707010000003A000081A40000000000000000000000016130D1CF00000325000000000000000000000000000000000000004400000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/bootloader/software.sls{% import 'macros.yml' as macros %}

{% set bootloader = pillar['bootloader'] %}
{% set arch = {'aarch64': 'arm64'}.get(grains['cpuarch'], grains['cpuarch'])%}

{% set software = pillar['software'] %}
{% set software_config = software.get('config', {}) %}

{{ macros.log('pkg', 'install_grub2') }}
install_grub2:
  pkg.installed:
    - pkgs:
      - grub2
{% if bootloader.get('theme') %}
      - grub2-branding
{% endif %}
{% if grains['efi'] %}
      - grub2-{{ arch }}-efi
  {% if grains['efi-secure-boot'] %}
      - shim
  {% endif %}
{% endif %}
    - resolve_capabilities: yes
  {% if software_config.get('minimal') %}
    - no_recommends: yes
  {% endif %}
  {% if not software_config.get('verify') %}
    - skip_verify: yes
  {% endif %}
    - root: /mnt
    - require:
      - mount: mount_/mnt
0707010000003B000041ED0000000000000000000000026130D1CF00000000000000000000000000000000000000000000003300000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/chroot0707010000003C000081A40000000000000000000000016130D1CF0000015F000000000000000000000000000000000000003D00000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/chroot/mount.sls{% import 'macros.yml' as macros %}

{% for fstype, fs_file in (('devtmpfs', '/mnt/dev'), ('proc', '/mnt/proc'), ('sysfs', '/mnt/sys')) %}
{{ macros.log('mount', 'mount_' ~ fs_file) }}
mount_{{ fs_file }}:
  mount.mounted:
    - name: {{ fs_file }}
    - device: {{ fstype }}
    - fstype: {{ fstype }}
    - mkmnt: yes
    - persist: no
{% endfor %}
0707010000003D000081A40000000000000000000000016130D1CF00000131000000000000000000000000000000000000004400000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/chroot/post_install.sls{% import 'macros.yml' as macros %}

{{ macros.log('module', 'unfreeze_chroot') }}
unfreeze_chroot:
  module.run:
    - freezer.restore:
      - name: yomi-chroot
      - clean: True
      - includes: [pattern]
      - root: /mnt
    - onlyif: "[ -e /var/cache/salt/minion/freezer/yomi-chroot-pkgs.yml ]"
0707010000003E000081A40000000000000000000000016130D1CF000002E6000000000000000000000000000000000000004000000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/chroot/software.sls{% import 'macros.yml' as macros %}

{% set software = pillar['software'] %}
{% set software_config = software.get('config', {}) %}

{{ macros.log('module', 'freeze_chroot') }}
freeze_chroot:
  module.run:
    - freezer.freeze:
      - name: yomi-chroot
      - includes: [pattern]
      - root: /mnt
    - unless: "[ -e /var/cache/salt/minion/freezer/yomi-chroot-pkgs.yml ]"

{{ macros.log('pkg', 'install_python3-base') }}
install_python3-base:
  pkg.installed:
    - name: python3-base
    - resolve_capabilities: yes
  {% if software_config.get('minimal') %}
    - no_recommends: yes
  {% endif %}
  {% if not software_config.get('verify') %}
    - skip_verify: yes
  {% endif %}
    - root: /mnt
    - require:
      - mount: mount_/mnt
0707010000003F000081A40000000000000000000000016130D1CF00000104000000000000000000000000000000000000003E00000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/chroot/umount.sls{% import 'macros.yml' as macros %}

{% for fs_file in ('/mnt/sys', '/mnt/proc', '/mnt/dev' ) %}
{{ macros.log('mount', 'umount_' ~ fs_file) }}
umount_{{ fs_file }}:
  mount.unmounted:
    - name: {{ fs_file }}
    - requires: mount_{{ fs_file }}
{% endfor %}
07070100000040000081A40000000000000000000000016130D1CF000001E6000000000000000000000000000000000000003500000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/init.sls{% set filesystems = pillar['filesystems'] %}

{% set ns = namespace(installed=False) %}
{% for device, info in filesystems.items() %}
  {% if info.get('mountpoint') == '/' %}
    {% if salt.cmd.run('findmnt --list --noheadings --output SOURCE /') == device %}
      {% set ns.installed = True %}
    {% endif %}
  {% endif %}
{% endfor %}

{% if not ns.installed %}
include:
  - .storage
  - .software
  - .users
  - .bootloader
  - .services
  - .post_install
  - .reboot
{% endif %}
07070100000041000081A40000000000000000000000016130D1CF00000061000000000000000000000000000000000000003D00000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/post_install.slsinclude:
  - .chroot.post_install
  - ._firstboot
  - ._default_target
  - .storage.post_install
07070100000042000081A40000000000000000000000016130D1CF00000439000000000000000000000000000000000000003700000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/reboot.sls{% import 'macros.yml' as macros %}

{% set config = pillar['config'] %}
{% set reboot = config.get('reboot', True) %}

{% if reboot == 'kexec' %}
{{ macros.log('cmd', 'grub_command_line') }}
grub_command_line:
  cmd.run:
    - name: grep -m 1 -E '^[[:space:]]*linux(efi)?[[:space:]]+[^[:space:]]+vmlinuz.*$' /mnt/boot/grub2/grub.cfg | cut -d ' ' -f 2-3 > /tmp/command_line
    - create: /tmp/command_line

{{ macros.log('cmd', 'prepare_kexec') }}
prepare_kexec:
  cmd.run:
    - name: kexec -a -l /mnt/boot/vmlinuz --initrd=/mnt/boot/initrd --command-line="$(cat /tmp/command_line)"
    - onlyif: "[ -e /tmp/command_line ]"

{{ macros.log('cmd', 'execute_kexec') }}
execute_kexec:
  cmd.run:
    - name: systemctl kexec

{% elif reboot == 'halt' %}
{{ macros.log('module', 'halt') }}
halt:
  module.run:
    - system.halt:

{% elif reboot == 'shutdown' %}
{{ macros.log('module', 'shutdown') }}
shutdown:
  module.run:
    - system.shutdown:

{% elif reboot == 'yes' or reboot == True %}
{{ macros.log('module', 'reboot') }}
reboot:
  module.run:
    - system.reboot:
{% endif %}
07070100000043000041ED0000000000000000000000036130D1CF00000000000000000000000000000000000000000000003500000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/services07070100000044000081A40000000000000000000000016130D1CF000003C6000000000000000000000000000000000000003E00000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/services/init.sls{% import 'macros.yml' as macros %}

{% set services = pillar.get('services', {}) %}

include:
  - .network
{% if pillar.get('salt-minion') %}
  - .salt-minion
{% endif %}

{% for service in services.get('enabled', []) %}
# We execute the systemctl call inside the chroot, so we can guarantee
# that will work on containers
{{ macros.log('module', 'enable_service_' ~ service) }}
enable_service_{{ service }}:
  module.run:
    - chroot.call:
      - root: /mnt
      - function: service.enable
      - name: {{ service }}
    - unless: systemctl --root=/mnt --quiet is-enabled {{ service }} 2> /dev/null
{% endfor %}

{% for service in services.get('disabled', []) %}
{{ macros.log('module', 'disable_service_' ~ service) }}
disable_service_{{ service }}:
  module.run:
    - chroot.call:
      - root: /mnt
      - function: service.disable
      - name: {{ service }}
    - onlyif: systemctl --root=/mnt --quiet is-enabled {{ service }} 2> /dev/null
{% endfor %}
07070100000045000081A40000000000000000000000016130D1CF000007CB000000000000000000000000000000000000004100000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/services/network.sls{% import 'macros.yml' as macros %}

{% set networks = pillar.get('networks') %}

{% if networks %}
  {% for network in networks %}
create_ifcfg_{{ network.interface }}:
  file.append:
    - name: /mnt/etc/sysconfig/network/ifcfg-{{ network.interface }}
    - text: |
        NAME=''
        BOOTPROTO='dhcp'
        STARTMODE='auto'
        ZONE=''
  {% endfor %}
{% else %}
# This assume that the image used for deployment is under a
# predictable network interface name, like Tumbleweed. For SLE, boot
# the image with `net.ifnames=1`

  {% set interfaces = salt.network.interfaces() %}
  {% set interfaces_except_lo = interfaces | select('!=', 'lo') %}

  {% for interface in interfaces_except_lo %}
{{ macros.log('file', 'create_ifcfg_' ~ interface) }}
create_ifcfg_{{ interface }}:
  file.append:
    - name: /mnt/etc/sysconfig/network/ifcfg-{{ interface }}
    - text: |
        NAME=''
        BOOTPROTO='dhcp'
        STARTMODE='auto'
        ZONE=''
    - unless: "[ -e /mnt/usr/lib/udev/rules.d/75-persistent-net-generator.rules ]"

{{ macros.log('file', 'create_ifcfg_eth' ~ loop.index0) }}
create_ifcfg_eth{{ loop.index0 }}:
  file.append:
    - name: /mnt/etc/sysconfig/network/ifcfg-eth{{ loop.index0 }}
    - text: |
        NAME=''
        BOOTPROTO='dhcp'
        STARTMODE='auto'
        ZONE=''
    - onlyif: "[ -e /mnt/usr/lib/udev/rules.d/75-persistent-net-generator.rules ]"

{{ macros.log('cmd', 'write_net_rules_eth' ~ loop.index0) }}
write_net_rules_eth{{ loop.index0 }}:
  cmd.run:
    - name: /usr/lib/udev/write_net_rules
    - env:
        - INTERFACE: eth{{ loop.index0 }}
        - MATCHADDR: "{{ interfaces[interface].hwaddr }}"
    - root: /mnt
    - onlyif: "[ -e /mnt/usr/lib/udev/rules.d/75-persistent-net-generator.rules ]"
  {% endfor %}
{% endif %}

{{ macros.log('file', 'dhcp_hostname') }}
dhcp_hostname:
  file.append:
    - name: /mnt/etc/sysconfig/network/dhcp
    - text:
        - DHCLIENT_SET_HOSTNAME="yes"
        - WRITE_HOSTNAME_TO_HOSTS="no"
07070100000046000041ED0000000000000000000000026130D1CF00000000000000000000000000000000000000000000004100000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/services/salt-minion07070100000047000081A40000000000000000000000016130D1CF0000036A000000000000000000000000000000000000004A00000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/services/salt-minion/init.sls{% import 'macros.yml' as macros %}

{% set salt_minion = pillar['salt-minion'] %}

{% if salt_minion.get('config') %}
{{ macros.log('module', 'synchronize_salt-minion_etc') }}
synchronize_salt-minion_etc:
  module.run:
    - file.copy:
      - src: /etc/salt
      - dst: /mnt/etc/salt
      - recurse: yes
      - remove_existing: yes
    - unless: "[ -e /mnt/etc/salt/pki/minion/minion.pem ]"

{{ macros.log('module', 'synchronize_salt-minion_var') }}
synchronize_salt-minion_var:
  module.run:
    - file.copy:
      - src: /var/cache/salt
      - dst: /mnt/var/cache/salt
      - recurse: yes
      - remove_existing: yes
    - unless: "[ -e /mnt/var/cache/salt/minion/extmods ]"

{{ macros.log('file', 'clean_salt-minion_var') }}
clean_salt-minion_var:
  file.tidied:
    - name: /mnt/var/cache/salt/minion
    - matches:
      - ".*\\.pyc"
      - "\\d+"
{% endif %}
07070100000048000081A40000000000000000000000016130D1CF000001CF000000000000000000000000000000000000004E00000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/services/salt-minion/software.sls{% import 'macros.yml' as macros %}

{% set software = pillar['software'] %}
{% set software_config = software.get('config', {}) %}

{{ macros.log('pkg', 'install_salt-minion') }}
install_salt-minion:
  pkg.installed:
    - name: salt-minion
  {% if software_config.get('minimal') %}
    - no_recommends: yes
  {% endif %}
  {% if not software_config.get('verify') %}
    - skip_verify: yes
  {% endif %}
    - root: /mnt
    - require:
      - mount: mount_/mnt
07070100000049000081A40000000000000000000000016130D1CF00000085000000000000000000000000000000000000004200000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/services/software.sls{% if pillar.get('salt-minion') %}
include:
  {% if pillar.get('salt-minion') %}
  - .salt-minion.software
  {% endif %}
{% endif %}
0707010000004A000041ED0000000000000000000000026130D1CF00000000000000000000000000000000000000000000003500000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/software0707010000004B000081A40000000000000000000000016130D1CF000002A1000000000000000000000000000000000000003F00000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/software/image.sls{% import 'macros.yml' as macros %}

{% set filesystems = pillar['filesystems'] %}
{% set software = pillar['software'] %}

{% for device, info in filesystems.items() %}
  {% if info.get('mountpoint') == '/' %}
{{ macros.log('module', 'dump_image_into_' ~ device) }}
dump_image_into_{{ device }}:
  images.dumped:
    - name: {{ software.image.url }}
    - device: {{ device }}
    {% for checksum_type in ('md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512') %}
      {% if checksum_type in software.image %}
    - checksum_type: {{ checksum_type }}
    - checksum: {{ software.image[checksum_type] or '' }}
      {% endif %}
    {% endfor %}
  {% endif %}
{% endfor %}
0707010000004C000081A40000000000000000000000016130D1CF000001C4000000000000000000000000000000000000003E00000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/software/init.sls{% set software = pillar['software'] %}

include:
{# TODO: Remove the double check (SumaForm bug) #}
{% if software.get('image', {}).get('url') %}
  - .image
  - ..storage.fstab
  - ..storage.mount
{% endif %}
  - .repository
  - .software
{% if pillar.get('suseconnect', {}).get('config', {}).get('regcode') %}
  - .suseconnect
{% endif %}
  - ..storage.software
  - ..bootloader.software
  - ..services.software
  - ..chroot.software
  - .recreatedb
0707010000004D000081A40000000000000000000000016130D1CF000001F8000000000000000000000000000000000000004400000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/software/recreatedb.sls{% import 'macros.yml' as macros %}

{{ macros.log('cmd', 'rpm_exportdb') }}
rpm_exportdb:
  cmd.run:
    - name: rpmdb --root /mnt --exportdb > /mnt/tmp/exportdb
    - creates: /mnt/tmp/exportdb

{{ macros.log('file', 'clean_usr_lib_sysimage_rpm') }}
clean_usr_lib_sysimage_rpm:
  file.absent:
    - name: /mnt/usr/lib/sysimage/rpm

{{ macros.log('cmd', 'rpm_importdb') }}
rpm_importdb:
  cmd.run:
    - name: rpmdb --importdb < /tmp/exportdb
    - root: /mnt
    - onchanges:
      - cmd: rpm_exportdb
0707010000004E000081A40000000000000000000000016130D1CF0000089C000000000000000000000000000000000000004400000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/software/repository.sls{% import 'macros.yml' as macros %}

{% set software = pillar['software'] %}
{% set software_config = software.get('config', {}) %}

{% if software_config.get('transfer') %}
{{ macros.log('module', 'transfer_repositories') }}
migrate_repositories:
  pkgrepo.migrated:
    - name: /mnt
    - keys: yes

  {% for cert_dir in ['/usr/share/pki/trust/anchors', '/usr/share/pki/trust/blacklist',
                      '/etc/pki/trust/anchors', '/etc/pki/trust/blacklist'] %}
migrate_{{ cert_dir }}:
  module.run:
    - file.copy:
      - src: {{ cert_dir }}
      - dst: /mnt{{ cert_dir }}
      - recurse: yes
      - remove_existing: yes
    - unless: "[ -e /mnt{{ cert_dir }} ]"
  {% endfor %}
{% endif %}

# TODO: boo#1178910 - This zypper bug creates /var/lib/rpm and
# /usr/lib/sysimage/rpm independently, and not linked together
{{ macros.log('file', 'create_usr_lib_sysimage_rpm') }}
create_usr_lib_sysimage_rpm:
  file.directory:
    - name: /mnt/usr/lib/sysimage/rpm
    - makedirs: yes

{{ macros.log('file', 'symlink_var_lib_rpm') }}
symlink_var_lib_rpm:
  file.symlink:
    - name: /mnt/var/lib/rpm
    - target: ../../usr/lib/sysimage/rpm
    - makedirs: yes

{% for alias, repository in software.get('repositories', {}).items() %}
  {% if repository is mapping %}
    {% set url = repository['url'] %}
  {% else %}
    {% set url = repository %}
    {% set repository = {} %}
  {% endif %}
{{ macros.log('pkgrepo', 'add_repository_' ~ alias) }}
add_repository_{{ alias }}:
  pkgrepo.managed:
    - baseurl: {{ url }}
    - name: {{ alias }}
  {% if repository.get('name') %}
    - humanname: {{ repository.name }}
  {% endif %}
    - enabled: {{ repository.get('enabled', software_config.get('enabled', 'yes')) }}
    - refresh: {{ repository.get('refresh', software_config.get('refresh', 'yes')) }}
    - priority: {{ repository.get('priority', 0) }}
    - gpgcheck: {{ repository.get('gpgcheck', software_config.get('gpgcheck', 'yes')) }}
    - gpgautoimport: {{ repository.get('gpgautoimport', software_config.get('gpgautoimport', 'yes')) }}
    - cache: {{ repository.get('cache', software_config.get('cache', 'no')) }}
    - root: /mnt
    - require:
      - mount: mount_/mnt
{% endfor %}
0707010000004F000081A40000000000000000000000016130D1CF00000441000000000000000000000000000000000000004200000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/software/software.sls{% import 'macros.yml' as macros %}

{% set software = pillar['software'] %}
{% set software_config = software.get('config', {}) %}

{% if software_config.get('minimal') %}
{{ macros.log('file', 'config_zypp_minimal_host') }}
config_zypp_minimal_host:
  file.append:
    - name: /etc/zypp/zypp.conf
    - text:
        - solver.onlyRequires = true
        - rpm.install.excludedocs = yes
        - multiversion =
{% endif %}

{% if software.get('packages') %}
{{ macros.log('pkg', 'install_packages') }}
install_packages:
  pkg.installed:
    - pkgs: {{ software.packages }}
  {% if software_config.get('minimal') %}
    - no_recommends: yes
  {% endif %}
  {% if not software_config.get('verify') %}
    - skip_verify: yes
  {% endif %}
    - includes: [product, pattern]
    - root: /mnt
{% endif %}

{% if software_config.get('minimal') %}
{{ macros.log('file', 'config_zypp_minimal') }}
config_zypp_minimal:
  file.append:
    - name: /mnt/etc/zypp/zypp.conf
    - text:
        - solver.onlyRequires = true
        - rpm.install.excludedocs = yes
        - multiversion =
{% endif %}
07070100000050000081A40000000000000000000000016130D1CF0000070A000000000000000000000000000000000000004500000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/software/suseconnect.sls{% import 'macros.yml' as macros %}

{% set suseconnect = pillar['suseconnect'] %}
{% set suseconnect_config = suseconnect['config'] %}

{{ macros.log('suseconnect', 'register_product') }}
register_product:
  suseconnect.registered:
    - regcode: {{ suseconnect_config['regcode'] }}
{% if suseconnect_config.get('email') %}
    - email: {{ suseconnect_config['email'] }}
{% endif %}
{% if suseconnect_config.get('url') %}
    - url: {{ suseconnect_config['url'] }}
{% endif %}
    - root: /mnt
    - require:
      - mount: mount_/mnt

{% for product in suseconnect.get('products', []) %}
  {% set regcode = suseconnect_config['regcode'] %}
  {% if product is mapping %}
    {% set regcode = product.get('regcode', regcode) %}
    {% set product = product['name'] %}
  {% endif %}
  {% if 'version' in suseconnect_config and 'arch' in suseconnect_config %}
    {% if suseconnect_config['version'] not in product %}
      {% set product = '%s/%s/%s'|format(product, suseconnect_config['version'], suseconnect_config['arch']) %}
    {% endif %}
  {% endif %}
{{ macros.log('suseconnect', 'register_' ~ product) }}
register_{{ product }}:
  suseconnect.registered:
    - regcode: {{ regcode }}
    - product: {{ product }}
{% if suseconnect_config.get('email') %}
    - email: {{ suseconnect_config['email'] }}
{% endif %}
{% if suseconnect_config.get('url') %}
    - url: {{ suseconnect_config['url'] }}
{% endif %}
    - root: /mnt
    - require:
      - mount: mount_/mnt
{% endfor %}

{% if suseconnect.get('packages') %}
{{ macros.log('pkg', 'install_packages_product') }}
install_packages_product:
  pkg.installed:
    - pkgs: {{ suseconnect.packages }}
    - no_recommends: yes
    - includes: [product, pattern]
    - root: /mnt
    - require:
        - suseconnect: register_product
{% endif %}
07070100000051000041ED0000000000000000000000076130D1CF00000000000000000000000000000000000000000000003400000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/storage07070100000052000081A40000000000000000000000016130D1CF0000045F000000000000000000000000000000000000004300000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/storage/_partition.sls{% import 'macros.yml' as macros %}

{% set partitions = salt.partmod.prepare_partition_data(pillar['partitions']) %}
{% set is_uefi = grains['efi'] %}

{% for device, device_info in partitions.items() if filter(device) %}
{{ macros.log('partitioned', 'create_disk_label_' ~ device) }}
create_disk_label_{{ device }}:
  partitioned.labeled:
    - name: {{ device }}
    - label: {{ device_info.label }}

  {% if device_info.pmbr_boot %}
{{ macros.log('partitioned', 'set_pmbr_boot_' ~ device) }}
set_pmbr_boot_{{ device }}:
  partitioned.disk_set:
    - name: {{ device }}
    - flag: pmbr_boot
    - enabled: yes
  {% endif %}

  {% for partition in device_info.get('partitions', []) %}
{{ macros.log('partitioned', 'create_partition_' ~ partition.part_id) }}
create_partition_{{ partition.part_id }}:
  partitioned.mkparted:
    - name: {{ partition.part_id }}
    - part_type: {{ partition.part_type }}
    - fs_type: {{ partition.fs_type }}
    - start: {{ partition.start }}
    - end: {{ partition.end }}
    {% if partition.flags %}
    - flags: {{ partition.flags }}
    {% endif %}
  {% endfor %}
{% endfor %}
07070100000053000041ED0000000000000000000000026130D1CF00000000000000000000000000000000000000000000003A00000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/storage/btrfs07070100000054000081A40000000000000000000000016130D1CF000005ED000000000000000000000000000000000000004400000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/storage/btrfs/fstab.sls{% import 'macros.yml' as macros %}

{% set filesystems = pillar['filesystems'] %}

{% for device, info in filesystems.items() %}
  {% if info.get('mountpoint') == '/' %}
{{ macros.log('mount', 'mount_btrfs_fstab') }}
mount_btrfs_fstab:
  mount.mounted:
    - name: /mnt
    - device: {{ device }}
    - fstype: {{ info.filesystem }}
    - persist: no
  {% endif %}
{% endfor %}

{% for device, info in filesystems.items() %}
  {% if info.filesystem == 'btrfs' and info.get('subvolumes') %}
    {% set prefix = info.subvolumes.get('prefix', '') %}
    {% for subvol in info.subvolumes.subvolume %}
      {% set fs_file = '/'|path_join(subvol.path) %}
      {% set fs_mntops = 'subvol=%s'|format('/'|path_join(prefix, subvol.path)) %}
      {% if not subvol.get('copy_on_write', True) %}
        {# TODO(aplanas) nodatacow seems optional if chattr was used #}
        {% set fs_mntops = fs_mntops ~ ',nodatacow' %}
      {% endif %}
{{ macros.log('mount', 'add_fstab' ~ '_' ~ fs_file) }}
add_fstab_{{ fs_file }}:
  mount.fstab_present:
    - name: {{ device }}
    - fs_file: {{ fs_file }}
    - fs_vfstype: {{ info.filesystem }}
    - fs_mntops: {{ fs_mntops }}
    - fs_freq: 0
    - fs_passno: 0
    - mount_by: uuid
    - mount: no
    - not_change: yes
    - config: /mnt/etc/fstab
    - require:
      - mount: mount_btrfs_fstab
    {% endfor %}
  {% endif %}
{% endfor %}

{{ macros.log('mount', 'umount_btrfs_fstab') }}
umount_btrfs_fstab:
  mount.unmounted:
    - name: /mnt
    - requires: mount_btrfs_fstab
07070100000055000081A40000000000000000000000016130D1CF0000034F000000000000000000000000000000000000004400000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/storage/btrfs/mount.sls{% import 'macros.yml' as macros %}

{% set filesystems = pillar['filesystems'] %}

{% for device, info in filesystems.items() %}
  {% if info.filesystem == 'btrfs' and info.get('subvolumes') %}
    {% set prefix = info.subvolumes.get('prefix', '') %}
    {% for subvol in info.subvolumes.subvolume %}
      {% set fs_file = '/mnt'|path_join(subvol.path) %}
      {% set fs_mntops = 'subvol=%s'|format('/'|path_join(prefix, subvol.path)) %}
      {% if not subvol.get('copy_on_write', True) %}
        {% set fs_mntops = fs_mntops ~ ',nodatacow' %}
      {% endif %}
{{ macros.log('mount', 'mount_' ~ fs_file) }}
mount_{{ fs_file }}:
  mount.mounted:
    - name: {{ fs_file }}
    - device: {{ device }}
    - fstype: {{ info.filesystem }}
    - mkmnt: yes
    - opts: {{ fs_mntops }}
    - persist: no
    {% endfor %}
  {% endif %}
{% endfor %}
07070100000056000081A40000000000000000000000016130D1CF000001C4000000000000000000000000000000000000004B00000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/storage/btrfs/post_install.sls{% import 'macros.yml' as macros %}

{% set filesystems = pillar['filesystems'] %}

{% for device, info in filesystems.items() %}
  {% if info.filesystem == 'btrfs' and 'ro' in info.get('options', []) %}
{{ macros.log('btrfs', 'set_property_ro_' ~ info.mountpoint) }}
set_property_ro_{{ info.mountpoint }}:
  btrfs.properties:
    - name: {{ info.mountpoint }}
    - device: {{ device }}
    - use_default: yes
    - ro: yes
  {% endif %}
{% endfor %}
07070100000057000081A40000000000000000000000016130D1CF0000042B000000000000000000000000000000000000004800000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/storage/btrfs/subvolume.sls{% import 'macros.yml' as macros %}

{% set filesystems = pillar['filesystems'] %}

{% for device, info in filesystems.items() %}
  {% if info.filesystem == 'btrfs' and info.get('subvolumes') %}
    {# TODO(aplanas) is prefix optional? #}
    {% set prefix = info.subvolumes.get('prefix', '') %}
    {% if prefix %}
{{ macros.log('btrfs', 'subvol_create_' ~ device ~ '_prefix') }}
subvol_create_{{ device }}_prefix:
  btrfs.subvolume_created:
    - name: '{{ prefix }}'
    - device: {{ device }}
    - set_default: yes
    - force_set_default: no
    {% endif %}

    {% for subvol in info.subvolumes.subvolume %}
      {% if prefix %}
        {% set path = prefix|path_join(subvol.path) %}
      {% else %}
        {% set path = subvol.path %}
      {% endif %}
{{ macros.log('btrfs', 'subvol_create_' ~ device ~ '_' ~ subvol.path) }}
subvol_create_{{ device }}_{{ subvol.path }}:
  btrfs.subvolume_created:
    - name: '{{ path }}'
    - device: {{ device }}
    - copy_on_write: {{ subvol.get('copy_on_write', True) }}
    {% endfor %}
  {% endif %}
{% endfor %}
07070100000058000081A40000000000000000000000016130D1CF00000228000000000000000000000000000000000000004500000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/storage/btrfs/umount.sls{% import 'macros.yml' as macros %}

{% set filesystems = pillar['filesystems'] %}

{% for device, info in filesystems.items() %}
  {% if info.filesystem == 'btrfs' and info.get('subvolumes') %}
    {% set prefix = info.subvolumes.get('prefix', '') %}
    {% for subvol in info.subvolumes.subvolume %}
      {% set fs_file = '/mnt'|path_join(subvol.path) %}
{{ macros.log('mount', 'umount_' ~ fs_file) }}
umount_{{ fs_file }}:
  mount.unmounted:
    - name: {{ fs_file }}
    - requires: mount_{{ fs_file }}
    {% endfor %}
  {% endif %}
{% endfor %}
07070100000059000081A40000000000000000000000016130D1CF000002FE000000000000000000000000000000000000004500000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/storage/create_fstab.sls{% import 'macros.yml' as macros %}

{% set filesystems = pillar['filesystems'] %}

{% for device, info in filesystems.items() %}
  {% if info.get('mountpoint') == '/' %}
{{ macros.log('mount', 'mount_create_fstab') }}
mount_create_fstab:
  mount.mounted:
    - name: /mnt
    - device: {{ device }}
    - fstype: {{ info.filesystem }}
    - persist: no

{{ macros.log('file', 'create_fstab') }}
create_fstab:
  file.managed:
    - name: /mnt/etc/fstab
    - user: root
    - group: root
    - mode: 644
    - makedirs: yes
    - dir_mode: 755
    - replace: no
    - requires: mount_create_fstab

{{ macros.log('mount', 'umount_create_fstab') }}
umount_create_fstab:
  mount.unmounted:
    - name: /mnt
    - requires: mount_create_fstab
  {% endif %}
{% endfor %}
0707010000005A000041ED0000000000000000000000026130D1CF00000000000000000000000000000000000000000000003B00000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/storage/device0707010000005B000081A40000000000000000000000016130D1CF0000047F000000000000000000000000000000000000004500000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/storage/device/fstab.sls{% import 'macros.yml' as macros %}

{% set filesystems = pillar['filesystems'] %}

{% for device, info in filesystems.items() %}
  {% if info.get('mountpoint') == '/' %}
{{ macros.log('mount', 'mount_device_fstab') }}
mount_device_fstab:
  mount.mounted:
    - name: /mnt
    - device: {{ device }}
    - fstype: {{ info.filesystem }}
    - persist: no
  {% endif %}
{% endfor %}

{% for device, info in filesystems.items() %}
  {% set fs_file = 'swap' if info.filesystem == 'swap' else info.mountpoint %}
{{ macros.log('mount', 'add_fstab_' ~ fs_file) }}
add_fstab_{{ fs_file }}:
  mount.fstab_present:
    - name: {{ device }}
    - fs_file: {{ fs_file }}
    - fs_vfstype: {{ info.filesystem }}
    - fs_mntops: {{ ','.join(info.get('options', ['defaults'])) }}
    - fs_freq: 0
    - fs_passno: 0
  {% if not salt.filters.is_lvm(device) %}
    - mount_by: uuid
  {% endif %}
    - mount: no
    - not_change: yes
    - config: /mnt/etc/fstab
    - require:
      - mount: mount_device_fstab
{% endfor %}

{{ macros.log('mount', 'umount_device_fstab') }}
umount_device_fstab:
  mount.unmounted:
    - name: /mnt
    - requires: mount_device_fstab
0707010000005C000081A40000000000000000000000016130D1CF0000033A000000000000000000000000000000000000004500000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/storage/device/mount.sls{% import 'macros.yml' as macros %}

{% set filesystems = pillar['filesystems'] %}

{% for device, info in filesystems.items() %}
  {% if info.get('mountpoint') == '/' %}
{{ macros.log('mount', 'mount_/mnt') }}
mount_/mnt:
  mount.mounted:
    - name: /mnt
    - device: {{ device }}
    - fstype: {{ info.filesystem }}
    - persist: no
  {% endif %}
{% endfor %}

{% for device, info in filesystems.items() %}
  {% if info.get('mountpoint') and info.mountpoint != '/' %}
    {% set fs_file = '/mnt'|path_join(info.mountpoint[1:] if info.mountpoint.startswith('/') else info.mountpoint) %}
{{ macros.log('mount', 'mount_' ~ fs_file) }}
mount_{{ fs_file }}:
  mount.mounted:
    - name: {{ fs_file }}
    - device: {{ device }}
    - fstype: {{ info.filesystem }}
    - mkmnt: yes
    - persist: no
  {% endif %}
{% endfor %}
0707010000005D000081A40000000000000000000000016130D1CF000002CE000000000000000000000000000000000000004600000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/storage/device/umount.sls{% import 'macros.yml' as macros %}

{% set filesystems = pillar['filesystems'] %}

{% for device, info in filesystems.items() %}
  {% if info.get('mountpoint') and info.mountpoint != '/' %}
    {% set fs_file = '/mnt'|path_join(info.mountpoint[1:] if info.mountpoint.startswith('/') else info.mountpoint) %}
{{ macros.log('mount', 'umount_' ~ fs_file) }}
umount_{{ fs_file }}:
  mount.unmounted:
    - name: {{ fs_file }}
    - requires: mount_{{ fs_file }}
  {% endif %}
{% endfor %}

{% for device, info in filesystems.items() %}
  {% if info.get('mountpoint') == '/' %}
{{ macros.log('mount', 'umount_/mnt') }}
umount_/mnt:
  mount.unmounted:
    - name: /mnt
    - requires: mount_/mnt
  {% endif %}
{% endfor %}
0707010000005E000081A40000000000000000000000016130D1CF000001A5000000000000000000000000000000000000003F00000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/storage/format.sls{% import 'macros.yml' as macros %}

{% set filesystems = pillar['filesystems'] %}

{% for device, info in filesystems.items() %}
{{ macros.log('formatted', 'mkfs_partition_' ~ device) }}
mkfs_partition_{{ device }}:
  formatted.formatted:
    - name: {{ device }}
    - fs_type: {{ info.filesystem }}
  {% if info.filesystem in ('fat', 'vfat') and info.get('fat') %}
    - fat: {{ info.fat }}
  {% endif %}
{% endfor %}
0707010000005F000081A40000000000000000000000016130D1CF000000A1000000000000000000000000000000000000003E00000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/storage/fstab.sls{% set config = pillar['config'] %}

include:
  - .create_fstab
  - .device.fstab
  - .btrfs.fstab
{% if config.get('snapper') %}
  - .snapper.fstab
{% endif %}
07070100000060000081A40000000000000000000000016130D1CF000000FA000000000000000000000000000000000000003D00000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/storage/init.sls{% set software = pillar['software'] %}

include:
  - .partition
  - .raid
  - .volumes
  - .format
  - .subvolumes
{# TODO: Remove the double check (SumaForm bug) #}
{% if not software.get('image', {}).get('url') %}
  - .fstab
  - .mount
{% endif %}07070100000061000041ED0000000000000000000000026130D1CF00000000000000000000000000000000000000000000003800000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/storage/lvm07070100000062000081A40000000000000000000000016130D1CF000001BA000000000000000000000000000000000000004500000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/storage/lvm/software.sls{% import 'macros.yml' as macros %}

{% set software = pillar['software'] %}
{% set software_config = software.get('config', {}) %}

{{ macros.log('pkg', 'install_lvm2') }}
install_lvm2:
  pkg.installed:
    - name: lvm2
  {% if software_config.get('minimal') %}
    - no_recommends: yes
  {% endif %}
  {% if not software_config.get('verify') %}
    - skip_verify: yes
  {% endif %}
    - root: /mnt
    - require:
      - mount: mount_/mnt
07070100000063000081A40000000000000000000000016130D1CF0000055D000000000000000000000000000000000000004300000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/storage/lvm/volume.sls{% import 'macros.yml' as macros %}

{% set lvm = pillar.get('lvm', {}) %}

{% for group, group_info in lvm.items() %}
  {% set devices = [] %}
  {% for device in group_info['devices'] %}
    {% set info = {} %}
    {# We can store the device information inside a dict #}
    {% if device is mapping %}
      {% set info = device %}
      {% set device = device['name'] %}
    {% endif %}
    {% do devices.append(device) %}
{{ macros.log('lvm', 'create_physical_volume_' ~ device) }}
create_physical_volume_{{ device }}:
  lvm.pv_present:
    - name: {{ device }}
    {% for key, value in info.items() if key != 'name' %}
    - {{ key }}: {{ value }}
    {% endfor %}
  {% endfor %}

{{ macros.log('lvm', 'create_virtual_group_' ~ group) }}
create_virtual_group_{{ group }}:
  lvm.vg_present:
    - name: {{ group }}
    - devices: [{{ ', '.join(devices) }}]
    {% for key, value in group_info.items() if key not in ('devices', 'volumes') %}
    - {{ key }}: {{ value }}
    {% endfor %}

  {% for volume in group_info['volumes'] %}
{{ macros.log('lvm', 'create_logical_volume_' ~ volume['name']) }}
create_logical_volume_{{ volume['name'] }}:
  lvm.lv_present:
    - name: {{ volume['name'] }}
    - vgname: {{ group }}
    {% for key, value in volume.items() if key not in ('name', 'vgname') %}
    - {{ key }}: {{ value }}
    {% endfor %}
  {% endfor %}
{% endfor %}
07070100000064000081A40000000000000000000000016130D1CF000000A1000000000000000000000000000000000000003E00000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/storage/mount.sls{% set config = pillar['config'] %}

include:
  - .device.mount
  - .btrfs.mount
{% if config.get('snapper') %}
  - .snapper.mount
{% endif %}
  - ..chroot.mount07070100000065000081A40000000000000000000000016130D1CF0000004B000000000000000000000000000000000000004200000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/storage/partition.sls{% set filter=salt.filters.is_not_raid %}
{% include './_partition.sls' %}
07070100000066000081A40000000000000000000000016130D1CF000000CB000000000000000000000000000000000000004500000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/storage/post_install.sls{% set config = pillar['config'] %}

include:
{% if config.get('snapper') %}
  - .snapper.post_install
{% endif %}
  - .btrfs.post_install
{% if not config.get('reboot', True) %}
  - .umount
{% endif %}
07070100000067000041ED0000000000000000000000026130D1CF00000000000000000000000000000000000000000000003900000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/storage/raid07070100000068000081A40000000000000000000000016130D1CF00000023000000000000000000000000000000000000004200000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/storage/raid/init.slsinclude:
  - .mdadm
  - .partition
07070100000069000081A40000000000000000000000016130D1CF000001AD000000000000000000000000000000000000004300000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/storage/raid/mdadm.sls{% import 'macros.yml' as macros %}

{% set raid = pillar.get('raid', {}) %}

{% for device, info in raid.items() %}
{{ macros.log('raid', 'create_raid_' ~ device) }}
create_raid_{{ device }}:
  raid.present:
    - name: {{ device }}
    - level: {{ info.level }}
    - devices: {{ info.devices }}
  {% for key, value in info.items() if key not in ('level', 'devices') %}
    - {{ key }}: {{ value }}
  {% endfor %}
{% endfor %}
0707010000006A000081A40000000000000000000000016130D1CF00000048000000000000000000000000000000000000004700000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/storage/raid/partition.sls{% set filter=salt.filters.is_raid %}
{% include '../_partition.sls' %}
0707010000006B000081A40000000000000000000000016130D1CF000001D2000000000000000000000000000000000000004600000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/storage/raid/software.sls{% import 'macros.yml' as macros %}

{% set software = pillar['software'] %}
{% set software_config = software.get('config', {}) %}

{{ macros.log('pkg', 'install_raid') }}
install_raid:
  pkg.installed:
    - pkgs:
      - mdadm
      - dmraid
  {% if software_config.get('minimal') %}
    - no_recommends: yes
  {% endif %}
  {% if not software_config.get('verify') %}
    - skip_verify: yes
  {% endif %}
    - root: /mnt
    - require:
      - mount: mount_/mnt
0707010000006C000041ED0000000000000000000000026130D1CF00000000000000000000000000000000000000000000003C00000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/storage/snapper0707010000006D000081A40000000000000000000000016130D1CF00000526000000000000000000000000000000000000004600000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/storage/snapper/fstab.sls{% import 'macros.yml' as macros %}

{% set filesystems = pillar['filesystems'] %}

{% for device, info in filesystems.items() %}
  {% if info.get('mountpoint') == '/' %}
{{ macros.log('mount', 'mount_snapper_fstab') }}
mount_snapper_fstab:
  mount.mounted:
    - name: /mnt
    - device: {{ device }}
    - fstype: {{ info.filesystem }}
    - persist: no
  {% endif %}
{% endfor %}

{% for device, info in filesystems.items() %}
  {% if info.filesystem == 'btrfs' and info.mountpoint == '/' %}
    {% set prefix = info.subvolumes.get('prefix', '') %}
    {% set fs_file = '/'|path_join('.snapshots') %}
    {% set fs_mntops = 'subvol=%s'|format('/'|path_join(prefix, '.snapshots')) %}
{{ macros.log('mount', 'add_fstab_' ~ fs_file) }}
add_fstab_{{ fs_file }}:
  mount.fstab_present:
    - name: {{ device }}
    - fs_file: {{ fs_file }}
    - fs_vfstype: {{ info.filesystem }}
    - fs_mntops: {{ fs_mntops }}
    - fs_freq: 0
    - fs_passno: 0
    {% if not salt.filters.is_lvm(device) %}
    - mount_by: uuid
    {% endif %}
    - mount: no
    - not_change: yes
    - config: /mnt/etc/fstab
    - require:
      - mount: mount_snapper_fstab
  {% endif %}
{% endfor %}

{{ macros.log('mount', 'umount_snapper_fstab') }}
umount_snapper_fstab:
  mount.unmounted:
    - name: /mnt
    - requires: mount_snapper_fstab
0707010000006E000081A40000000000000000000000016130D1CF000000CC000000000000000000000000000000000000004F00000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/storage/snapper/grub2_mkconfig.sls{% import 'macros.yml' as macros %}

{{ macros.log('file', 'config_snapper_grub2') }}
config_snapper_grub2:
  file.append:
    - name: /mnt/etc/default/grub
    - text: SUSE_BTRFS_SNAPSHOT_BOOTING="true"
0707010000006F000081A40000000000000000000000016130D1CF0000028B000000000000000000000000000000000000004600000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/storage/snapper/mount.sls{% import 'macros.yml' as macros %}

{% set filesystems = pillar['filesystems'] %}

{% for device, info in filesystems.items() %}
  {% if info.filesystem == 'btrfs' and info.mountpoint == '/' %}
    {% set prefix = info.subvolumes.get('prefix', '') %}
    {% set fs_mntops = 'subvol=%s'|format('/'|path_join(prefix, '.snapshots')) %}
    {% set fs_file = '/mnt'|path_join('.snapshots') %}
{{ macros.log('mount', 'mount_' ~ fs_file) }}
mount_{{ fs_file }}:
  mount.mounted:
    - name: {{ fs_file }}
    - device: {{ device }}
    - fstype: {{ info.filesystem }}
    - mkmnt: no
    - opts: {{ fs_mntops }}
    - persist: no
  {% endif %}
{% endfor %}
07070100000070000081A40000000000000000000000016130D1CF00000270000000000000000000000000000000000000004D00000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/storage/snapper/post_install.sls{% import 'macros.yml' as macros %}

{% set filesystems = pillar['filesystems'] %}

{% for device, info in filesystems.items() %}
  {% if info.filesystem == 'btrfs' and info.mountpoint == '/' %}
{{ macros.log('snapper_install', 'snapper_step_four_' ~ device) }}
snapper_step_four_{{ device }}:
  snapper_install.step_four:
    - root: /mnt

{{ macros.log('snapper_install', 'snapper_step_five_' ~ device) }}
snapper_step_five_{{ device }}:
  snapper_install.step_five:
    - root: /mnt
    - snapshot_type: single
    - description: 'after installation'
    - important: yes
    - cleanup: number
  {% endif %}
{% endfor %}
07070100000071000081A40000000000000000000000016130D1CF00000217000000000000000000000000000000000000004900000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/storage/snapper/software.sls{% import 'macros.yml' as macros %}

{% set software = pillar['software'] %}
{% set software_config = software.get('config', {}) %}

{{ macros.log('pkg', 'install_snapper') }}
install_snapper:
  pkg.installed:
    - pkgs:
      - snapper
      - grub2-snapper-plugin
      - snapper-zypp-plugin
      - btrfsprogs
  {% if software_config.get('minimal') %}
    - no_recommends: yes
  {% endif %}
  {% if not software_config.get('verify') %}
    - skip_verify: yes
  {% endif %}
    - root: /mnt
    - require:
      - mount: mount_/mnt
07070100000072000081A40000000000000000000000016130D1CF0000026E000000000000000000000000000000000000004A00000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/storage/snapper/subvolume.sls{% import 'macros.yml' as macros %}

{% set filesystems = pillar['filesystems'] %}

{% for device, info in filesystems.items() %}
  {% if info.filesystem == 'btrfs' and info.mountpoint == '/' %}
{{ macros.log('snapper_install', 'snapper_step_one_' ~ device) }}
snapper_step_one_{{ device }}:
  snapper_install.step_one:
    - device: {{ device }}
    - description: 'first root filesystem'

{{ macros.log('snapper_install', 'snapper_step_two_' ~ device) }}
snapper_step_two_{{ device }}:
  snapper_install.step_two:
    - device: {{ device }}
    - prefix: "{{ info.subvolumes.get('prefix') }}"
  {% endif %}
{% endfor %}
07070100000073000081A40000000000000000000000016130D1CF0000011D000000000000000000000000000000000000004700000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/storage/snapper/umount.sls{% import 'macros.yml' as macros %}

{% set filesystems = pillar['filesystems'] %}

{% set fs_file = '/mnt'|path_join('.snapshots') %}
{{ macros.log('mount', 'umount_' ~ fs_file) }}
umount_{{ fs_file }}:
  mount.unmounted:
    - name: {{ fs_file }}
    - requires: mount_{{ fs_file }}
07070100000074000081A40000000000000000000000016130D1CF00000145000000000000000000000000000000000000004100000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/storage/software.sls{% set config = pillar['config'] %}

{% if pillar.get('raid') or pillar.get('lvm') or config.get('snapper') %}
include:
  {% if pillar.get('raid') %}
  - .raid.software
  {% endif %}
  {% if pillar.get('lvm') %}
  - .lvm.software
  {% endif %}
  {% if config.get('snapper') %}
  - .snapper.software
  {% endif %}
{% endif %}
07070100000075000081A40000000000000000000000016130D1CF00000085000000000000000000000000000000000000004300000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/storage/subvolumes.sls{% set config = pillar['config'] %}

include:
  - .btrfs.subvolume
{% if config.get('snapper') %}
  - .snapper.subvolume
{% endif %}
07070100000076000081A40000000000000000000000016130D1CF000000A6000000000000000000000000000000000000003F00000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/storage/umount.sls{% set config = pillar['config'] %}

include:
  - ..chroot.umount
{% if config.get('snapper') %}
  - .snapper.umount
{% endif %}
  - .btrfs.umount
  - .device.umount
07070100000077000081A40000000000000000000000016130D1CF00000019000000000000000000000000000000000000004000000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/storage/volumes.slsinclude:
  - .lvm.volume
07070100000078000081A40000000000000000000000016130D1CF00000121000000000000000000000000000000000000003D00000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/storage/wipe.sls{% import 'macros.yml' as macros %}

{% set partitions = salt.partmod.prepare_partition_data(pillar['partitions']) %}

{% for device in partitions %}
{{ macros.log('module', 'wipe_' ~ device) }}
wipe_{{ device }}:
  module.run:
    - devices.wipe:
      - device: {{ device }}
{% endfor %}07070100000079000081A40000000000000000000000016130D1CF0000056E000000000000000000000000000000000000003600000000yomi-0.0.1+git.1630589391.4557cfd/salt/yomi/users.sls{% import 'macros.yml' as macros %}

{% set users = pillar['users'] %}

{% for user in users %}
{{ macros.log('module', 'create_user_' ~ user.username) }}
create_user_{{ user.username }}:
  module.run:
    - user.add:
      - name: {{ user.username }}
      - createhome: yes
      - root: /mnt
    - unless: grep -q '{{ user.username }}' /mnt/etc/shadow

  {% if user.get('password') %}
{{ macros.log('module', 'set_password_user_' ~ user.username) }}
# We should use here the root parameter, but we move to chroot.call
# because bsc#1167909
set_password_user_{{ user.username }}:
  module.run:
    - chroot.call:
      - root: /mnt
      - function: shadow.set_password
      - name: {{ user.username }}
      - password: "'{{ user.password }}'"
      - use_usermod: yes
    - unless: grep -q '{{ user.username }}:{{ user.password }}' /mnt/etc/shadow
  {% endif %}

  {% for certificate in user.get('certificates', []) %}
{{ macros.log('module', 'add_certificate_user_' ~ user.username ~ '_' ~ loop.index) }}
add_certificate_user_{{ user.username }}_{{ loop.index }}:
  module.run:
    - chroot.call:
      - root: /mnt
      - function: ssh.set_auth_key
      - user: {{ user.username }}
      - key: "'{{ certificate }}'"
    - unless: grep -q '{{ certificate }}' /mnt/{{ 'home/' if user.username != 'root' else '' }}{{ user.username }}/.ssh/authorized_keys
  {% endfor %}
{% endfor %}
0707010000007A000041ED0000000000000000000000036130D1CF00000000000000000000000000000000000000000000002800000000yomi-0.0.1+git.1630589391.4557cfd/tests0707010000007B000081A40000000000000000000000016130D1CF00000000000000000000000000000000000000000000003400000000yomi-0.0.1+git.1630589391.4557cfd/tests/__init__.py0707010000007C000041ED0000000000000000000000026130D1CF00000000000000000000000000000000000000000000003100000000yomi-0.0.1+git.1630589391.4557cfd/tests/fixtures0707010000007D000081A40000000000000000000000016130D1CF00000905000000000000000000000000000000000000004000000000yomi-0.0.1+git.1630589391.4557cfd/tests/fixtures/ay_complex.xml<?xml version="1.0"?>
<!DOCTYPE profile>
<profile xmlns="http://www.suse.com/1.0/yast2ns" xmlns:config="http://www.suse.com/1.0/configns">
  <partitioning config:type="list">
    <drive>
      <device>/dev/sda</device>
      <initialize config:type="boolean">true</initialize>
      <partitions config:type="list">
	<partition>
	  <create config:type="boolean" >false</create>
	  <crypt_fs config:type="boolean">false</crypt_fs>
	  <mount>/</mount>
	  <fstopt>
	    ro,noatime,user,data=ordered,acl,user_xattr
	  </fstopt>
	  <label>mydata</label>
	  <uuid>UUID</uuid>
	  <size>10G</size>
	  <filesystem config:type="symbol">btrfs</filesystem>
	  <mkfs_options>-I 128</mkfs_options>
	  <partition_nr config:type="integer">1</partition_nr>
	  <partition_id config:type="integer">131</partition_id>
	  <partition_type>primary</partition_type>
	  <mountby config:type="symbol">label</mountby>
	  <subvolumes config:type="list">
	    <path>tmp</path>
	    <path>opt</path>
	    <path>srv</path>
	    <path>var/crash</path>
	    <path>var/lock</path>
	    <path>var/run</path>
	    <path>var/tmp</path>
	    <path>var/spool</path>
	  </subvolumes>
	  <create_subvolumes config:type="boolean" >false</create_subvolumes>
	  <subvolumes_prefix>@</subvolumes_prefix>
	  <lv_name>opt_lv</lv_name>
	  <stripes config:type="integer">2</stripes>
	  <stripesize config:type="integer">4</stripesize>
	  <lvm_group>system</lvm_group>
	  <pool config:type="boolean">false</pool>
	  <used_pool>my_thin_pool</used_pool>
	  <raid_name>/dev/md/0</raid_name>
	  <raid_options>
	    <chunk_size>4</chunk_size>
	    <parity_algorithm>left_asymmetric</parity_algorithm>
	    <raid_type>raid1</raid_type>
	    <device_order config:type="list">
              <device>/dev/sdb2</device>
              <device>/dev/sda1</device>
            </device_order>
	  </raid_options>
	  <bcache_backing_for>/dev/bcache0</bcache_backing_for>
	  <bcache_caching_for config:type="list">
	    <listentry>/dev/bcache0</listentry>
	  </bcache_caching_for>
	  <resize config:type="boolean">false</resize>
	</partition>
      </partitions>
      <use>all</use>
      <type config:type="symbol">CT_DISK</type>
      <disklabel>gpt</disklabel>
      <enable_snapshots config:type="boolean">true</enable_snapshots>
    </drive>
  </partitioning>
</profile>
0707010000007E000081A40000000000000000000000016130D1CF000005B5000000000000000000000000000000000000004100000000yomi-0.0.1+git.1630589391.4557cfd/tests/fixtures/ay_lvm_ext3.xml<?xml version="1.0"?>
<!DOCTYPE profile>
<profile xmlns="http://www.suse.com/1.0/yast2ns" xmlns:config="http://www.suse.com/1.0/configns">
  <partitioning config:type="list">
    <drive>
      <device>/dev/sda</device>
      <partitions config:type="list">
	<partition>
	  <!-- <create config:type="boolean">true</create> -->
          <lvm_group>system</lvm_group>
          <partition_type>primary</partition_type>
	  <partition_id config:type="integer">142</partition_id>
	  <partition_nr config:type="integer">1</partition_nr>
          <size>max</size>
	</partition>
      </partitions>
      <use>all</use>
    </drive>
    <drive>
      <device>/dev/system</device>
      <is_lvm_vg config:type="boolean">true</is_lvm_vg>
      <partitions config:type="list">
        <partition>
          <filesystem config:type="symbol">ext3</filesystem>
          <lv_name>user_lv</lv_name>
          <mount>/usr</mount>
          <size>15G</size>
        </partition>
        <partition>
          <filesystem config:type="symbol">ext3</filesystem>
          <lv_name>opt_lv</lv_name>
          <mount>/opt</mount>
          <size>10G</size>
        </partition>
        <partition>
          <filesystem config:type="symbol">ext3</filesystem>
          <lv_name>var_lv</lv_name>
          <mount>/var</mount>
          <size>1G</size>
        </partition>
      </partitions>
      <pesize>4M</pesize>
      <use>all</use>
    </drive>
  </partitioning>
</profile>
0707010000007F000081A40000000000000000000000016130D1CF00000565000000000000000000000000000000000000004200000000yomi-0.0.1+git.1630589391.4557cfd/tests/fixtures/ay_raid_ext3.xml<?xml version="1.0"?>
<!DOCTYPE profile>
<profile xmlns="http://www.suse.com/1.0/yast2ns" xmlns:config="http://www.suse.com/1.0/configns">
  <partitioning config:type="list">
    <drive>
      <device>/dev/sda</device>
      <partitions config:type="list">
	<partition>
	  <filesystem config:type="symbol">ext3</filesystem>
          <mount>/</mount>
          <size>20G</size>
	</partition>
	<partition>
          <raid_name>/dev/md/0</raid_name>
          <size>max</size>
	</partition>
      </partitions>
      <use>all</use>
    </drive>
    <drive>
      <device>/dev/sdb</device>
      <disklabel>none</disklabel>
      <partitions config:type="list">
	<partition>
          <raid_name>/dev/md/0</raid_name>
	</partition>
      </partitions>
      <use>all</use>
    </drive>
    <drive>
      <device>/dev/md/0</device>
      <partitions config:type="list">
	<partition>
	  <filesystem config:type="symbol">ext3</filesystem>
          <mount>/home</mount>
          <size>40G</size>
	</partition>
	<partition>
	  <filesystem config:type="symbol">ext3</filesystem>
          <mount>/srv</mount>
          <size>10G</size>
	</partition>
      </partitions>
      <raid_options>
	<chunk_size>4</chunk_size>
	<parity_algorithm>left_asymmetric</parity_algorithm>
	<raid_type>raid1</raid_type>
      </raid_options>
      <use>all</use>
    </drive>
  </partitioning>
</profile>
07070100000080000081A40000000000000000000000016130D1CF00000259000000000000000000000000000000000000004F00000000yomi-0.0.1+git.1630589391.4557cfd/tests/fixtures/ay_raid_no_partition_ext3.xml<?xml version="1.0"?>
<!DOCTYPE profile>
<profile xmlns="http://www.suse.com/1.0/yast2ns" xmlns:config="http://www.suse.com/1.0/configns">
  <partitioning config:type="list">
    <drive>
      <device>/dev/md/0</device>
      <disklabel>none</disklabel>
      <partitions config:type="list">
	<partition>
	  <mount>/home</mount>
	  <size>40G</size>
	</partition>
      </partitions>
      <raid_options>
	<chunk_size>4</chunk_size>
	<parity_algorithm>left_asymmetric</parity_algorithm>
	<raid_type>raid1</raid_type>
      </raid_options>
      <use>all</use>
    </drive>
  </partitioning>
</profile>
07070100000081000081A40000000000000000000000016130D1CF000006B4000000000000000000000000000000000000004500000000yomi-0.0.1+git.1630589391.4557cfd/tests/fixtures/ay_single_btrfs.xml<?xml version="1.0"?>
<!DOCTYPE profile>
<profile xmlns="http://www.suse.com/1.0/yast2ns" xmlns:config="http://www.suse.com/1.0/configns">

  <partitioning config:type="list">
    <drive>
      <device>/dev/sda</device>
      <initialize config:type="boolean">true</initialize>
      <partitions config:type="list">
	<partition>
	  <create config:type="boolean">true</create>
	  <size>1M</size>
	  <format config:type="boolean">false</format>
	  <partition_nr config:type="integer">1</partition_nr>
	</partition>
	<partition>
	  <create config:type="boolean">true</create>
	  <mount>swap</mount>
	  <size>2G</size>
	  <format config:type="boolean">true</format>
	  <filesystem config:type="symbol">swap</filesystem>
	  <partition_nr config:type="integer">2</partition_nr>
	  <partition_id config:type="integer">130</partition_id>
	</partition>
	<partition>
	  <create config:type="boolean">true</create>
	  <mount>/</mount>
	  <size>max</size>
	  <format config:type="boolean">true</format>
	  <filesystem config:type="symbol">btrfs</filesystem>
	  <partition_nr config:type="integer">3</partition_nr>
	  <partition_id config:type="integer">131</partition_id>
	  <subvolumes config:type="list">
	    <listentry>tmp</listentry>
	    <listentry>opt</listentry>
	    <listentry>srv</listentry>
	    <listentry>
	      <path>var/lib/pgsql</path>
	      <copy_on_write config:type="boolean">false</copy_on_write>
	    </listentry>
	  </subvolumes>
	  <subvolumes_prefix>@</subvolumes_prefix>
	</partition>
      </partitions>
      <use>all</use>
      <type>CT_DISK</type>
      <disklabel>gpt</disklabel>
      <enable_snapshots config:type="boolean">false</enable_snapshots>
    </drive>
  </partitioning>

</profile>
07070100000082000081A40000000000000000000000016130D1CF00003902000000000000000000000000000000000000004400000000yomi-0.0.1+git.1630589391.4557cfd/tests/fixtures/ay_single_ext3.xml<?xml version="1.0"?>
<!DOCTYPE profile>
<profile xmlns="http://www.suse.com/1.0/yast2ns" xmlns:config="http://www.suse.com/1.0/configns">
  <general>
    <mode>
      <activate_systemd_default_target config:type="boolean">
	true
      </activate_systemd_default_target>
      <confirm config:type="boolean">true</confirm>
      <confirm_base_product_license config:type="boolean">
	false
      </confirm_base_product_license>
      <final_halt config:type="boolean">false</final_halt>
      <final_reboot config:type="boolean">true</final_reboot>
      <final_restart_services config:type="boolean">
	true
      </final_restart_services>
      <forceboot config:type="boolean">false</forceboot>
      <halt config:type="boolean">false</halt>
      <max_systemd_wait config:type="integer">30</max_systemd_wait>
      <ntp_sync_time_before_installation>
	0.de.pool.ntp.org
      </ntp_sync_time_before_installation>
      <second_stage config:type="boolean">true</second_stage>
    </mode>
    <proposals config:type="list">
      <proposal>partitions_proposal</proposal>
      <proposal>timezone_proposal</proposal>
      <proposal>software_proposal</proposal>
    </proposals>
    <self_update config:type="boolean">true</self_update>
    <self_update_url>
      http://example.com/updates/$arch
    </self_update_url>
    <semi-automatic config:type="list">
      <semi-automatic_entry>networking</semi-automatic_entry>
      <semi-automatic_entry>scc</semi-automatic_entry>
      <semi-automatic_entry>partitioning</semi-automatic_entry>
    </semi-automatic>
    <signature-handling>
      <accept_unsigned_file config:type="boolean">
	false
      </accept_unsigned_file>
      <accept_file_without_checksum config:type="boolean">
	false
      </accept_file_without_checksum>
      <accept_verification_failed config:type="boolean">
	false
      </accept_verification_failed>
      <accept_unknown_gpg_key config:type="boolean">
	false
      </accept_unknown_gpg_key>
      <accept_non_trusted_gpg_key config:type="boolean">
	false
      </accept_non_trusted_gpg_key>
      <import_gpg_key config:type="boolean">
	false
      </import_gpg_key>
    </signature-handling>
    <storage>
      <start_multipath config:type="boolean">false</start_multipath>
    </storage>
    <wait>
      <pre-modules config:type="list">
	<module>
	  <name>networking</name>
	  <sleep>
	    <time config:type="integer">10</time>
	    <feedback config:type="boolean">true</feedback>
	  </sleep>
	  <script>
	    <source>echo foo</source>
	    <debug config:type="boolean">false</debug>
	  </script>
	</module>
      </pre-modules>
      <post-modules config:type="list">
	<module>
	  <name>networking</name>
	  <sleep>
	    <time config:type="integer">10</time>
	    <feedback config:type="boolean">true</feedback>
	  </sleep>
	  <script>
	    <source>echo foo</source>
	    <debug config:type="boolean">false</debug>
	  </script>
	</module>
      </post-modules>
    </wait>
    <cio_ignore config:type="boolean">false</cio_ignore>
  </general>

  <report>
    <errors>
      <show config:type="boolean">true</show>
      <timeout config:type="integer">0</timeout>
      <log config:type="boolean">true</log>
    </errors>
    <warnings>
      <show config:type="boolean">true</show>
      <timeout config:type="integer">10</timeout>
      <log config:type="boolean">true</log>
    </warnings>
    <messages>
      <show config:type="boolean">true</show>
      <timeout config:type="integer">10</timeout>
      <log config:type="boolean">true</log>
    </messages>
    <yesno_messages>
      <show config:type="boolean">true</show>
      <timeout config:type="integer">10</timeout>
      <log config:type="boolean">true</log>
    </yesno_messages>
  </report>

  <suse_register>
    <do_registration config:type="boolean">true</do_registration>
    <email>tux@example.com</email>
    <reg_code>MY_SECRET_REGCODE</reg_code>
    <install_updates config:type="boolean">true</install_updates>
    <slp_discovery config:type="boolean">false</slp_discovery>
    <reg_server>
      https://smt.example.com
    </reg_server>
    <reg_server_cert_fingerprint_type>
      SHA1
    </reg_server_cert_fingerprint_type>
    <reg_server_cert_fingerprint>
      01:AB...:EF
    </reg_server_cert_fingerprint>
    <reg_server_cert>
      http://smt.example.com/smt.crt
    </reg_server_cert>
    <addons config:type="list">
      <addon>
	<name>sle-module-basesystem</name>
	<version>15.1</version>
	<arch>x86_64</arch>
      </addon>
    </addons>
  </suse_register>

  <bootloader>
    <loader_type>
      grub2-efi
    </loader_type>
    <global>
      <activate config:type="boolean">true</activate>
      <append>nomodeset vga=0x317</append>
      <boot_boot>false</boot_boot>
      <boot_custom>/dev/sda</boot_custom>
      <boot_extended>false</boot_extended>
      <boot_mbr>false</boot_mbr>
      <boot_root>false</boot_root>
      <generic_mbr config:type="boolean">false</generic_mbr>
      <gfxmode>1280x1024x24</gfxmode>
      <os_prober config:type="boolean">false</os_prober>
      <cpu_mitigations>auto</cpu_mitigations>
      <suse_btrfs config:type="boolean">true</suse_btrfs>
      <serial>
	serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1
      </serial>
      <terminal>serial</terminal>
      <timeout config:type="integer">10</timeout>
      <trusted_boot config:type="boolean">true</trusted_boot>
      <vgamode>0x317</vgamode>
      <xen_append>nomodeset vga=0x317</xen_append>
      <xen_kernel_append>dom0_mem=768M</xen_kernel_append>
    </global>
    <device_map config:type="list">
      <device_map_entry>
	<firmware>hd0</firmware>
	<linux>/dev/disk/by-id/ata-ST3500418AS_6VM23FX0</linux>
      </device_map_entry>
    </device_map>
  </bootloader>

  <partitioning config:type="list">
    <drive>
      <device>/dev/sda</device>
      <initialize config:type="boolean">true</initialize>
      <partitions config:type="list">
	<partition>
	  <create config:type="boolean">true</create>
	  <size>1M</size>
	  <format config:type="boolean">false</format>
	  <partition_nr config:type="integer">1</partition_nr>
	</partition>
	<partition>
	  <create config:type="boolean">true</create>
	  <mount>swap</mount>
	  <size>2G</size>
	  <format config:type="boolean">true</format>
	  <filesystem config:type="symbol">swap</filesystem>
	  <partition_nr config:type="integer">2</partition_nr>
	  <partition_id config:type="integer">130</partition_id>
	</partition>
	<partition>
	  <create config:type="boolean">true</create>
	  <mount>/</mount>
	  <size>max</size>
	  <format config:type="boolean">true</format>
	  <filesystem config:type="symbol">ext3</filesystem>
	  <partition_nr config:type="integer">3</partition_nr>
	  <partition_id config:type="integer">131</partition_id>
	</partition>
      </partitions>
      <use>all</use>
      <type>CT_DISK</type>
      <disklabel>gpt</disklabel>
      <enable_snapshots config:type="boolean">false</enable_snapshots>
    </drive>
  </partitioning>

  <language>
    <language>en_GB</language>
    <languages>de_DE,en_US</languages>
  </language>

  <timezone>
    <hwclock>UTC</hwclock>
    <timezone>Europe/Berlin</timezone>
  </timezone>

  <keyboard>
    <keymap>german</keymap>
  </keyboard>

  <software>
    <products config:type="list">
      <product>SLED</product>
    </products>
    <patterns config:type="list">
      <pattern>directory_server</pattern>
    </patterns>
    <packages config:type="list">
      <package>apache</package>
      <package>postfix</package>
    </packages>
    <remove-packages config:type="list">
      <package>postfix</package>
    </remove-packages>
    <do_online_update config:type="boolean">true</do_online_update>
    <kernel>kernel-default</kernel>
    <install_recommended config:type="boolean">false</install_recommended>
    <post-packages config:type="list">
      <package>yast2-cim</package>
    </post-packages>
    <post-patterns config:type="list">
      <pattern>apparmor</pattern>
    </post-patterns>
  </software>

  <add-on>
    <add_on_products config:type="list">
      <listentry>
	<media_url>cd:///sdk</media_url>
	<product>sle-sdk</product>
	<alias>SLES SDK</alias>
	<product_dir>/</product_dir>
	<priority config:type="integer">20</priority>
	<ask_on_error config:type="boolean">false</ask_on_error>
	<confirm_license config:type="boolean">false</confirm_license>
	<name>SUSE Linux Enterprise Software Development Kit</name>
      </listentry>
    </add_on_products>
    <add_on_others config:type="list">
      <listentry>
	<media_url>https://download.opensuse.org/repositories/YaST:/Head/openSUSE_Leap_15.1/</media_url>
	<alias>yast2_head</alias>
	<priority config:type="integer">30</priority>
	<name>Latest YaST2 packages from OBS</name>
      </listentry>
    </add_on_others>
  </add-on>

  <services-manager>
    <default_target>multi-user</default_target>
    <services>
      <disable config:type="list">
	<service>libvirtd</service>
      </disable>
      <enable config:type="list">
	<service>sshd</service>
      </enable>
      <on_demand config:type="list">
	<service>cups</service>
      </on_demand>
    </services>
  </services-manager>

  <networking>
    <dns>
      <dhcp_hostname config:type="boolean">true</dhcp_hostname>
      <domain>site</domain>
      <hostname>linux-bqua</hostname>
      <nameservers config:type="list">
	<nameserver>192.168.1.116</nameserver>
	<nameserver>192.168.1.117</nameserver>
	<nameserver>192.168.1.118</nameserver>
      </nameservers>
      <resolv_conf_policy>auto</resolv_conf_policy>
      <searchlist config:type="list">
	<search>example.com</search>
	<search>example.net</search>
      </searchlist>
      <write_hostname config:type="boolean">false</write_hostname>
    </dns>
    <interfaces config:type="list">
      <interface>
	<bootproto>dhcp</bootproto>
	<device>eth0</device>
	<startmode>auto</startmode>
      </interface>
      <interface>
	<bootproto>static</bootproto>
	<broadcast>127.255.255.255</broadcast>
	<device>lo</device>
	<firewall>no</firewall>
	<ipaddr>127.0.0.1</ipaddr>
	<netmask>255.0.0.0</netmask>
	<network>127.0.0.0</network>
	<prefixlen>8</prefixlen>
	<startmode>nfsroot</startmode>
	<usercontrol>no</usercontrol>
      </interface>
    </interfaces>
    <ipv6 config:type="boolean">true</ipv6>
    <keep_install_network config:type="boolean">false</keep_install_network>
    <managed config:type="boolean">false</managed>       ###### NetworkManager?
    <net-udev config:type="list">
      <rule>
	<name>eth0</name>
	<rule>ATTR{address}</rule>
	<value>00:30:6E:08:EC:80</value>
      </rule>
    </net-udev>
    <s390-devices config:type="list">
      <listentry>
	<chanids>0.0.0800 0.0.0801 0.0.0802</chanids>
	<type>qeth</type>
      </listentry>
    </s390-devices>
    <routing>
      <ipv4_forward config:type="boolean">false</ipv4_forward>
      <ipv6_forward config:type="boolean">false</ipv6_forward>
      <routes config:type="list">
	<route>
          <destination>192.168.2.1</destination>
          <device>eth0</device>
          <extrapara>foo</extrapara>
          <gateway>-</gateway>
          <netmask>-</netmask>
	</route>
	<route>
          <destination>default</destination>
          <device>eth0</device>
          <gateway>192.168.1.1</gateway>
          <netmask>-</netmask>
	</route>
	<route>
          <destination>default</destination>
          <device>lo</device>
          <gateway>192.168.5.1</gateway>
          <netmask>-</netmask>
	</route>
      </routes>
    </routing>
  </networking>

  <users config:type="list">
    <user>
      <username>root</username>
      <user_password>password</user_password>
      <uid>1001</uid>
      <gid>100</gid>
      <encrypted config:type="boolean">false</encrypted>
      <fullname>Root User</fullname>
      <authorized_keys config:type="list">
	<listentry>command="/opt/login.sh" ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDKLt1vnW2vTJpBp3VK91rFsBvpY97NljsVLdgUrlPbZ/L51FerQQ+djQ/ivDASQjO+567nMGqfYGFA/De1EGMMEoeShza67qjNi14L1HBGgVojaNajMR/NI2d1kDyvsgRy7D7FT5UGGUNT0dlcSD3b85zwgHeYLidgcGIoKeRi7HpVDOOTyhwUv4sq3ubrPCWARgPeOLdVFa9clC8PTZdxSeKp4jpNjIHEyREPin2Un1luCIPWrOYyym7aRJEPopCEqBA9HvfwpbuwBI5F0uIWZgSQLfpwW86599fBo/PvMDa96DpxH1VlzJlAIHQsMkMHbsCazPNC0++Kp5ZVERiH root@example.net</listentry>
      </authorized_keys>
    </user>
    <user>
      <username>tux</username>
      <user_password>password</user_password>
      <uid>1002</uid>
      <gid>100</gid>
      <encrypted config:type="boolean">false</encrypted>
      <fullname>Plain User</fullname>
      <home>/Users/plain</home>
      <password_settings>
	<max>120</max>
	<inact>5</inact>
      </password_settings>
    </user>
  </users>

  <groups config:type="list">
    <group>
      <gid>100</gid>
      <groupname>users</groupname>
      <userlist>bob,alice</userlist>
    </group>
  </groups>

  <login_settings>
    <autologin_user>vagrant</autologin_user>
    <password_less_login config:type="boolean">true</password_less_login>
  </login_settings>

  <sysconfig config:type="list" >
    <sysconfig_entry>
      <sysconfig_key>XNTPD_INITIAL_NTPDATE</sysconfig_key>
      <sysconfig_path>/etc/sysconfig/xntp</sysconfig_path>
      <sysconfig_value>ntp.host.com</sysconfig_value>
    </sysconfig_entry>
    <sysconfig_entry>
      <sysconfig_key>HTTP_PROXY</sysconfig_key>
      <sysconfig_path>/etc/sysconfig/proxy</sysconfig_path>
      <sysconfig_value>proxy.host.com:3128</sysconfig_value>
    </sysconfig_entry>
    <sysconfig_entry>
      <sysconfig_key>FTP_PROXY</sysconfig_key>
      <sysconfig_path>/etc/sysconfig/proxy</sysconfig_path>
      <sysconfig_value>proxy.host.com:3128</sysconfig_value>
    </sysconfig_entry>
  </sysconfig>

  <firewall>
    <enable_firewall>true</enable_firewall>
    <log_denied_packets>all</log_denied_packets>
    <default_zone>external</default_zone>
    <zones config:type="list">
      <zone>
	<name>public</name>
	<interfaces config:type="list">
          <interface>eth0</interface>
	</interfaces>
	<services config:type="list">
          <service>ssh</service>
          <service>dhcp</service>
          <service>dhcpv6</service>
          <service>samba</service>
          <service>vnc-server</service>
	</services>
	<ports config:type="list">
          <port>21/udp</port>
          <port>22/udp</port>
          <port>80/tcp</port>
          <port>443/tcp</port>
          <port>8080/tcp</port>
	</ports>
      </zone>
      <zone>
	<name>dmz</name>
	<interfaces config:type="list">
          <interface>eth1</interface>
	</interfaces>
      </zone>
    </zones>
  </firewall>

</profile>
07070100000083000081A40000000000000000000000016130D1CF00000983000000000000000000000000000000000000004500000000yomi-0.0.1+git.1630589391.4557cfd/tests/fixtures/list_extensions.txt[1mAVAILABLE EXTENSIONS AND MODULES[0m

    [1mBasesystem Module 15 SP2 x86_64[0m [33m(Activated)[0m
    Deactivate with: SUSEConnect [31m-d[0m -p sle-module-basesystem/15.2/x86_64

        [1mContainers Module 15 SP2 x86_64[0m
        Activate with: SUSEConnect -p sle-module-containers/15.2/x86_64

        [1mDesktop Applications Module 15 SP2 x86_64[0m
        Activate with: SUSEConnect -p sle-module-desktop-applications/15.2/x86_64

            [1mDevelopment Tools Module 15 SP2 x86_64[0m
            Activate with: SUSEConnect -p sle-module-development-tools/15.2/x86_64

            [1mSUSE Linux Enterprise Workstation Extension 15 SP2 x86_64 (ALPHA)[0m
            Activate with: SUSEConnect -p sle-we/15.2/x86_64 -r [32m[1mADDITIONAL REGCODE[0m 

        [1mPython 2 Module 15 SP2 x86_64[0m
        Activate with: SUSEConnect -p sle-module-python2/15.2/x86_64

        [1mSUSE Linux Enterprise Live Patching 15 SP2 x86_64 (ALPHA)[0m
        Activate with: SUSEConnect -p sle-module-live-patching/15.2/x86_64 -r [32m[1mADDITIONAL REGCODE[0m 

        [1mSUSE Package Hub 15 SP2 x86_64[0m
        Activate with: SUSEConnect -p PackageHub/15.2/x86_64

        [1mServer Applications Module 15 SP2 x86_64[0m [33m(Activated)[0m
        Deactivate with: SUSEConnect [31m-d[0m -p sle-module-server-applications/15.2/x86_64

            [1mLegacy Module 15 SP2 x86_64[0m
            Activate with: SUSEConnect -p sle-module-legacy/15.2/x86_64

            [1mPublic Cloud Module 15 SP2 x86_64[0m
            Activate with: SUSEConnect -p sle-module-public-cloud/15.2/x86_64

            [1mSUSE Linux Enterprise High Availability Extension 15 SP2 x86_64 (ALPHA)[0m
            Activate with: SUSEConnect -p sle-ha/15.2/x86_64 -r [32m[1mADDITIONAL REGCODE[0m 

            [1mWeb and Scripting Module 15 SP2 x86_64[0m
            Activate with: SUSEConnect -p sle-module-web-scripting/15.2/x86_64

        [1mTransactional Server Module 15 SP2 x86_64[0m
        Activate with: SUSEConnect -p sle-module-transactional-server/15.2/x86_64


[1mREMARKS[0m

[31m(Not available)[0m The module/extension is [1mnot[0m enabled on your RMT/SMT
[33m(Activated)[0m     The module/extension is activated on your system

[1mMORE INFORMATION[0m

You can find more information about available modules here:
https://www.suse.com/documentation/sles-15/singlehtml/art_modules/art_modules.html
07070100000084000081ED0000000000000000000000016130D1CF000006FB000000000000000000000000000000000000003500000000yomi-0.0.1+git.1630589391.4557cfd/tests/run_tests.sh#! /bin/bash

#
# Author: Alberto Planas <aplanas@suse.com>
#
# Copyright 2019 SUSE LLC.
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.

# Setup the environment so the Python modules living in salt/_* can be
# found.

cd "$(dirname "${BASH_SOURCE[0]}")"

test_env=$(mktemp -d -t tmp.XXXX)
tear_down() {
    rm -fr "$test_env"
}

trap tear_down EXIT

# Create the temporary Python modules, that once added in the
# PYTHON_PATH can be found and imported
touch "$test_env"/__init__.py
for module in modules states grains utils; do
    mkdir "$test_env"/"$module"
    touch "$test_env"/"$module"/__init__.py
    [ "$(ls -A ../salt/_"$module")" ] && ln -sr ../salt/_"$module"/* "$test_env"/"$module"/
done

for binary in autoyast2yomi monitor; do
    ln -sr ../"$binary" "$test_env"/"$binary".py
done

if [ -z $PYTHONPATH ]; then
    export PYTHONPATH="$test_env":"$test_env"/utils:.
else
    export PYTHONPATH="$PATHONPATH":"$test_env":"$test_env"/utils:.
fi

if [ -z "$*" ]; then
    python3 -m unittest discover
else
    python3 -m unittest "$@"
fi
07070100000085000081A40000000000000000000000016130D1CF00005A1B000000000000000000000000000000000000003E00000000yomi-0.0.1+git.1630589391.4557cfd/tests/test_autoyast2yomi.py# -*- coding: utf-8 -*-
#
# Author: Alberto Planas <aplanas@suse.com>
#
# Copyright 2019 SUSE LLC.
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.

import os.path
import unittest
from unittest.mock import patch
import xml.etree.ElementTree as ET

import autoyast2yomi


class AutoYaST2YomiTestCase(unittest.TestCase):
    def _parse_xml(self, name):
        name = os.path.join(os.path.dirname(__file__), "fixtures/{}".format(name))
        return ET.parse(name)

    def setUp(self):
        self.maxDiff = None

    def test__find(self):
        control = self._parse_xml("ay_single_ext3.xml")
        general = autoyast2yomi.Convert._find(control.getroot(), "general")
        self.assertEqual(general.tag, "{http://www.suse.com/1.0/yast2ns}general")

        non_existent = autoyast2yomi.Convert._find(control, "non-existent")
        self.assertIsNone(non_existent)

    def test__get_tag(self):
        control = ET.fromstring('<a xmlns="http://www.suse.com/1.0/yast2ns"><b/></a>')
        self.assertEqual(autoyast2yomi.Convert._get_tag(control[0]), "b")

    def test__get_type(self):
        control = ET.fromstring(
            '<a xmlns="http://www.suse.com/1.0/yast2ns" '
            'xmlns:config="http://www.suse.com/1.0/configns">'
            '<b config:type="integer"/></a>'
        )
        self.assertEqual(autoyast2yomi.Convert._get_type(control[0]), "integer")

        control = ET.fromstring(
            '<a xmlns="http://www.suse.com/1.0/yast2ns" '
            'xmlns:config="http://www.suse.com/1.0/configns">'
            "<b/></a>"
        )
        self.assertIsNone(autoyast2yomi.Convert._get_type(control[0]))

    def test__get_text(self):
        control = ET.fromstring(
            '<a xmlns="http://www.suse.com/1.0/yast2ns"><b>text</b></a>'
        )
        value = autoyast2yomi.Convert._get_text(control[0])
        self.assertEqual(value, "text")

        non_text = autoyast2yomi.Convert._get_text(None)
        self.assertIsNone(non_text)

    def test__get_bool(self):
        control = ET.fromstring(
            '<a xmlns="http://www.suse.com/1.0/yast2ns"><b>true</b>' "<c>false</c></a>"
        )
        value = autoyast2yomi.Convert._get_bool(control[0])
        self.assertTrue(value)

        value = autoyast2yomi.Convert._get_bool(control[1])
        self.assertFalse(value)

        non_bool = autoyast2yomi.Convert._get_bool(control)
        self.assertIsNone(non_bool)

    def test__get_int(self):
        control = ET.fromstring(
            '<a xmlns="http://www.suse.com/1.0/yast2ns"><b>0</b>' "<c>1</c></a>"
        )
        value = autoyast2yomi.Convert._get_int(control[0])
        self.assertEqual(value, 0)

        value = autoyast2yomi.Convert._get_int(control[1])
        self.assertEqual(value, 1)

        non_int = autoyast2yomi.Convert._get_int(control)
        self.assertIsNone(non_int)

    def test__parse_single_text(self):
        control = ET.fromstring('<a xmlns="http://www.suse.com/1.0/yast2ns">text</a>')
        self.assertEqual(autoyast2yomi.Convert._parse(control), {"a": "text"})

    def test__parse_single_bool(self):
        control = ET.fromstring(
            '<a xmlns="http://www.suse.com/1.0/yast2ns" '
            'xmlns:config="http://www.suse.com/1.0/configns" '
            'config:type="boolean">true</a>'
        )
        self.assertEqual(autoyast2yomi.Convert._parse(control), {"a": True})

    def test__parse_single_int(self):
        control = ET.fromstring(
            '<a xmlns="http://www.suse.com/1.0/yast2ns" '
            'xmlns:config="http://www.suse.com/1.0/configns" '
            'config:type="integer">10</a>'
        )
        self.assertEqual(autoyast2yomi.Convert._parse(control), {"a": 10})

    def test__parse_single_list(self):
        control = ET.fromstring(
            '<a xmlns="http://www.suse.com/1.0/yast2ns" '
            'xmlns:config="http://www.suse.com/1.0/configns" '
            'config:type="list"><b>one</b><b>two</b></a>'
        )
        self.assertEqual(autoyast2yomi.Convert._parse(control), {"a": ["one", "two"]})

    def test__parse_single_dict(self):
        control = ET.fromstring(
            '<a xmlns="http://www.suse.com/1.0/yast2ns">' "<b>text</b><c>other</c></a>"
        )
        self.assertEqual(
            autoyast2yomi.Convert._parse(control), {"a": {"b": "text", "c": "other"}}
        )

    def test__parse_complex(self):
        control = self._parse_xml("ay_complex.xml").getroot()
        self.assertEqual(
            autoyast2yomi.Convert._parse(control),
            {
                "profile": {
                    "partitioning": [
                        {
                            "device": "/dev/sda",
                            "disklabel": "gpt",
                            "enable_snapshots": True,
                            "initialize": True,
                            "partitions": [
                                {
                                    "bcache_backing_for": "/dev/bcache0",
                                    "bcache_caching_for": ["/dev/bcache0"],
                                    "create": False,
                                    "create_subvolumes": False,
                                    "crypt_fs": False,
                                    "filesystem": "btrfs",
                                    "fstopt": (
                                        "ro,noatime,user,data=ordered," "acl,user_xattr"
                                    ),
                                    "label": "mydata",
                                    "lv_name": "opt_lv",
                                    "lvm_group": "system",
                                    "mkfs_options": "-I 128",
                                    "mount": "/",
                                    "mountby": "label",
                                    "partition_id": 131,
                                    "partition_nr": 1,
                                    "partition_type": "primary",
                                    "pool": False,
                                    "raid_name": "/dev/md/0",
                                    "raid_options": {
                                        "chunk_size": "4",
                                        "device_order": ["/dev/sdb2", "/dev/sda1"],
                                        "parity_algorithm": "left_asymmetric",
                                        "raid_type": "raid1",
                                    },
                                    "resize": False,
                                    "size": "10G",
                                    "stripes": 2,
                                    "stripesize": 4,
                                    "subvolumes": [
                                        "tmp",
                                        "opt",
                                        "srv",
                                        "var/crash",
                                        "var/lock",
                                        "var/run",
                                        "var/tmp",
                                        "var/spool",
                                    ],
                                    "subvolumes_prefix": "@",
                                    "used_pool": "my_thin_pool",
                                    "uuid": "UUID",
                                }
                            ],
                            "type": "CT_DISK",
                            "use": "all",
                        }
                    ]
                }
            },
        )

    @patch("autoyast2yomi.logging")
    def test__convert_config_single_ext3(self, logging):
        control = self._parse_xml("ay_single_ext3.xml")
        convert = autoyast2yomi.Convert(control)
        convert._control = autoyast2yomi.Convert._parse(control.getroot())
        convert._convert_config()
        self.assertEqual(
            convert.pillar,
            {
                "config": {
                    "events": True,
                    "reboot": True,
                    "snapper": False,
                    "locale": "en_GB",
                    "keymap": "de-nodeadkeys",
                    "timezone": "Europe/Berlin",
                    "hostname": "linux-bqua",
                    "target": "multi-user",
                }
            },
        )

    @patch("autoyast2yomi.logging")
    def test__convert_partitions_single_ext3(self, logging):
        control = self._parse_xml("ay_single_ext3.xml")
        convert = autoyast2yomi.Convert(control)
        convert._control = autoyast2yomi.Convert._parse(control.getroot())
        convert._convert_partitions()
        self.assertEqual(
            convert.pillar,
            {
                "partitions": {
                    "devices": {
                        "/dev/sda": {
                            "label": "gpt",
                            "partitions": [
                                {"number": 1, "size": "1M", "type": "boot"},
                                {"number": 2, "size": "2G", "type": "swap"},
                                {"number": 3, "size": "rest", "type": "linux"},
                            ],
                        }
                    }
                }
            },
        )

    @patch("autoyast2yomi.logging")
    def test__convert_partitions_lvm_ext3(self, logging):
        control = self._parse_xml("ay_lvm_ext3.xml")
        convert = autoyast2yomi.Convert(control)
        convert._control = autoyast2yomi.Convert._parse(control.getroot())
        convert._convert_partitions()
        self.assertEqual(
            convert.pillar,
            {
                "partitions": {
                    "devices": {
                        "/dev/sda": {
                            "label": "gpt",
                            "partitions": [
                                {"number": 1, "size": "rest", "type": "lvm"}
                            ],
                        }
                    }
                }
            },
        )

    @patch("autoyast2yomi.logging")
    def test__convert_partitions_raid_ext3(self, logging):
        control = self._parse_xml("ay_raid_ext3.xml")
        convert = autoyast2yomi.Convert(control)
        convert._control = autoyast2yomi.Convert._parse(control.getroot())
        convert._convert_partitions()
        self.assertEqual(
            convert.pillar,
            {
                "partitions": {
                    "devices": {
                        "/dev/sda": {
                            "label": "gpt",
                            "partitions": [
                                {"number": 1, "size": "20G", "type": "linux"},
                                {"number": 2, "size": "rest", "type": "raid"},
                            ],
                        },
                        "/dev/sdb": {
                            "partitions": [
                                {"number": 1, "size": "rest", "type": "raid"},
                            ]
                        },
                        "/dev/md/0": {
                            "partitions": [
                                {"number": 1, "size": "40G", "type": "linux"},
                                {"number": 2, "size": "10G", "type": "linux"},
                            ]
                        },
                    }
                }
            },
        )

    @patch("autoyast2yomi.logging")
    def test__convert_lvm_ext3(self, logging):
        control = self._parse_xml("ay_lvm_ext3.xml")
        convert = autoyast2yomi.Convert(control)
        convert._control = autoyast2yomi.Convert._parse(control.getroot())
        convert._convert_lvm()
        self.assertEqual(
            convert.pillar,
            {
                "lvm": {
                    "system": {
                        "devices": ["/dev/sda1"],
                        "physicalextentsize": "4M",
                        "volumes": [
                            {"name": "user_lv", "size": "15G"},
                            {"name": "opt_lv", "size": "10G"},
                            {"name": "var_lv", "size": "1G"},
                        ],
                    }
                }
            },
        )

    @patch("autoyast2yomi.logging")
    def test__convert_raid_ext3(self, logging):
        control = self._parse_xml("ay_raid_ext3.xml")
        convert = autoyast2yomi.Convert(control)
        convert._control = autoyast2yomi.Convert._parse(control.getroot())
        convert._convert_raid()
        self.assertEqual(
            convert.pillar,
            {
                "raid": {
                    "/dev/md/0": {
                        "level": "raid1",
                        "devices": ["/dev/sda2", "/dev/sdb1"],
                        "chunk": "4",
                        "parity": "left-asymmetric",
                    }
                }
            },
        )

    @patch("autoyast2yomi.logging")
    def test__convert_filesystems_single_ext3(self, logging):
        control = self._parse_xml("ay_single_ext3.xml")
        convert = autoyast2yomi.Convert(control)
        convert._control = autoyast2yomi.Convert._parse(control.getroot())
        convert._convert_filesystems()
        self.assertEqual(
            convert.pillar,
            {
                "filesystems": {
                    "/dev/sda2": {"filesystem": "swap", "mountpoint": "swap"},
                    "/dev/sda3": {"filesystem": "ext3", "mountpoint": "/"},
                }
            },
        )

    @patch("autoyast2yomi.logging")
    def test__convert_filesystems_single_btrfs(self, logging):
        control = self._parse_xml("ay_single_btrfs.xml")
        convert = autoyast2yomi.Convert(control)
        convert._control = autoyast2yomi.Convert._parse(control.getroot())
        convert._convert_filesystems()
        self.assertEqual(
            convert.pillar,
            {
                "filesystems": {
                    "/dev/sda2": {"filesystem": "swap", "mountpoint": "swap"},
                    "/dev/sda3": {
                        "filesystem": "btrfs",
                        "mountpoint": "/",
                        "subvolumes": {
                            "prefix": "@",
                            "subvolume": [
                                {"path": "tmp"},
                                {"path": "opt"},
                                {"path": "srv"},
                                {"path": "var/lib/pgsql", "copy_on_write": False},
                            ],
                        },
                    },
                }
            },
        )

    @patch("autoyast2yomi.logging")
    def test__convert_filesystems_lvm_ext3(self, logging):
        control = self._parse_xml("ay_lvm_ext3.xml")
        convert = autoyast2yomi.Convert(control)
        convert._control = autoyast2yomi.Convert._parse(control.getroot())
        convert._convert_filesystems()
        self.assertEqual(
            convert.pillar,
            {
                "filesystems": {
                    "/dev/system/user_lv": {
                        "filesystem": "ext3",
                        "mountpoint": "/usr",
                    },
                    "/dev/system/opt_lv": {"filesystem": "ext3", "mountpoint": "/opt"},
                    "/dev/system/var_lv": {"filesystem": "ext3", "mountpoint": "/var"},
                }
            },
        )

    @patch("autoyast2yomi.logging")
    def test__convert_filesystems_raid_ext3(self, logging):
        control = self._parse_xml("ay_raid_ext3.xml")
        convert = autoyast2yomi.Convert(control)
        convert._control = autoyast2yomi.Convert._parse(control.getroot())
        convert._convert_filesystems()
        self.assertEqual(
            convert.pillar,
            {
                "filesystems": {
                    "/dev/sda1": {"filesystem": "ext3", "mountpoint": "/"},
                    "/dev/md/0p1": {"filesystem": "ext3", "mountpoint": "/home"},
                    "/dev/md/0p2": {"filesystem": "ext3", "mountpoint": "/srv"},
                }
            },
        )

    @patch("autoyast2yomi.logging")
    def test__convert_bootloader(self, logging):
        control = self._parse_xml("ay_single_ext3.xml")
        convert = autoyast2yomi.Convert(control)
        convert._control = autoyast2yomi.Convert._parse(control.getroot())
        convert._convert_bootloader()
        self.assertEqual(
            convert.pillar,
            {
                "bootloader": {
                    "device": "/dev/sda",
                    "timeout": 10,
                    "kernel": (
                        "splash=silent quiet nomodeset vga=0x317 "
                        "noibrs noibpb nopti nospectre_v2 nospectre_v1 "
                        "l1tf=off nospec_store_bypass_disable "
                        "no_stf_barrier mds=off mitigations=off"
                    ),
                    "terminal": "serial",
                    "serial_command": (
                        "serial --speed=115200 --unit=0 "
                        "--word=8 --parity=no --stop=1"
                    ),
                    "gfxmode": "1280x1024x24",
                    "theme": True,
                    "disable_os_prober": True,
                }
            },
        )

    @patch("autoyast2yomi.logging")
    def test__convert_software(self, logging):
        control = self._parse_xml("ay_single_ext3.xml")
        convert = autoyast2yomi.Convert(control)
        convert._control = autoyast2yomi.Convert._parse(control.getroot())
        convert._convert_software()
        self.assertEqual(
            convert.pillar,
            {
                "software": {
                    "config": {"minimal": True},
                    "repositories": {
                        "SLES SDK": "cd:///sdk",
                        "yast2_head": (
                            "https://download.opensuse.org/repositories"
                            "/YaST:/Head/openSUSE_Leap_15.1/"
                        ),
                    },
                    "packages": [
                        "product:SLED",
                        "pattern:directory_server",
                        "apache",
                        "postfix",
                        "kernel-default",
                    ],
                }
            },
        )

    @patch("autoyast2yomi.logging")
    def test__convert_suseconnect(self, logging):
        control = self._parse_xml("ay_single_ext3.xml")
        convert = autoyast2yomi.Convert(control)
        convert._control = autoyast2yomi.Convert._parse(control.getroot())
        convert._convert_suseconnect()
        self.assertEqual(
            convert.pillar,
            {
                "suseconnect": {
                    "config": {
                        "regcode": "MY_SECRET_REGCODE",
                        "email": "tux@example.com",
                        "url": "https://smt.example.com",
                    },
                    "products": ["sle-module-basesystem/15.1/x86_64"],
                    "packages": ["pattern:apparmor", "yast2-cim"],
                },
            },
        )

    @patch("autoyast2yomi.logging")
    def test__convert_salt_minion(self, logging):
        control = self._parse_xml("ay_single_ext3.xml")
        convert = autoyast2yomi.Convert(control)
        convert._control = autoyast2yomi.Convert._parse(control.getroot())
        convert._convert_salt_minion()
        self.assertEqual(convert.pillar, {"salt-minion": {"configure": True}})

    @patch("autoyast2yomi.logging")
    def test__convert_services(self, logging):
        control = self._parse_xml("ay_single_ext3.xml")
        convert = autoyast2yomi.Convert(control)
        convert._control = autoyast2yomi.Convert._parse(control.getroot())
        convert._convert_services()
        self.assertEqual(
            convert.pillar,
            {
                "services": {
                    "enabled": ["sshd.service", "cups.socket"],
                    "disabled": ["libvirtd.service", "cups.service"],
                }
            },
        )

    def test__password(self):
        self.assertEqual(autoyast2yomi.Convert._password({}), None)
        self.assertEqual(
            autoyast2yomi.Convert._password(
                {"user_password": "linux"}, salt="$1$wYJUgpM5"
            ),
            "$1$wYJUgpM5$RXMMeASDc035eX.NbYWFl0",
        )
        self.assertEqual(
            autoyast2yomi.Convert._password(
                {
                    "user_password": "$1$wYJUgpM5$RXMMeASDc035eX.NbYWFl0",
                    "encrypted": True,
                }
            ),
            "$1$wYJUgpM5$RXMMeASDc035eX.NbYWFl0",
        )

    @patch("autoyast2yomi.logging")
    @patch("autoyast2yomi.Convert._password")
    def test__convert_users(self, _password, logging):
        control = self._parse_xml("ay_single_ext3.xml")
        convert = autoyast2yomi.Convert(control)
        convert._control = autoyast2yomi.Convert._parse(control.getroot())
        _password.return_value = "<hash>"
        convert._convert_users()
        self.assertEqual(
            convert.pillar,
            {
                "users": [
                    {
                        "username": "root",
                        "password": "<hash>",
                        "certificates": [
                            (
                                "AAAAB3NzaC1yc2EAAAADAQABAAABAQDKLt1vnW2vTJpBp3VK91"
                                "rFsBvpY97NljsVLdgUrlPbZ/L51FerQQ+djQ/ivDASQjO+567n"
                                "MGqfYGFA/De1EGMMEoeShza67qjNi14L1HBGgVojaNajMR/NI2"
                                "d1kDyvsgRy7D7FT5UGGUNT0dlcSD3b85zwgHeYLidgcGIoKeRi"
                                "7HpVDOOTyhwUv4sq3ubrPCWARgPeOLdVFa9clC8PTZdxSeKp4j"
                                "pNjIHEyREPin2Un1luCIPWrOYyym7aRJEPopCEqBA9Hvfwpbuw"
                                "BI5F0uIWZgSQLfpwW86599fBo/PvMDa96DpxH1VlzJlAIHQsMk"
                                "MHbsCazPNC0++Kp5ZVERiH"
                            )
                        ],
                    },
                    {"username": "tux", "password": "<hash>"},
                ]
            },
        )
07070100000086000081A40000000000000000000000016130D1CF000101DE000000000000000000000000000000000000003800000000yomi-0.0.1+git.1630589391.4557cfd/tests/test_devices.py# -*- coding: utf-8 -*-
#
# Author: Alberto Planas <aplanas@suse.com>
#
# Copyright 2019 SUSE LLC.
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.

import textwrap
import unittest
from unittest.mock import patch

from modules import devices


class DevicesTestCase(unittest.TestCase):
    def test_udev(self):
        self.assertEqual(devices._udev({"A": {"B": 1}}, "a.b"), 1)
        self.assertEqual(devices._udev({"A": {"B": 1}}, "A.B"), 1)
        self.assertEqual(devices._udev({"A": {"B": 1}}, "a.c"), "n/a")
        self.assertEqual(devices._udev({"A": [1, 2]}, "a.b"), "n/a")
        self.assertEqual(devices._udev({"A": {"B": 1}}, ""), {"A": {"B": 1}})

    def test_match(self):
        self.assertTrue(devices._match({"A": {"B": 1}}, {"a.b": 1}))
        self.assertFalse(devices._match({"A": {"B": 1}}, {"a.b": 2}))
        self.assertTrue(devices._match({"A": {"B": 1}}, {"a.b": [1, 2]}))
        self.assertFalse(devices._match({"A": {"B": 1}}, {"a.b": [2, 3]}))
        self.assertTrue(devices._match({"A": {"B": [1, 2]}}, {"a.b": 1}))
        self.assertTrue(devices._match({"A": {"B": [1, 2]}}, {"a.b": [1, 3]}))
        self.assertFalse(devices._match({"A": {"B": [1, 2]}}, {"a.b": [3, 4]}))
        self.assertTrue(devices._match({"A": 1}, {}))

    @patch("modules.devices.__grains__")
    @patch("modules.devices.__salt__")
    def test_devices(self, __salt__, __grains__):
        cdrom = {
            "S": ["dvd", "cdrom"],
            "E": {"ID_BUS": "ata"},
        }
        usb = {
            "E": {"ID_BUS": "usb"},
        }
        hd = {
            "E": {"ID_BUS": "ata"},
        }

        __grains__.__getitem__.return_value = ["sda", "sdb", "sr0"]
        __salt__.__getitem__.return_value = lambda d: {
            "sda": hd,
            "sdb": usb,
            "sr0": cdrom,
        }[d]

        self.assertEqual(devices.filter_({"e.id_bus": "ata"}, {}), ["sda", "sr0"])
        self.assertEqual(devices.filter_({"e.id_bus": "usb"}, {}), ["sdb"])
        self.assertEqual(
            devices.filter_({"e.id_bus": "ata"}, {"s": ["cdrom"]}), ["sda"]
        )

    def test__hwinfo_parse_short(self):
        hwinfo = textwrap.dedent(
            """
            cpu:
                                   QEMU Virtual CPU version 2.5+, 3591 MHz
            keyboard:
              /dev/input/event0    AT Translated Set 2 keyboard
            mouse:
              /dev/input/mice      VirtualPS/2 VMware VMMouse
              /dev/input/mice      VirtualPS/2 VMware VMMouse
            graphics card:
                                   VGA compatible controller
            storage:
                                   Floppy disk controller
                                   Red Hat Qemu virtual machine
            network:
              ens3                 Virtio Ethernet Card 0
            network interface:
              lo                   Loopback network interface
              ens3                 Ethernet network interface
            disk:
              /dev/fd0             Disk
              /dev/sda             QEMU HARDDISK
            cdrom:
              /dev/sr0             QEMU DVD-ROM
            floppy:
              /dev/fd0             Floppy Disk
            bios:
                                   BIOS
            bridge:
                                   Red Hat Qemu virtual machine
                                   Red Hat Qemu virtual machine
                                   Red Hat Qemu virtual machine
            memory:
                                   Main Memory
            unknown:
                                   FPU
                                   DMA controller
                                   PIC
                                   Keyboard controller
              /dev/lp0             Parallel controller
                                   PS/2 Controller
                                   Red Hat Virtio network device
              /dev/ttyS0           16550A
        """
        )
        self.assertEqual(
            devices._hwinfo_parse_short(hwinfo),
            {
                "cpu": {0: "QEMU Virtual CPU version 2.5+, 3591 MHz"},
                "keyboard": {"/dev/input/event0": "AT Translated Set 2 keyboard"},
                "mouse": {"/dev/input/mice": "VirtualPS/2 VMware VMMouse"},
                "graphics card": {0: "VGA compatible controller"},
                "storage": {
                    0: "Floppy disk controller",
                    1: "Red Hat Qemu virtual machine",
                },
                "network": {"ens3": "Virtio Ethernet Card 0"},
                "network interface": {
                    "lo": "Loopback network interface",
                    "ens3": "Ethernet network interface",
                },
                "disk": {"/dev/fd0": "Disk", "/dev/sda": "QEMU HARDDISK"},
                "cdrom": {"/dev/sr0": "QEMU DVD-ROM"},
                "floppy": {"/dev/fd0": "Floppy Disk"},
                "bios": {0: "BIOS"},
                "bridge": {
                    0: "Red Hat Qemu virtual machine",
                    1: "Red Hat Qemu virtual machine",
                    2: "Red Hat Qemu virtual machine",
                },
                "memory": {0: "Main Memory"},
                "unknown": {
                    0: "FPU",
                    1: "DMA controller",
                    2: "PIC",
                    3: "Keyboard controller",
                    "/dev/lp0": "Parallel controller",
                    4: "PS/2 Controller",
                    5: "Red Hat Virtio network device",
                    "/dev/ttyS0": "16550A",
                },
            },
        )

    def test__hwinfo_parse_full_floppy(self):
        hwinfo = textwrap.dedent(
            """
            01: None 00.0: 0102 Floppy disk controller
              [Created at floppy.112]
              Unique ID: rdCR.3wRL2_g4d2B
              Hardware Class: storage
              Model: "Floppy disk controller"
              I/O Port: 0x3f2 (rw)
              I/O Ports: 0x3f4-0x3f5 (rw)
              I/O Port: 0x3f7 (rw)
              DMA: 2
              Config Status: cfg=new, avail=yes, need=no, active=unknown

            02: Floppy 00.0: 10603 Floppy Disk
              [Created at floppy.127]
              Unique ID: sPPV.oZ89vuho4Y3
              Parent ID: rdCR.3wRL2_g4d2B
              Hardware Class: floppy
              Model: "Floppy Disk"
              Device File: /dev/fd0
              Size: 3.5 ''
              Config Status: cfg=new, avail=yes, need=no, active=unknown
              Size: 5760 sectors a 512 bytes
              Capacity: 0 GB (2949120 bytes)
              Drive status: no medium
              Config Status: cfg=new, avail=yes, need=no, active=unknown
              Attached to: #1 (Floppy disk controller)
        """
        )
        self.assertEqual(
            devices._hwinfo_parse_full(hwinfo),
            {
                "01": {
                    "None 00.0": "0102 Floppy disk controller",
                    "Note": "Created at floppy.112",
                    "Unique ID": "rdCR.3wRL2_g4d2B",
                    "Hardware Class": "storage",
                    "Model": "Floppy disk controller",
                    "I/O Ports": ["0x3f2 (rw)", "0x3f4-0x3f5 (rw)", "0x3f7 (rw)"],
                    "DMA": "2",
                    "Config Status": {
                        "cfg": "new",
                        "avail": "yes",
                        "need": "no",
                        "active": "unknown",
                    },
                },
                "02": {
                    "Floppy 00.0": "10603 Floppy Disk",
                    "Note": "Created at floppy.127",
                    "Unique ID": "sPPV.oZ89vuho4Y3",
                    "Parent ID": "rdCR.3wRL2_g4d2B",
                    "Hardware Class": "floppy",
                    "Model": "Floppy Disk",
                    "Device File": "/dev/fd0",
                    "Size": ["3.5 ''", "5760 sectors a 512 bytes"],
                    "Config Status": {
                        "cfg": "new",
                        "avail": "yes",
                        "need": "no",
                        "active": "unknown",
                    },
                    "Capacity": "0 GB (2949120 bytes)",
                    "Drive status": "no medium",
                    "Attached to": {"Handle": "#1 (Floppy disk controller)"},
                },
            },
        )

    def test__hwinfo_parse_full_bios(self):
        hwinfo = textwrap.dedent(
            """
            03: None 00.0: 10105 BIOS
              [Created at bios.186]
              Unique ID: rdCR.lZF+r4EgHp4
              Hardware Class: bios
              BIOS Keyboard LED Status:
                Scroll Lock: off
                Num Lock: off
                Caps Lock: off
              Serial Port 0: 0x3f8
              Parallel Port 0: 0x378
              Base Memory: 639 kB
              PnP BIOS: @@@0000
              MP spec rev 1.4 info:
                OEM id: "BOCHSCPU"
                Product id: "0.1"
                1 CPUs (0 disabled)
              BIOS32 Service Directory Entry: 0xfd2b0
              SMBIOS Version: 2.8
              BIOS Info: #0
                Vendor: "SeaBIOS"
                Version: "rel-1.12.1-0-ga5cab58e9a3f-prebuilt.qemu.org"
                Date: "04/01/2014"
                Start Address: 0xe8000
                ROM Size: 64 kB
                Features: 0x04000000000000000008
              System Info: #256
                Manufacturer: "QEMU"
                Product: "Standard PC (i440FX + PIIX, 1996)"
                Version: "pc-i440fx-4.0"
                UUID: undefined
                Wake-up: 0x06 (Power Switch)
              Chassis Info: #768
                Manufacturer: "QEMU"
                Version: "pc-i440fx-4.0"
                Type: 0x01 (Other)
                Bootup State: 0x03 (Safe)
                Power Supply State: 0x03 (Safe)
                Thermal State: 0x03 (Safe)
                Security Status: 0x02 (Unknown)
              Processor Info: #1024
                Socket: "CPU 0"
                Socket Type: 0x01 (Other)
                Socket Status: Populated
                Type: 0x03 (CPU)
                Family: 0x01 (Other)
                Manufacturer: "QEMU"
                Version: "pc-i440fx-4.0"
                Processor ID: 0x078bfbfd00000663
                Status: 0x01 (Enabled)
                Max. Speed: 2000 MHz
                Current Speed: 2000 MHz
              Physical Memory Array: #4096
                Use: 0x03 (System memory)
                Location: 0x01 (Other)
                Slots: 1
                Max. Size: 1 GB
                ECC: 0x06 (Multi-bit)
              Memory Device: #4352
                Location: "DIMM 0"
                Manufacturer: "QEMU"
                Memory Array: #4096
                Form Factor: 0x09 (DIMM)
                Type: 0x07 (RAM)
                Type Detail: 0x0002 (Other)
                Data Width: 0 bits
                 Size: 1 GB
              Memory Array Mapping: #4864
                Memory Array: #4096
                Partition Width: 1
                Start Address: 0x00000000
                End Address: 0x40000000
              Type 32 Record: #8192
                Data 00: 20 0b 00 20 00 00 00 00 00 00 00
              Config Status: cfg=new, avail=yes, need=no, active=unknown
        """
        )
        self.assertEqual(
            devices._hwinfo_parse_full(hwinfo),
            {
                "03": {
                    "None 00.0": "10105 BIOS",
                    "Note": "Created at bios.186",
                    "Unique ID": "rdCR.lZF+r4EgHp4",
                    "Hardware Class": "bios",
                    "BIOS Keyboard LED Status": {
                        "Scroll Lock": "off",
                        "Num Lock": "off",
                        "Caps Lock": "off",
                    },
                    "Serial Port 0": "0x3f8",
                    "Parallel Port 0": "0x378",
                    "Base Memory": "639 kB",
                    "PnP BIOS": "@@@0000",
                    "MP spec rev 1.4 info": {
                        "OEM id": "BOCHSCPU",
                        "Product id": "0.1",
                        "Note": "1 CPUs (0 disabled)",
                    },
                    "BIOS32 Service Directory Entry": "0xfd2b0",
                    "SMBIOS Version": "2.8",
                    "BIOS Info": {
                        "Handle": "#0",
                        "Vendor": "SeaBIOS",
                        "Version": "rel-1.12.1-0-ga5cab58e9a3f-prebuilt.qemu.org",
                        "Date": "04/01/2014",
                        "Start Address": "0xe8000",
                        "ROM Size": "64 kB",
                        "Features": ["0x04000000000000000008"],
                    },
                    "System Info": {
                        "Handle": "#256",
                        "Manufacturer": "QEMU",
                        "Product": "Standard PC (i440FX + PIIX, 1996)",
                        "Version": "pc-i440fx-4.0",
                        "UUID": "undefined",
                        "Wake-up": "0x06 (Power Switch)",
                    },
                    "Chassis Info": {
                        "Handle": "#768",
                        "Manufacturer": "QEMU",
                        "Version": "pc-i440fx-4.0",
                        "Type": "0x01 (Other)",
                        "Bootup State": "0x03 (Safe)",
                        "Power Supply State": "0x03 (Safe)",
                        "Thermal State": "0x03 (Safe)",
                        "Security Status": "0x02 (Unknown)",
                    },
                    "Processor Info": {
                        "Handle": "#1024",
                        "Socket": "CPU 0",
                        "Socket Type": "0x01 (Other)",
                        "Socket Status": "Populated",
                        "Type": "0x03 (CPU)",
                        "Family": "0x01 (Other)",
                        "Manufacturer": "QEMU",
                        "Version": "pc-i440fx-4.0",
                        "Processor ID": "0x078bfbfd00000663",
                        "Status": "0x01 (Enabled)",
                        "Max. Speed": "2000 MHz",
                        "Current Speed": "2000 MHz",
                    },
                    "Physical Memory Array": {
                        "Handle": "#4096",
                        "Use": "0x03 (System memory)",
                        "Location": "0x01 (Other)",
                        "Slots": "1",
                        "Max. Size": "1 GB",
                        "ECC": "0x06 (Multi-bit)",
                    },
                    "Memory Device": {
                        "Handle": "#4352",
                        "Location": "DIMM 0",
                        "Manufacturer": "QEMU",
                        "Memory Array": {"Handle": "#4096"},
                        "Form Factor": "0x09 (DIMM)",
                        "Type": "0x07 (RAM)",
                        "Type Detail": "0x0002 (Other)",
                        "Data Width": "0 bits",
                        "Size": "1 GB",
                    },
                    "Memory Array Mapping": {
                        "Handle": "#4864",
                        "Memory Array": {"Handle": "#4096"},
                        "Partition Width": "1",
                        "Start Address": "0x00000000",
                        "End Address": "0x40000000",
                    },
                    "Type 32 Record": {
                        "Handle": "#8192",
                        "Data 00": "20 0b 00 20 00 00 00 00 00 00 00",
                    },
                    "Config Status": {
                        "cfg": "new",
                        "avail": "yes",
                        "need": "no",
                        "active": "unknown",
                    },
                },
            },
        )

    def test__hwinfo_parse_full_system(self):
        hwinfo = textwrap.dedent(
            """
            04: None 00.0: 10107 System
              [Created at sys.64]
              Unique ID: rdCR.n_7QNeEnh23
              Hardware Class: system
              Model: "System"
              Formfactor: "desktop"
              Driver Info #0:
                Driver Status: thermal,fan are not active
                Driver Activation Cmd: "modprobe thermal; modprobe fan"
              Config Status: cfg=new, avail=yes, need=no, active=unknown
        """
        )
        self.assertEqual(
            devices._hwinfo_parse_full(hwinfo),
            {
                "04": {
                    "None 00.0": "10107 System",
                    "Note": "Created at sys.64",
                    "Unique ID": "rdCR.n_7QNeEnh23",
                    "Hardware Class": "system",
                    "Model": "System",
                    "Formfactor": "desktop",
                    "Driver Info #0": {
                        "Driver Status": "thermal,fan are not active",
                        "Driver Activation Cmd": "modprobe thermal; modprobe fan",
                    },
                    "Config Status": {
                        "cfg": "new",
                        "avail": "yes",
                        "need": "no",
                        "active": "unknown",
                    },
                },
            },
        )

    def test__hwinfo_parse_full_unknown(self):
        hwinfo = textwrap.dedent(
            """
            05: None 00.0: 10104 FPU
              [Created at misc.191]
              Unique ID: rdCR.EMpH5pjcahD
              Hardware Class: unknown
              Model: "FPU"
              I/O Ports: 0xf0-0xff (rw)
              Config Status: cfg=new, avail=yes, need=no, active=unknown

            06: None 00.0: 0801 DMA controller (8237)
              [Created at misc.205]
              Unique ID: rdCR.f5u1ucRm+H9
              Hardware Class: unknown
              Model: "DMA controller"
              I/O Ports: 0x00-0xcf7 (rw)
              I/O Ports: 0xc0-0xdf (rw)
              I/O Ports: 0x80-0x8f (rw)
              DMA: 4
              Config Status: cfg=new, avail=yes, need=no, active=unknown

            07: None 00.0: 0800 PIC (8259)
              [Created at misc.218]
              Unique ID: rdCR.8uRK7LxiIA2
              Hardware Class: unknown
              Model: "PIC"
              I/O Ports: 0x20-0x21 (rw)
              I/O Ports: 0xa0-0xa1 (rw)
              Config Status: cfg=new, avail=yes, need=no, active=unknown

            08: None 00.0: 0900 Keyboard controller
              [Created at misc.250]
              Unique ID: rdCR.9N+EecqykME
              Hardware Class: unknown
              Model: "Keyboard controller"
              I/O Port: 0x60 (rw)
              I/O Port: 0x64 (rw)
              Config Status: cfg=new, avail=yes, need=no, active=unknown

            09: None 00.0: 0701 Parallel controller (SPP)
              [Created at misc.261]
              Unique ID: YMnp.ecK7NLYWZ5D
              Hardware Class: unknown
              Model: "Parallel controller"
              Device File: /dev/lp0
              I/O Ports: 0x378-0x37a (rw)
              I/O Ports: 0x37b-0x37f (rw)
              Config Status: cfg=new, avail=yes, need=no, active=unknown

            10: None 00.0: 10400 PS/2 Controller
              [Created at misc.303]
              Unique ID: rdCR.DziBbWO85o5
              Hardware Class: unknown
              Model: "PS/2 Controller"
              Config Status: cfg=new, avail=yes, need=no, active=unknown
        """
        )
        self.assertEqual(
            devices._hwinfo_parse_full(hwinfo),
            {
                "05": {
                    "None 00.0": "10104 FPU",
                    "Note": "Created at misc.191",
                    "Unique ID": "rdCR.EMpH5pjcahD",
                    "Hardware Class": "unknown",
                    "Model": "FPU",
                    "I/O Ports": "0xf0-0xff (rw)",
                    "Config Status": {
                        "cfg": "new",
                        "avail": "yes",
                        "need": "no",
                        "active": "unknown",
                    },
                },
                "06": {
                    "None 00.0": "0801 DMA controller (8237)",
                    "Note": "Created at misc.205",
                    "Unique ID": "rdCR.f5u1ucRm+H9",
                    "Hardware Class": "unknown",
                    "Model": "DMA controller",
                    "I/O Ports": [
                        "0x00-0xcf7 (rw)",
                        "0xc0-0xdf (rw)",
                        "0x80-0x8f (rw)",
                    ],
                    "DMA": "4",
                    "Config Status": {
                        "cfg": "new",
                        "avail": "yes",
                        "need": "no",
                        "active": "unknown",
                    },
                },
                "07": {
                    "None 00.0": "0800 PIC (8259)",
                    "Note": "Created at misc.218",
                    "Unique ID": "rdCR.8uRK7LxiIA2",
                    "Hardware Class": "unknown",
                    "Model": "PIC",
                    "I/O Ports": ["0x20-0x21 (rw)", "0xa0-0xa1 (rw)"],
                    "Config Status": {
                        "cfg": "new",
                        "avail": "yes",
                        "need": "no",
                        "active": "unknown",
                    },
                },
                "08": {
                    "None 00.0": "0900 Keyboard controller",
                    "Note": "Created at misc.250",
                    "Unique ID": "rdCR.9N+EecqykME",
                    "Hardware Class": "unknown",
                    "Model": "Keyboard controller",
                    "I/O Ports": ["0x60 (rw)", "0x64 (rw)"],
                    "Config Status": {
                        "cfg": "new",
                        "avail": "yes",
                        "need": "no",
                        "active": "unknown",
                    },
                },
                "09": {
                    "None 00.0": "0701 Parallel controller (SPP)",
                    "Note": "Created at misc.261",
                    "Unique ID": "YMnp.ecK7NLYWZ5D",
                    "Hardware Class": "unknown",
                    "Model": "Parallel controller",
                    "Device File": "/dev/lp0",
                    "I/O Ports": ["0x378-0x37a (rw)", "0x37b-0x37f (rw)"],
                    "Config Status": {
                        "cfg": "new",
                        "avail": "yes",
                        "need": "no",
                        "active": "unknown",
                    },
                },
                "10": {
                    "None 00.0": "10400 PS/2 Controller",
                    "Note": "Created at misc.303",
                    "Unique ID": "rdCR.DziBbWO85o5",
                    "Hardware Class": "unknown",
                    "Model": "PS/2 Controller",
                    "Config Status": {
                        "cfg": "new",
                        "avail": "yes",
                        "need": "no",
                        "active": "unknown",
                    },
                },
            },
        )

    def test__hwinfo_parse_full_memory(self):
        hwinfo = textwrap.dedent(
            """
            12: None 00.0: 10102 Main Memory
              [Created at memory.74]
              Unique ID: rdCR.CxwsZFjVASF
              Hardware Class: memory
              Model: "Main Memory"
              Memory Range: 0x00000000-0x3cefffff (rw)
              Memory Size: 960 MB
              Config Status: cfg=new, avail=yes, need=no, active=unknown
        """
        )
        self.assertEqual(
            devices._hwinfo_parse_full(hwinfo),
            {
                "12": {
                    "None 00.0": "10102 Main Memory",
                    "Note": "Created at memory.74",
                    "Unique ID": "rdCR.CxwsZFjVASF",
                    "Hardware Class": "memory",
                    "Model": "Main Memory",
                    "Memory Range": "0x00000000-0x3cefffff (rw)",
                    "Memory Size": "960 MB",
                    "Config Status": {
                        "cfg": "new",
                        "avail": "yes",
                        "need": "no",
                        "active": "unknown",
                    },
                },
            },
        )

    def test__hwinfo_parse_full_bridge(self):
        hwinfo = textwrap.dedent(
            """
            13: PCI 01.0: 0601 ISA bridge
              [Created at pci.386]
              Unique ID: vSkL.ucdhKwLeeAA
              SysFS ID: /devices/pci0000:00/0000:00:01.0
              SysFS BusID: 0000:00:01.0
              Hardware Class: bridge
              Model: "Red Hat Qemu virtual machine"
              Vendor: pci 0x8086 "Intel Corporation"
              Device: pci 0x7000 "82371SB PIIX3 ISA [Natoma/Triton II]"
              SubVendor: pci 0x1af4 "Red Hat, Inc."
              SubDevice: pci 0x1100 "Qemu virtual machine"
              Module Alias: "pci:v00008086d00007000sv00001AF4sd00001100bc06sc01i00"
              Config Status: cfg=new, avail=yes, need=no, active=unknown

            14: PCI 00.0: 0600 Host bridge
              [Created at pci.386]
              Unique ID: qLht.YeL3TKDjrxE
              SysFS ID: /devices/pci0000:00/0000:00:00.0
              SysFS BusID: 0000:00:00.0
              Hardware Class: bridge
              Model: "Red Hat Qemu virtual machine"
              Vendor: pci 0x8086 "Intel Corporation"
              Device: pci 0x1237 "440FX - 82441FX PMC [Natoma]"
              SubVendor: pci 0x1af4 "Red Hat, Inc."
              SubDevice: pci 0x1100 "Qemu virtual machine"
              Revision: 0x02
              Module Alias: "pci:v00008086d00001237sv00001AF4sd00001100bc06sc00i00"
              Config Status: cfg=new, avail=yes, need=no, active=unknown

            15: PCI 01.3: 0680 Bridge
              [Created at pci.386]
              Unique ID: VRCs.M9Cc8lcQjE2
              SysFS ID: /devices/pci0000:00/0000:00:01.3
              SysFS BusID: 0000:00:01.3
              Hardware Class: bridge
              Model: "Red Hat Qemu virtual machine"
              Vendor: pci 0x8086 "Intel Corporation"
              Device: pci 0x7113 "82371AB/EB/MB PIIX4 ACPI"
              SubVendor: pci 0x1af4 "Red Hat, Inc."
              SubDevice: pci 0x1100 "Qemu virtual machine"
              Revision: 0x03
              Driver: "piix4_smbus"
              Driver Modules: "i2c_piix4"
              IRQ: 9 (no events)
              Module Alias: "pci:v00008086d00007113sv00001AF4sd00001100bc06sc80i00"
              Driver Info #0:
                Driver Status: i2c_piix4 is active
                Driver Activation Cmd: "modprobe i2c_piix4"
              Config Status: cfg=new, avail=yes, need=no, active=unknown
        """
        )
        self.assertEqual(
            devices._hwinfo_parse_full(hwinfo),
            {
                "13": {
                    "PCI 01.0": "0601 ISA bridge",
                    "Note": "Created at pci.386",
                    "Unique ID": "vSkL.ucdhKwLeeAA",
                    "SysFS ID": "/devices/pci0000:00/0000:00:01.0",
                    "SysFS BusID": "0000:00:01.0",
                    "Hardware Class": "bridge",
                    "Model": "Red Hat Qemu virtual machine",
                    "Vendor": 'pci 0x8086 "Intel Corporation"',
                    "Device": 'pci 0x7000 "82371SB PIIX3 ISA [Natoma/Triton II]"',
                    "SubVendor": 'pci 0x1af4 "Red Hat, Inc."',
                    "SubDevice": 'pci 0x1100 "Qemu virtual machine"',
                    "Module Alias": "pci:v00008086d00007000sv00001AF4sd00001100bc06sc01i00",
                    "Config Status": {
                        "cfg": "new",
                        "avail": "yes",
                        "need": "no",
                        "active": "unknown",
                    },
                },
                "14": {
                    "PCI 00.0": "0600 Host bridge",
                    "Note": "Created at pci.386",
                    "Unique ID": "qLht.YeL3TKDjrxE",
                    "SysFS ID": "/devices/pci0000:00/0000:00:00.0",
                    "SysFS BusID": "0000:00:00.0",
                    "Hardware Class": "bridge",
                    "Model": "Red Hat Qemu virtual machine",
                    "Vendor": 'pci 0x8086 "Intel Corporation"',
                    "Device": 'pci 0x1237 "440FX - 82441FX PMC [Natoma]"',
                    "SubVendor": 'pci 0x1af4 "Red Hat, Inc."',
                    "SubDevice": 'pci 0x1100 "Qemu virtual machine"',
                    "Revision": "0x02",
                    "Module Alias": "pci:v00008086d00001237sv00001AF4sd00001100bc06sc00i00",
                    "Config Status": {
                        "cfg": "new",
                        "avail": "yes",
                        "need": "no",
                        "active": "unknown",
                    },
                },
                "15": {
                    "PCI 01.3": "0680 Bridge",
                    "Note": "Created at pci.386",
                    "Unique ID": "VRCs.M9Cc8lcQjE2",
                    "SysFS ID": "/devices/pci0000:00/0000:00:01.3",
                    "SysFS BusID": "0000:00:01.3",
                    "Hardware Class": "bridge",
                    "Model": "Red Hat Qemu virtual machine",
                    "Vendor": 'pci 0x8086 "Intel Corporation"',
                    "Device": 'pci 0x7113 "82371AB/EB/MB PIIX4 ACPI"',
                    "SubVendor": 'pci 0x1af4 "Red Hat, Inc."',
                    "SubDevice": 'pci 0x1100 "Qemu virtual machine"',
                    "Revision": "0x03",
                    "Driver": ["piix4_smbus"],
                    "Driver Modules": ["i2c_piix4"],
                    "IRQ": "9 (no events)",
                    "Module Alias": "pci:v00008086d00007113sv00001AF4sd00001100bc06sc80i00",
                    "Driver Info #0": {
                        "Driver Status": "i2c_piix4 is active",
                        "Driver Activation Cmd": "modprobe i2c_piix4",
                    },
                    "Config Status": {
                        "cfg": "new",
                        "avail": "yes",
                        "need": "no",
                        "active": "unknown",
                    },
                },
            },
        )

    def test__hwinfo_parse_full_ethernet(self):
        hwinfo = textwrap.dedent(
            """
            16: PCI 03.0: 0200 Ethernet controller
              [Created at pci.386]
              Unique ID: 3hqH.pkM7KXDR457
              SysFS ID: /devices/pci0000:00/0000:00:03.0
              SysFS BusID: 0000:00:03.0
              Hardware Class: unknown
              Model: "Red Hat Virtio network device"
              Vendor: pci 0x1af4 "Red Hat, Inc."
              Device: pci 0x1000 "Virtio network device"
              SubVendor: pci 0x1af4 "Red Hat, Inc."
              SubDevice: pci 0x0001 
              Driver: "virtio-pci"
              Driver Modules: "virtio_pci"
              I/O Ports: 0xc000-0xc01f (rw)
              Memory Range: 0xfebd1000-0xfebd1fff (rw,non-prefetchable)
              Memory Range: 0xfe000000-0xfe003fff (ro,non-prefetchable)
              Memory Range: 0xfeb80000-0xfebbffff (ro,non-prefetchable,disabled)
              IRQ: 11 (no events)
              Module Alias: "pci:v00001AF4d00001000sv00001AF4sd00000001bc02sc00i00"
              Config Status: cfg=new, avail=yes, need=no, active=unknown
        """
        )
        self.assertEqual(
            devices._hwinfo_parse_full(hwinfo),
            {
                "16": {
                    "PCI 03.0": "0200 Ethernet controller",
                    "Note": "Created at pci.386",
                    "Unique ID": "3hqH.pkM7KXDR457",
                    "SysFS ID": "/devices/pci0000:00/0000:00:03.0",
                    "SysFS BusID": "0000:00:03.0",
                    "Hardware Class": "unknown",
                    "Model": "Red Hat Virtio network device",
                    "Vendor": 'pci 0x1af4 "Red Hat, Inc."',
                    "Device": 'pci 0x1000 "Virtio network device"',
                    "SubVendor": 'pci 0x1af4 "Red Hat, Inc."',
                    "SubDevice": "pci 0x0001",
                    "Driver": ["virtio-pci"],
                    "Driver Modules": ["virtio_pci"],
                    "I/O Ports": "0xc000-0xc01f (rw)",
                    "Memory Range": [
                        "0xfebd1000-0xfebd1fff (rw,non-prefetchable)",
                        "0xfe000000-0xfe003fff (ro,non-prefetchable)",
                        "0xfeb80000-0xfebbffff (ro,non-prefetchable,disabled)",
                    ],
                    "IRQ": "11 (no events)",
                    "Module Alias": "pci:v00001AF4d00001000sv00001AF4sd00000001bc02sc00i00",
                    "Config Status": {
                        "cfg": "new",
                        "avail": "yes",
                        "need": "no",
                        "active": "unknown",
                    },
                },
            },
        )

    def test__hwinfo_parse_full_storage(self):
        hwinfo = textwrap.dedent(
            """
            17: PCI 01.1: 0101 IDE interface (ISA Compatibility mode-only controller, supports bus mts bus mastering)
              [Created at pci.386]
              Unique ID: mnDB.3sKqaxiizg6
              SysFS ID: /devices/pci0000:00/0000:00:01.1
              SysFS BusID: 0000:00:01.1
              Hardware Class: storage
              Model: "Red Hat Qemu virtual machine"
              Vendor: pci 0x8086 "Intel Corporation"
              Device: pci 0x7010 "82371SB PIIX3 IDE [Natoma/Triton II]"
              SubVendor: pci 0x1af4 "Red Hat, Inc."
              SubDevice: pci 0x1100 "Qemu virtual machine"
              Driver: "ata_piix"
              Driver Modules: "ata_piix"
              I/O Ports: 0x1f0-0x1f7 (rw)
              I/O Port: 0x3f6 (rw)
              I/O Ports: 0x170-0x177 (rw)
              I/O Port: 0x376 (rw)
              I/O Ports: 0xc020-0xc02f (rw)
              Module Alias: "pci:v00008086d00007010sv00001AF4sd00001100bc01sc01i80"
              Driver Info #0:
                Driver Status: ata_piix is active
                Driver Activation Cmd: "modprobe ata_piix"
              Driver Info #1:
                Driver Status: ata_generic is active
                Driver Activation Cmd: "modprobe ata_generic"
              Driver Info #2:
                Driver Status: pata_acpi is active
                Driver Activation Cmd: "modprobe pata_acpi"
              Config Status: cfg=new, avail=yes, need=no, active=unknown
        """
        )
        self.assertEqual(
            devices._hwinfo_parse_full(hwinfo),
            {
                "17": {
                    "PCI 01.1": "0101 IDE interface (ISA Compatibility mode-only controller, supports bus mts bus mastering)",
                    "Note": "Created at pci.386",
                    "Unique ID": "mnDB.3sKqaxiizg6",
                    "SysFS ID": "/devices/pci0000:00/0000:00:01.1",
                    "SysFS BusID": "0000:00:01.1",
                    "Hardware Class": "storage",
                    "Model": "Red Hat Qemu virtual machine",
                    "Vendor": 'pci 0x8086 "Intel Corporation"',
                    "Device": 'pci 0x7010 "82371SB PIIX3 IDE [Natoma/Triton II]"',
                    "SubVendor": 'pci 0x1af4 "Red Hat, Inc."',
                    "SubDevice": 'pci 0x1100 "Qemu virtual machine"',
                    "Driver": ["ata_piix"],
                    "Driver Modules": ["ata_piix"],
                    "I/O Ports": [
                        "0x1f0-0x1f7 (rw)",
                        "0x3f6 (rw)",
                        "0x170-0x177 (rw)",
                        "0x376 (rw)",
                        "0xc020-0xc02f (rw)",
                    ],
                    "Module Alias": "pci:v00008086d00007010sv00001AF4sd00001100bc01sc01i80",
                    "Driver Info #0": {
                        "Driver Status": "ata_piix is active",
                        "Driver Activation Cmd": "modprobe ata_piix",
                    },
                    "Driver Info #1": {
                        "Driver Status": "ata_generic is active",
                        "Driver Activation Cmd": "modprobe ata_generic",
                    },
                    "Driver Info #2": {
                        "Driver Status": "pata_acpi is active",
                        "Driver Activation Cmd": "modprobe pata_acpi",
                    },
                    "Config Status": {
                        "cfg": "new",
                        "avail": "yes",
                        "need": "no",
                        "active": "unknown",
                    },
                },
            },
        )

    def test__hwinfo_parse_full_video(self):
        hwinfo = textwrap.dedent(
            """
            18: PCI 02.0: 0300 VGA compatible controller (VGA)
              [Created at pci.386]
              Unique ID: _Znp.WspiKb87LiA
              SysFS ID: /devices/pci0000:00/0000:00:02.0
              SysFS BusID: 0000:00:02.0
              Hardware Class: graphics card
              Model: "VGA compatible controller"
              Vendor: pci 0x1234
              Device: pci 0x1111
              SubVendor: pci 0x1af4 "Red Hat, Inc."
              SubDevice: pci 0x1100
              Revision: 0x02
              Driver: "bochs-drm"
              Driver Modules: "bochs_drm"
              Memory Range: 0xfd000000-0xfdffffff (ro,non-prefetchable)
              Memory Range: 0xfebd0000-0xfebd0fff (rw,non-prefetchable)
              Memory Range: 0x000c0000-0x000dffff (rw,non-prefetchable,disabled)
              Module Alias: "pci:v00001234d00001111sv00001AF4sd00001100bc03sc00i00"
              Driver Info #0:
                Driver Status: bochs_drm is active
                Driver Activation Cmd: "modprobe bochs_drm"
              Config Status: cfg=new, avail=yes, need=no, active=unknown
        """
        )
        self.assertEqual(
            devices._hwinfo_parse_full(hwinfo),
            {
                "18": {
                    "PCI 02.0": "0300 VGA compatible controller (VGA)",
                    "Note": "Created at pci.386",
                    "Unique ID": "_Znp.WspiKb87LiA",
                    "SysFS ID": "/devices/pci0000:00/0000:00:02.0",
                    "SysFS BusID": "0000:00:02.0",
                    "Hardware Class": "graphics card",
                    "Model": "VGA compatible controller",
                    "Vendor": "pci 0x1234",
                    "Device": "pci 0x1111",
                    "SubVendor": 'pci 0x1af4 "Red Hat, Inc."',
                    "SubDevice": "pci 0x1100",
                    "Revision": "0x02",
                    "Driver": ["bochs-drm"],
                    "Driver Modules": ["bochs_drm"],
                    "Memory Range": [
                        "0xfd000000-0xfdffffff (ro,non-prefetchable)",
                        "0xfebd0000-0xfebd0fff (rw,non-prefetchable)",
                        "0x000c0000-0x000dffff (rw,non-prefetchable,disabled)",
                    ],
                    "Module Alias": "pci:v00001234d00001111sv00001AF4sd00001100bc03sc00i00",
                    "Driver Info #0": {
                        "Driver Status": "bochs_drm is active",
                        "Driver Activation Cmd": "modprobe bochs_drm",
                    },
                    "Config Status": {
                        "cfg": "new",
                        "avail": "yes",
                        "need": "no",
                        "active": "unknown",
                    },
                },
            },
        )

    def test__hwinfo_parse_full_network(self):
        hwinfo = textwrap.dedent(
            """
            19: Virtio 00.0: 0200 Ethernet controller
              [Created at pci.1679]
              Unique ID: vWuh.VIRhsc57kTD
              Parent ID: 3hqH.pkM7KXDR457
              SysFS ID: /devices/pci0000:00/0000:00:03.0/virtio0
              SysFS BusID: virtio0
              Hardware Class: network
              Model: "Virtio Ethernet Card 0"
              Vendor: int 0x6014 "Virtio"
              Device: int 0x0001 "Ethernet Card 0"
              Driver: "virtio_net"
              Driver Modules: "virtio_net"
              Device File: ens3
              HW Address: 52:54:00:12:34:56
              Permanent HW Address: 52:54:00:12:34:56
              Link detected: yes
              Module Alias: "virtio:d00000001v00001AF4"
              Driver Info #0:
                Driver Status: virtio_net is active
                Driver Activation Cmd: "modprobe virtio_net"
              Config Status: cfg=new, avail=yes, need=no, active=unknown
              Attached to: #16 (Ethernet controller)

            20: None 00.0: 0700 Serial controller (16550)
              [Created at serial.74]
              Unique ID: S_Uw.3fyvFV+mbWD
              Hardware Class: unknown
              Model: "16550A"
              Device: "16550A"
              Device File: /dev/ttyS0
              Tags: mouse, modem, braille
              I/O Ports: 0x3f8-0x3ff (rw)
              IRQ: 4 (55234 events)
              Config Status: cfg=new, avail=yes, need=no, active=unknown
        """
        )
        self.assertEqual(
            devices._hwinfo_parse_full(hwinfo),
            {
                "19": {
                    "Virtio 00.0": "0200 Ethernet controller",
                    "Note": "Created at pci.1679",
                    "Unique ID": "vWuh.VIRhsc57kTD",
                    "Parent ID": "3hqH.pkM7KXDR457",
                    "SysFS ID": "/devices/pci0000:00/0000:00:03.0/virtio0",
                    "SysFS BusID": "virtio0",
                    "Hardware Class": "network",
                    "Model": "Virtio Ethernet Card 0",
                    "Vendor": 'int 0x6014 "Virtio"',
                    "Device": 'int 0x0001 "Ethernet Card 0"',
                    "Driver": ["virtio_net"],
                    "Driver Modules": ["virtio_net"],
                    "Device File": "ens3",
                    "HW Address": "52:54:00:12:34:56",
                    "Permanent HW Address": "52:54:00:12:34:56",
                    "Link detected": "yes",
                    "Module Alias": "virtio:d00000001v00001AF4",
                    "Driver Info #0": {
                        "Driver Status": "virtio_net is active",
                        "Driver Activation Cmd": "modprobe virtio_net",
                    },
                    "Config Status": {
                        "cfg": "new",
                        "avail": "yes",
                        "need": "no",
                        "active": "unknown",
                    },
                    "Attached to": {"Handle": "#16 (Ethernet controller)"},
                },
                "20": {
                    "None 00.0": "0700 Serial controller (16550)",
                    "Note": "Created at serial.74",
                    "Unique ID": "S_Uw.3fyvFV+mbWD",
                    "Hardware Class": "unknown",
                    "Model": "16550A",
                    "Device": "16550A",
                    "Device File": "/dev/ttyS0",
                    "Tags": ["mouse", "modem", "braille"],
                    "I/O Ports": "0x3f8-0x3ff (rw)",
                    "IRQ": "4 (55234 events)",
                    "Config Status": {
                        "cfg": "new",
                        "avail": "yes",
                        "need": "no",
                        "active": "unknown",
                    },
                },
            },
        )

    def test__hwinfo_parse_full_disk(self):
        hwinfo = textwrap.dedent(
            """
            21: SCSI 100.0: 10602 CD-ROM (DVD)
              [Created at block.249]
              Unique ID: KD9E.53N0UD4ozwD
              Parent ID: mnDB.3sKqaxiizg6
              SysFS ID: /class/block/sr0
              SysFS BusID: 1:0:0:0
              SysFS Device Link: /devices/pci0000:00/0000:00:01.1/ata2/host1/target1:0:0/1:0:0:0
              Hardware Class: cdrom
              Model: "QEMU DVD-ROM"
              Vendor: "QEMU"
              Device: "QEMU DVD-ROM"
              Revision: "2.5+"
              Driver: "ata_piix", "sr"
              Driver Modules: "ata_piix", "sr_mod"
              Device File: /dev/sr0 (/dev/sg1)
              Device Files: /dev/sr0, /dev/cdrom, /dev/dvd, /dev/disk/by-path/pci-0000:00:01.1-ata-2, /dev/disk/by-id/ata-QEMU_DVD-ROM_QM00003, /dev/disk/by-uuid/2019-08-11-11-44-39-00, /dev/disk/by-label/CDROM
              Device Number: block 11:0 (char 21:1)
              Features: DVD, MRW, MRW-W
              Config Status: cfg=new, avail=yes, need=no, active=unknown
              Attached to: #17 (IDE interface)
              Drive Speed: 4
              Volume ID: "CDROM"
              Application: "0X5228779D"
              Publisher: "SUSE LLC"
              Preparer: "KIWI - HTTPS://GITHUB.COM/OSINSIDE/KIWI"
              Creation date: "2019081111443900"
              El Torito info: platform 0, bootable
                Boot Catalog: at sector 0x00fa
                Media: none starting at sector 0x00fb
                Load: 2048 bytes

            22: None 00.0: 10600 Disk
              [Created at block.245]
              Unique ID: kwWm.Fxp0d3BezAE
              SysFS ID: /class/block/fd0
              SysFS BusID: floppy.0
              SysFS Device Link: /devices/platform/floppy.0
              Hardware Class: disk
              Model: "Disk"
              Driver: "floppy"
              Driver Modules: "floppy"
              Device File: /dev/fd0
              Device Number: block 2:0
              Size: 8 sectors a 512 bytes
              Capacity: 0 GB (4096 bytes)
              Drive status: no medium
              Config Status: cfg=new, avail=yes, need=no, active=unknown

            23: IDE 00.0: 10600 Disk
              [Created at block.245]
              Unique ID: 3OOL.W8iGvCekDp8
              Parent ID: mnDB.3sKqaxiizg6
              SysFS ID: /class/block/sda
              SysFS BusID: 0:0:0:0
              SysFS Device Link: /devices/pci0000:00/0000:00:01.1/ata1/host0/target0:0:0/0:0:0:0
              Hardware Class: disk
              Model: "QEMU HARDDISK"
              Vendor: "QEMU"
              Device: "HARDDISK"
              Revision: "2.5+"
              Serial ID: "QM00001"
              Driver: "ata_piix", "sd"
              Driver Modules: "ata_piix"
              Device File: /dev/sda
              Device Files: /dev/sda, /dev/disk/by-path/pci-0000:00:01.1-ata-1, /dev/disk/by-id/ata-QEMU_HARDDISK_QM00001
              Device Number: block 8:0-8:15
              Geometry (Logical): CHS 3133/255/63
              Size: 50331648 sectors a 512 bytes
              Capacity: 24 GB (25769803776 bytes)
              Config Status: cfg=new, avail=yes, need=no, active=unknown
              Attached to: #17 (IDE interface)
        """
        )
        self.assertEqual(
            devices._hwinfo_parse_full(hwinfo),
            {
                "21": {
                    "SCSI 100.0": "10602 CD-ROM (DVD)",
                    "Note": "Created at block.249",
                    "Unique ID": "KD9E.53N0UD4ozwD",
                    "Parent ID": "mnDB.3sKqaxiizg6",
                    "SysFS ID": "/class/block/sr0",
                    "SysFS BusID": "1:0:0:0",
                    "SysFS Device Link": "/devices/pci0000:00/0000:00:01.1/ata2/host1/target1:0:0/1:0:0:0",
                    "Hardware Class": "cdrom",
                    "Model": "QEMU DVD-ROM",
                    "Vendor": "QEMU",
                    "Device": "QEMU DVD-ROM",
                    "Revision": "2.5+",
                    "Driver": ["ata_piix", "sr"],
                    "Driver Modules": ["ata_piix", "sr_mod"],
                    "Device File": "/dev/sr0 (/dev/sg1)",
                    "Device Files": [
                        "/dev/sr0",
                        "/dev/cdrom",
                        "/dev/dvd",
                        "/dev/disk/by-path/pci-0000:00:01.1-ata-2",
                        "/dev/disk/by-id/ata-QEMU_DVD-ROM_QM00003",
                        "/dev/disk/by-uuid/2019-08-11-11-44-39-00",
                        "/dev/disk/by-label/CDROM",
                    ],
                    "Device Number": "block 11:0 (char 21:1)",
                    "Features": ["DVD", "MRW", "MRW-W"],
                    "Config Status": {
                        "cfg": "new",
                        "avail": "yes",
                        "need": "no",
                        "active": "unknown",
                    },
                    "Attached to": {"Handle": "#17 (IDE interface)"},
                    "Drive Speed": "4",
                    "Volume ID": "CDROM",
                    "Application": "0X5228779D",
                    "Publisher": "SUSE LLC",
                    "Preparer": "KIWI - HTTPS://GITHUB.COM/OSINSIDE/KIWI",
                    "Creation date": "2019081111443900",
                    "El Torito info": {
                        "platform": "0",
                        "bootable": "yes",
                        "Boot Catalog": "at sector 0x00fa",
                        "Media": "none starting at sector 0x00fb",
                        "Load": "2048 bytes",
                    },
                },
                "22": {
                    "None 00.0": "10600 Disk",
                    "Note": "Created at block.245",
                    "Unique ID": "kwWm.Fxp0d3BezAE",
                    "SysFS ID": "/class/block/fd0",
                    "SysFS BusID": "floppy.0",
                    "SysFS Device Link": "/devices/platform/floppy.0",
                    "Hardware Class": "disk",
                    "Model": "Disk",
                    "Driver": ["floppy"],
                    "Driver Modules": ["floppy"],
                    "Device File": "/dev/fd0",
                    "Device Number": "block 2:0",
                    "Size": "8 sectors a 512 bytes",
                    "Capacity": "0 GB (4096 bytes)",
                    "Drive status": "no medium",
                    "Config Status": {
                        "cfg": "new",
                        "avail": "yes",
                        "need": "no",
                        "active": "unknown",
                    },
                },
                "23": {
                    "IDE 00.0": "10600 Disk",
                    "Note": "Created at block.245",
                    "Unique ID": "3OOL.W8iGvCekDp8",
                    "Parent ID": "mnDB.3sKqaxiizg6",
                    "SysFS ID": "/class/block/sda",
                    "SysFS BusID": "0:0:0:0",
                    "SysFS Device Link": "/devices/pci0000:00/0000:00:01.1/ata1/host0/target0:0:0/0:0:0:0",
                    "Hardware Class": "disk",
                    "Model": "QEMU HARDDISK",
                    "Vendor": "QEMU",
                    "Device": "HARDDISK",
                    "Revision": "2.5+",
                    "Serial ID": "QM00001",
                    "Driver": ["ata_piix", "sd"],
                    "Driver Modules": ["ata_piix"],
                    "Device File": "/dev/sda",
                    "Device Files": [
                        "/dev/sda",
                        "/dev/disk/by-path/pci-0000:00:01.1-ata-1",
                        "/dev/disk/by-id/ata-QEMU_HARDDISK_QM00001",
                    ],
                    "Device Number": "block 8:0-8:15",
                    "Geometry (Logical)": "CHS 3133/255/63",
                    "Size": "50331648 sectors a 512 bytes",
                    "Capacity": "24 GB (25769803776 bytes)",
                    "Config Status": {
                        "cfg": "new",
                        "avail": "yes",
                        "need": "no",
                        "active": "unknown",
                    },
                    "Attached to": {"Handle": "#17 (IDE interface)"},
                },
            },
        )

    def test__hwinfo_parse_full_keyboard(self):
        hwinfo = textwrap.dedent(
            """
            24: PS/2 00.0: 10800 Keyboard
              [Created at input.226]
              Unique ID: nLyy.+49ps10DtUF
              Hardware Class: keyboard
              Model: "AT Translated Set 2 keyboard"
              Vendor: 0x0001
              Device: 0x0001 "AT Translated Set 2 keyboard"
              Compatible to: int 0x0211 0x0001
              Device File: /dev/input/event0
              Device Files: /dev/input/event0, /dev/input/by-path/platform-i8042-serio-0-event-kbd
              Device Number: char 13:64
              Driver Info #0:
                XkbRules: xfree86
                XkbModel: pc104
              Config Status: cfg=new, avail=yes, need=no, active=unknown
        """
        )
        self.assertEqual(
            devices._hwinfo_parse_full(hwinfo),
            {
                "24": {
                    "PS/2 00.0": "10800 Keyboard",
                    "Note": "Created at input.226",
                    "Unique ID": "nLyy.+49ps10DtUF",
                    "Hardware Class": "keyboard",
                    "Model": "AT Translated Set 2 keyboard",
                    "Vendor": "0x0001",
                    "Device": '0x0001 "AT Translated Set 2 keyboard"',
                    "Compatible to": "int 0x0211 0x0001",
                    "Device File": "/dev/input/event0",
                    "Device Files": [
                        "/dev/input/event0",
                        "/dev/input/by-path/platform-i8042-serio-0-event-kbd",
                    ],
                    "Device Number": "char 13:64",
                    "Driver Info #0": {"XkbRules": "xfree86", "XkbModel": "pc104"},
                    "Config Status": {
                        "cfg": "new",
                        "avail": "yes",
                        "need": "no",
                        "active": "unknown",
                    },
                },
            },
        )

    def test__hwinfo_parse_full_mouse(self):
        hwinfo = textwrap.dedent(
            """
            25: PS/2 00.0: 10500 PS/2 Mouse
              [Created at input.249]
              Unique ID: AH6Q.mYF0pYoTCW7
              Hardware Class: mouse
              Model: "VirtualPS/2 VMware VMMouse"
              Vendor: 0x0002
              Device: 0x0013 "VirtualPS/2 VMware VMMouse"
              Compatible to: int 0x0210 0x0003
              Device File: /dev/input/mice (/dev/input/mouse0)
              Device Files: /dev/input/mice, /dev/input/mouse0, /dev/input/event1, /dev/input/by-path/platform-i8042-serio-1-event-mouse, /dev/input/by-path/platform-i8042-serio-1-mouse
              Device Number: char 13:63 (char 13:32)
              Driver Info #0:
                Buttons: 3
                Wheels: 0
                XFree86 Protocol: explorerps/2
                GPM Protocol: exps2
              Config Status: cfg=new, avail=yes, need=no, active=unknown

            26: PS/2 00.0: 10500 PS/2 Mouse
              [Created at input.249]
              Unique ID: AH6Q.++hSeDccb2F
              Hardware Class: mouse
              Model: "VirtualPS/2 VMware VMMouse"
              Vendor: 0x0002
              Device: 0x0013 "VirtualPS/2 VMware VMMouse"
              Compatible to: int 0x0210 0x0012
              Device File: /dev/input/mice (/dev/input/mouse1)
              Device Files: /dev/input/mice, /dev/input/mouse1, /dev/input/event2
              Device Number: char 13:63 (char 13:33)
              Driver Info #0:
                Buttons: 2
                Wheels: 1
                XFree86 Protocol: explorerps/2
                GPM Protocol: exps2
              Config Status: cfg=new, avail=yes, need=no, active=unknown
        """
        )
        self.assertEqual(
            devices._hwinfo_parse_full(hwinfo),
            {
                "25": {
                    "PS/2 00.0": "10500 PS/2 Mouse",
                    "Note": "Created at input.249",
                    "Unique ID": "AH6Q.mYF0pYoTCW7",
                    "Hardware Class": "mouse",
                    "Model": "VirtualPS/2 VMware VMMouse",
                    "Vendor": "0x0002",
                    "Device": '0x0013 "VirtualPS/2 VMware VMMouse"',
                    "Compatible to": "int 0x0210 0x0003",
                    "Device File": "/dev/input/mice (/dev/input/mouse0)",
                    "Device Files": [
                        "/dev/input/mice",
                        "/dev/input/mouse0",
                        "/dev/input/event1",
                        "/dev/input/by-path/platform-i8042-serio-1-event-mouse",
                        "/dev/input/by-path/platform-i8042-serio-1-mouse",
                    ],
                    "Device Number": "char 13:63 (char 13:32)",
                    "Driver Info #0": {
                        "Buttons": "3",
                        "Wheels": "0",
                        "XFree86 Protocol": "explorerps/2",
                        "GPM Protocol": "exps2",
                    },
                    "Config Status": {
                        "cfg": "new",
                        "avail": "yes",
                        "need": "no",
                        "active": "unknown",
                    },
                },
                "26": {
                    "PS/2 00.0": "10500 PS/2 Mouse",
                    "Note": "Created at input.249",
                    "Unique ID": "AH6Q.++hSeDccb2F",
                    "Hardware Class": "mouse",
                    "Model": "VirtualPS/2 VMware VMMouse",
                    "Vendor": "0x0002",
                    "Device": '0x0013 "VirtualPS/2 VMware VMMouse"',
                    "Compatible to": "int 0x0210 0x0012",
                    "Device File": "/dev/input/mice (/dev/input/mouse1)",
                    "Device Files": [
                        "/dev/input/mice",
                        "/dev/input/mouse1",
                        "/dev/input/event2",
                    ],
                    "Device Number": "char 13:63 (char 13:33)",
                    "Driver Info #0": {
                        "Buttons": "2",
                        "Wheels": "1",
                        "XFree86 Protocol": "explorerps/2",
                        "GPM Protocol": "exps2",
                    },
                    "Config Status": {
                        "cfg": "new",
                        "avail": "yes",
                        "need": "no",
                        "active": "unknown",
                    },
                },
            },
        )

    def test__hwinfo_parse_full_cpu(self):
        hwinfo = textwrap.dedent(
            """
            27: None 00.0: 10103 CPU
              [Created at cpu.462]
              Unique ID: rdCR.j8NaKXDZtZ6
              Hardware Class: cpu
              Arch: X86-64
              Vendor: "GenuineIntel"
              Model: 6.6.3 "QEMU Virtual CPU version 2.5+"
              Features: fpu,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pse36,clflush,mmx,fxsr,sse,sse2,syscall,nx,lm,rep_good,nopl,xtopology,cpuid,tsc_known_freq,pni,cx16,x2apic,hypervisor,lahf_lm,cpuid_fault,pti
              Clock: 3591 MHz
              BogoMips: 7182.68
              Cache: 16384 kb
              Config Status: cfg=new, avail=yes, need=no, active=unknown
        """
        )
        self.assertEqual(
            devices._hwinfo_parse_full(hwinfo),
            {
                "27": {
                    "None 00.0": "10103 CPU",
                    "Note": "Created at cpu.462",
                    "Unique ID": "rdCR.j8NaKXDZtZ6",
                    "Hardware Class": "cpu",
                    "Arch": "X86-64",
                    "Vendor": "GenuineIntel",
                    "Model": '6.6.3 "QEMU Virtual CPU version 2.5+"',
                    "Features": [
                        "fpu",
                        "de",
                        "pse",
                        "tsc",
                        "msr",
                        "pae",
                        "mce",
                        "cx8",
                        "apic",
                        "sep",
                        "mtrr",
                        "pge",
                        "mca",
                        "cmov",
                        "pse36",
                        "clflush",
                        "mmx",
                        "fxsr",
                        "sse",
                        "sse2",
                        "syscall",
                        "nx",
                        "lm",
                        "rep_good",
                        "nopl",
                        "xtopology",
                        "cpuid",
                        "tsc_known_freq",
                        "pni",
                        "cx16",
                        "x2apic",
                        "hypervisor",
                        "lahf_lm",
                        "cpuid_fault",
                        "pti",
                    ],
                    "Clock": "3591 MHz",
                    "BogoMips": "7182.68",
                    "Cache": "16384 kb",
                    "Config Status": {
                        "cfg": "new",
                        "avail": "yes",
                        "need": "no",
                        "active": "unknown",
                    },
                },
            },
        )

    def test__hwinfo_parse_full_nic(self):
        hwinfo = textwrap.dedent(
            """
            28: None 00.0: 10700 Loopback
              [Created at net.126]
              Unique ID: ZsBS.GQNx7L4uPNA
              SysFS ID: /class/net/lo
              Hardware Class: network interface
              Model: "Loopback network interface"
              Device File: lo
              Link detected: yes
              Config Status: cfg=new, avail=yes, need=no, active=unknown

            29: None 03.0: 10701 Ethernet
              [Created at net.126]
              Unique ID: U2Mp.ndpeucax6V1
              Parent ID: vWuh.VIRhsc57kTD
              SysFS ID: /class/net/ens3
              SysFS Device Link: /devices/pci0000:00/0000:00:03.0/virtio0
              Hardware Class: network interface
              Model: "Ethernet network interface"
              Driver: "virtio_net"
              Driver Modules: "virtio_net"
              Device File: ens3
              HW Address: 52:54:00:12:34:56
              Permanent HW Address: 52:54:00:12:34:56
              Link detected: yes
              Config Status: cfg=new, avail=yes, need=no, active=unknown
              Attached to: #19 (Ethernet controller)
        """
        )
        self.assertEqual(
            devices._hwinfo_parse_full(hwinfo),
            {
                "28": {
                    "None 00.0": "10700 Loopback",
                    "Note": "Created at net.126",
                    "Unique ID": "ZsBS.GQNx7L4uPNA",
                    "SysFS ID": "/class/net/lo",
                    "Hardware Class": "network interface",
                    "Model": "Loopback network interface",
                    "Device File": "lo",
                    "Link detected": "yes",
                    "Config Status": {
                        "cfg": "new",
                        "avail": "yes",
                        "need": "no",
                        "active": "unknown",
                    },
                },
                "29": {
                    "None 03.0": "10701 Ethernet",
                    "Note": "Created at net.126",
                    "Unique ID": "U2Mp.ndpeucax6V1",
                    "Parent ID": "vWuh.VIRhsc57kTD",
                    "SysFS ID": "/class/net/ens3",
                    "SysFS Device Link": "/devices/pci0000:00/0000:00:03.0/virtio0",
                    "Hardware Class": "network interface",
                    "Model": "Ethernet network interface",
                    "Driver": ["virtio_net"],
                    "Driver Modules": ["virtio_net"],
                    "Device File": "ens3",
                    "HW Address": "52:54:00:12:34:56",
                    "Permanent HW Address": "52:54:00:12:34:56",
                    "Link detected": "yes",
                    "Config Status": {
                        "cfg": "new",
                        "avail": "yes",
                        "need": "no",
                        "active": "unknown",
                    },
                    "Attached to": {"Handle": "#19 (Ethernet controller)"},
                },
            },
        )
07070100000087000081A40000000000000000000000016130D1CF00000530000000000000000000000000000000000000003500000000yomi-0.0.1+git.1630589391.4557cfd/tests/test_disk.py# -*- coding: utf-8 -*-
#
# Author: Alberto Planas <aplanas@suse.com>
#
# Copyright 2019 SUSE LLC.
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.

import unittest

from utils import disk


class DiskTestCase(unittest.TestCase):
    def test_units(self):
        self.assertEqual(disk.units(1), (1, "MB"))
        self.assertEqual(disk.units("1"), (1, "MB"))
        self.assertEqual(disk.units("1.0"), (1, "MB"))
        self.assertEqual(disk.units("1s"), (1, "s"))
        self.assertEqual(disk.units("1.1s"), (1.1, "s"))
        self.assertRaises(disk.ParseException, disk.units, "s1")
07070100000088000081A40000000000000000000000016130D1CF0000332E000000000000000000000000000000000000003700000000yomi-0.0.1+git.1630589391.4557cfd/tests/test_images.py# -*- coding: utf-8 -*-
#
# Author: Alberto Planas <aplanas@suse.com>
#
# Copyright 2019 SUSE LLC.
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.

import unittest
from unittest.mock import patch, MagicMock

from salt.exceptions import SaltInvocationError, CommandExecutionError

from modules import images


class ImagesTestCase(unittest.TestCase):
    def test__checksum_url(self):
        """Test images._checksum_url function"""
        self.assertEqual(
            images._checksum_url("http://example.com/image.xz", "md5"),
            "http://example.com/image.md5",
        )
        self.assertEqual(
            images._checksum_url("http://example.com/image.ext4", "md5"),
            "http://example.com/image.ext4.md5",
        )

    def test__curl_cmd(self):
        """Test images._curl_cmd function"""
        self.assertEqual(
            images._curl_cmd("http://example.com/image.xz"),
            ["curl", "http://example.com/image.xz"],
        )
        self.assertEqual(
            images._curl_cmd("http://example.com/image.xz", s=None),
            ["curl", "-s", "http://example.com/image.xz"],
        )
        self.assertEqual(
            images._curl_cmd("http://example.com/image.xz", s="a"),
            ["curl", "-s", "a", "http://example.com/image.xz"],
        )
        self.assertEqual(
            images._curl_cmd("http://example.com/image.xz", _long=None),
            ["curl", "--_long", "http://example.com/image.xz"],
        )
        self.assertEqual(
            images._curl_cmd("http://example.com/image.xz", _long="a"),
            ["curl", "--_long", "a", "http://example.com/image.xz"],
        )

    def test__fetch_file(self):
        """Test images._fetch_file function"""
        salt_mock = {
            "cmd.run_stdout": MagicMock(return_value="stdout"),
        }

        with patch.dict(images.__salt__, salt_mock):
            self.assertEqual(images._fetch_file("http://url"), "stdout")
            salt_mock["cmd.run_stdout"].assert_called_with(
                ["curl", "--silent", "--location", "http://url"]
            )

        with patch.dict(images.__salt__, salt_mock):
            self.assertEqual(images._fetch_file("http://url", s="a"), "stdout")
            salt_mock["cmd.run_stdout"].assert_called_with(
                ["curl", "--silent", "--location", "-s", "a", "http://url"]
            )

    def test__find_filesystem(self):
        """Test images._find_filesystem function"""
        salt_mock = {
            "cmd.run_stdout": MagicMock(return_value="ext4"),
        }

        with patch.dict(images.__salt__, salt_mock):
            self.assertEqual(images._find_filesystem("/dev/sda1"), "ext4")
            salt_mock["cmd.run_stdout"].assert_called_with(
                ["lsblk", "--noheadings", "--output", "FSTYPE", "/dev/sda1"]
            )

    def test_fetch_checksum(self):
        """Test images.fetch_checksum function"""
        salt_mock = {
            "cmd.run_stdout": MagicMock(return_value="mychecksum -"),
        }

        with patch.dict(images.__salt__, salt_mock):
            self.assertEqual(
                images.fetch_checksum("http://url/image.xz", checksum_type="md5"),
                "mychecksum",
            )
            salt_mock["cmd.run_stdout"].assert_called_with(
                ["curl", "--silent", "--location", "http://url/image.md5"]
            )

        with patch.dict(images.__salt__, salt_mock):
            self.assertEqual(
                images.fetch_checksum("http://url/image.ext4", checksum_type="md5"),
                "mychecksum",
            )
            salt_mock["cmd.run_stdout"].assert_called_with(
                ["curl", "--silent", "--location", "http://url/image.ext4.md5"]
            )

        with patch.dict(images.__salt__, salt_mock):
            self.assertEqual(
                images.fetch_checksum(
                    "http://url/image.xz", checksum_type="sha1", s="a"
                ),
                "mychecksum",
            )
            salt_mock["cmd.run_stdout"].assert_called_with(
                ["curl", "--silent", "--location", "-s", "a", "http://url/image.sha1"]
            )

    def test_dump_invalid_url(self):
        """Test images.dump function with an invalid URL"""
        with self.assertRaises(SaltInvocationError):
            images.dump("random://example.org", "/dev/sda1")

    def test_dump_invalid_checksum_type(self):
        """Test images.dump function with an invalid checksum type"""
        with self.assertRaises(SaltInvocationError):
            images.dump("http://example.org/image.xz", "/dev/sda1", checksum_type="crc")

    def test_dump_missing_checksum_type(self):
        """Test images.dump function with a missing checksum type"""
        with self.assertRaises(SaltInvocationError):
            images.dump(
                "http://example.org/image.xz", "/dev/sda1", checksum="mychecksum"
            )

    def test_dump_download_fail(self):
        """Test images.dump function when download fails"""
        salt_mock = {
            "cmd.run_all": MagicMock(return_value={"retcode": 1, "stderr": "error"}),
        }

        with patch.dict(images.__salt__, salt_mock):
            with self.assertRaises(CommandExecutionError):
                images.dump("http://example.org/image.ext4", "/dev/sda1")
            salt_mock["cmd.run_all"].assert_called_with(
                "set -eo pipefail ; curl --fail --location --silent "
                "http://example.org/image.ext4 | tee /dev/sda1 "
                "| md5sum",
                python_shell=True,
            )

    def test_dump_download_fail_gz(self):
        """Test images.dump function when download fails (gz)"""
        salt_mock = {
            "cmd.run_all": MagicMock(return_value={"retcode": 1, "stderr": "error"}),
        }

        with patch.dict(images.__salt__, salt_mock):
            with self.assertRaises(CommandExecutionError):
                images.dump("http://example.org/image.gz", "/dev/sda1")
            salt_mock["cmd.run_all"].assert_called_with(
                "set -eo pipefail ; curl --fail --location --silent "
                "http://example.org/image.gz | gunzip | tee /dev/sda1 "
                "| md5sum",
                python_shell=True,
            )

    def test_dump_download_fail_bz2(self):
        """Test images.dump function when download fails (bz2)"""
        salt_mock = {
            "cmd.run_all": MagicMock(return_value={"retcode": 1, "stderr": "error"}),
        }

        with patch.dict(images.__salt__, salt_mock):
            with self.assertRaises(CommandExecutionError):
                images.dump("http://example.org/image.bz2", "/dev/sda1")
            salt_mock["cmd.run_all"].assert_called_with(
                "set -eo pipefail ; curl --fail --location --silent "
                "http://example.org/image.bz2 | bzip2 -d | tee /dev/sda1 "
                "| md5sum",
                python_shell=True,
            )

    def test_dump_download_fail_xz(self):
        """Test images.dump function when download fails (xz)"""
        salt_mock = {
            "cmd.run_all": MagicMock(return_value={"retcode": 1, "stderr": "error"}),
        }

        with patch.dict(images.__salt__, salt_mock):
            with self.assertRaises(CommandExecutionError):
                images.dump("http://example.org/image.xz", "/dev/sda1")
            salt_mock["cmd.run_all"].assert_called_with(
                "set -eo pipefail ; curl --fail --location --silent "
                "http://example.org/image.xz | xz -d | tee /dev/sda1 "
                "| md5sum",
                python_shell=True,
            )

    def test_dump_download_checksum_fail(self):
        """Test images.dump function when checksum fails"""
        salt_mock = {
            "cmd.run_all": MagicMock(
                return_value={"retcode": 0, "stdout": "badchecksum"}
            ),
        }

        with patch.dict(images.__salt__, salt_mock):
            with self.assertRaises(CommandExecutionError):
                images.dump(
                    "http://example.org/image.ext4",
                    "/dev/sda1",
                    checksum_type="md5",
                    checksum="checksum",
                )

    def test_dump_download_checksum_fail_fetch(self):
        """Test images.dump function when checksum fails"""
        salt_mock = {
            "cmd.run_stdout": MagicMock(return_value="checksum -"),
            "cmd.run_all": MagicMock(
                return_value={"retcode": 0, "stdout": "badchecksum"}
            ),
        }

        with patch.dict(images.__salt__, salt_mock):
            with self.assertRaises(CommandExecutionError):
                images.dump(
                    "http://example.org/image.ext4", "/dev/sda1", checksum_type="md5"
                )

    def test_dump_resize_fail_extx(self):
        """Test images.dump function when resize fails (extx)"""
        salt_mock = {
            "cmd.run_stdout": MagicMock(return_value="ext4"),
            "cmd.run_all": MagicMock(
                side_effect=[
                    {"retcode": 0, "stdout": "checksum"},
                    {"retcode": 1, "stderr": "error"},
                ]
            ),
        }

        with patch.dict(images.__salt__, salt_mock):
            with self.assertRaises(CommandExecutionError):
                images.dump(
                    "http://example.org/image.ext4",
                    "/dev/sda1",
                    checksum_type="md5",
                    checksum="checksum",
                )
            salt_mock["cmd.run_all"].assert_called_with(
                "e2fsck -f -y /dev/sda1; resize2fs /dev/sda1", python_shell=True
            )

    def test_dump_resize_fail_btrfs(self):
        """Test images.dump function when resize fails (btrfs)"""
        salt_mock = {
            "cmd.run_stdout": MagicMock(return_value="btrfs"),
            "cmd.run_all": MagicMock(
                side_effect=[
                    {"retcode": 0, "stdout": "checksum"},
                    {"retcode": 1, "stderr": "error"},
                ]
            ),
        }

        with patch.dict(images.__salt__, salt_mock):
            with self.assertRaises(CommandExecutionError):
                images.dump(
                    "http://example.org/image.btrfs",
                    "/dev/sda1",
                    checksum_type="md5",
                    checksum="checksum",
                )
            salt_mock["cmd.run_all"].assert_called_with(
                "mount /dev/sda1 /mnt; btrfs filesystem resize max /mnt; "
                "umount /mnt",
                python_shell=True,
            )

    def test_dump_resize_fail_xfs(self):
        """Test images.dump function when resize fails (xfs)"""
        salt_mock = {
            "cmd.run_stdout": MagicMock(return_value="xfs"),
            "cmd.run_all": MagicMock(
                side_effect=[
                    {"retcode": 0, "stdout": "checksum"},
                    {"retcode": 1, "stderr": "error"},
                ]
            ),
        }

        with patch.dict(images.__salt__, salt_mock):
            with self.assertRaises(CommandExecutionError):
                images.dump(
                    "http://example.org/image.xfs",
                    "/dev/sda1",
                    checksum_type="md5",
                    checksum="checksum",
                )
            salt_mock["cmd.run_all"].assert_called_with(
                "mount /dev/sda1 /mnt; xfs_growfs /mnt; umount /mnt", python_shell=True
            )

    def test_dump_resize(self):
        """Test images.dump function"""
        salt_mock = {
            "cmd.run_stdout": MagicMock(return_value="ext4"),
            "cmd.run_all": MagicMock(
                side_effect=[{"retcode": 0, "stdout": "checksum"}, {"retcode": 0}]
            ),
            "cmd.run": MagicMock(return_value=""),
        }

        with patch.dict(images.__salt__, salt_mock):
            self.assertEqual(
                images.dump(
                    "http://example.org/image.ext4",
                    "/dev/sda1",
                    checksum_type="md5",
                    checksum="checksum",
                ),
                "checksum",
            )
            salt_mock["cmd.run"].assert_called_with("sync")
07070100000089000081A40000000000000000000000016130D1CF00005276000000000000000000000000000000000000003300000000yomi-0.0.1+git.1630589391.4557cfd/tests/test_lp.py# -*- coding: utf-8 -*-
#
# Author: Alberto Planas <aplanas@suse.com>
#
# Copyright 2019 SUSE LLC.
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.

import unittest

from utils import lp


class ModelTestCase(unittest.TestCase):
    def test_add_constraint_fails(self):
        """Test Model.add_constraint asserts."""
        model = lp.Model(["x1", "x2"])
        self.assertRaises(AssertionError, model.add_constraint, [1], lp.EQ, 1)
        self.assertRaises(AssertionError, model.add_constraint, [1, 2, 3], lp.EQ, 1)
        self.assertRaises(AssertionError, model.add_constraint, [1, 2], None, 1)

    def test_add_constraint(self):
        """Test Model.add_constraint success."""
        model = lp.Model(["x1", "x2"])
        model.add_constraint([1, 2], lp.EQ, 1)
        self.assertTrue(([1, 2], lp.EQ, 1) in model._constraints)

    def test_add_cost_function_fails(self):
        """Test Model.add_cost_function asserts."""
        model = lp.Model(["x1", "x2"])
        self.assertRaises(AssertionError, model.add_cost_function, None, [1, 2], 1)
        self.assertRaises(AssertionError, model.add_cost_function, lp.MINIMIZE, [1], 1)
        self.assertRaises(
            AssertionError, model.add_cost_function, lp.MINIMIZE, [1, 2, 3], 1
        )

    def test_add_cost_function(self):
        """Test Model.add_cost_function success."""
        model = lp.Model(["x1", "x2"])
        model.add_cost_function(lp.MINIMIZE, [1, 2], 1)
        self.assertEqual((lp.MINIMIZE, [1, 2], 1), model._cost_function)

    def test__coeff(self):
        """Test model._coeff method."""
        model = lp.Model(["x1", "x2"])
        self.assertEqual(model._coeff({"x1": 1}), [1, 0])
        self.assertEqual(model._coeff({"x2": 1}), [0, 1])
        self.assertEqual(model._coeff({"x1": 1, "x2": 2}), [1, 2])

    def test_add_constraint_named(self):
        """Test Model.add_constraint_named success."""
        model = lp.Model(["x1", "x2"])
        model.add_constraint_named({"x1": 1, "x2": 2}, lp.EQ, 1)
        self.assertTrue(([1, 2], lp.EQ, 1) in model._constraints)

    def test_add_cost_function_named(self):
        """Test Model.add_cost_function success."""
        model = lp.Model(["x1", "x2"])
        model.add_cost_function_named(lp.MINIMIZE, {"x1": 1, "x2": 2}, 1)
        self.assertEqual((lp.MINIMIZE, [1, 2], 1), model._cost_function)

    def test_simplex(self):
        """Test Model.simplex method."""
        model = lp.Model(["x1", "x2", "x3", "x4", "x5"])
        model.add_constraint([-6, 0, 1, -2, 2], lp.EQ, 6)
        model.add_constraint([-3, 1, 0, 6, 3], lp.EQ, 15)
        model.add_cost_function(lp.MINIMIZE, [5, 0, 0, 3, -2], -21)
        self.assertEqual(
            model.simplex(), {"x1": 1.0, "x2": 0.0, "x3": 0.0, "x4": 0.0, "x5": 6.0}
        )

    def test__convert_to_standard_form_standard(self):
        """Test Model._convert_to_standard_form when in standard form."""
        model = lp.Model(["x1", "x2", "x3"])
        model.add_constraint([30, 100, 85], lp.EQ, 2500)
        model.add_constraint([6, 2, 3], lp.EQ, 90)
        model.add_cost_function(lp.MINIMIZE, [3, 2, 4], 0)
        model._convert_to_standard_form()
        self.assertEqual(model._slack_variables, [])
        self.assertEqual(
            model._standard_constraints, [([30, 100, 85], 2500), ([6, 2, 3], 90)]
        )
        self.assertEqual(model._standard_cost_function, ([3, 2, 4], 0))

    def test__convert_to_standard_form_lte(self):
        """Test Model._convert_to_standard_form when constraint is LTE."""
        model = lp.Model(["x1", "x2", "x3"])
        model.add_constraint([30, 100, 85], lp.LTE, 2500)
        model.add_constraint([6, 2, 3], lp.EQ, 90)
        model.add_cost_function(lp.MINIMIZE, [3, 2, 4], 0)
        model._convert_to_standard_form()
        self.assertEqual(model._slack_variables, [3])
        self.assertEqual(
            model._standard_constraints, [([30, 100, 85, 1], 2500), ([6, 2, 3, 0], 90)]
        )
        self.assertEqual(model._standard_cost_function, ([3, 2, 4, 0], 0))

    def test__convert_to_standard_form_gte(self):
        """Test Model._convert_to_standard_form when constraint is GTE."""
        model = lp.Model(["x1", "x2", "x3"])
        model.add_constraint([30, 100, 85], lp.GTE, 2500)
        model.add_constraint([6, 2, 3], lp.EQ, 90)
        model.add_cost_function(lp.MINIMIZE, [3, 2, 4], 0)
        model._convert_to_standard_form()
        self.assertEqual(model._slack_variables, [3])
        self.assertEqual(
            model._standard_constraints, [([30, 100, 85, -1], 2500), ([6, 2, 3, 0], 90)]
        )
        self.assertEqual(model._standard_cost_function, ([3, 2, 4, 0], 0))

    def test__convert_to_standard_form_lte_gte(self):
        """Test Model._convert_to_standard_form for LTE/GTE constraints."""
        model = lp.Model(["x1", "x2", "x3"])
        model.add_constraint([30, 100, 85], lp.LTE, 2500)
        model.add_constraint([6, 2, 3], lp.GTE, 90)
        model.add_cost_function(lp.MINIMIZE, [3, 2, 4], 0)
        model._convert_to_standard_form()
        self.assertEqual(model._slack_variables, [3, 4])
        self.assertEqual(
            model._standard_constraints,
            [([30, 100, 85, 1, 0], 2500), ([6, 2, 3, 0, -1], 90)],
        )
        self.assertEqual(model._standard_cost_function, ([3, 2, 4, 0, 0], 0))

    def test__convert_to_standard_form_maximize(self):
        """Test Model._convert_to_standard_form when maximizing."""
        model = lp.Model(["x1", "x2", "x3"])
        model.add_constraint([30, 100, 85], lp.EQ, 2500)
        model.add_constraint([6, 2, 3], lp.EQ, 90)
        model.add_cost_function(lp.MAXIMIZE, [3, 2, 4], 0)
        model._convert_to_standard_form()
        self.assertEqual(model._slack_variables, [])
        self.assertEqual(
            model._standard_constraints, [([30, 100, 85], 2500), ([6, 2, 3], 90)]
        )
        self.assertEqual(model._standard_cost_function, ([-3, -2, -4], 0))

    def test__convert_to_canonical_form(self):
        """Test Model._convert_to_canonical_form when in standard form."""
        model = lp.Model(["x1", "x2", "x3", "x4"])
        model.add_constraint([1, -2, -3, -2], lp.EQ, 3)
        model.add_constraint([1, -1, 2, 1], lp.EQ, 11)
        model.add_cost_function(lp.MINIMIZE, [2, -3, 1, 1], 0)
        model._convert_to_standard_form()
        model._convert_to_canonical_form()
        self.assertEqual(
            model._canonical_constraints,
            [([1, -2, -3, -2, 1, 0], 3), ([1, -1, 2, 1, 0, 1], 11)],
        )
        self.assertEqual(model._canonical_cost_function, ([2, -3, 1, 1, 0, 0], 0))
        self.assertEqual(
            model._canonical_artificial_function, ([-2, 3, 1, 1, 0, 0], -14)
        )

    def test__convert_to_canonical_form_neg_free_term(self):
        """Test Model._convert_to_standard_form when in standard form."""
        model = lp.Model(["x1", "x2", "x3"])
        model.add_constraint([30, 100, 85], lp.EQ, -2500)
        model.add_constraint([6, 2, 3], lp.EQ, 90)
        model.add_cost_function(lp.MINIMIZE, [3, 2, 4], 0)
        model._convert_to_standard_form()
        model._convert_to_canonical_form()
        self.assertEqual(
            model._canonical_constraints,
            [([-30, -100, -85, 1, 0], 2500), ([6, 2, 3, 0, 1], 90)],
        )
        self.assertEqual(model._canonical_cost_function, ([3, 2, 4, 0, 0], 0))
        self.assertEqual(
            model._canonical_artificial_function, ([24, 98, 82, 0, 0], -2590)
        )

    def test__convert_to_canonical_form_artificial(self):
        """Test Model._convert_to_canonical_form when not in standard form."""
        model = lp.Model(["x1", "x2", "x3", "x4"])
        model.add_constraint([1, -2, -3, -2], lp.LTE, 3)
        model.add_constraint([1, -1, 2, 1], lp.GTE, 11)
        model.add_cost_function(lp.MAXIMIZE, [2, -3, 1, 1], 10)
        model._convert_to_standard_form()
        model._convert_to_canonical_form()
        self.assertEqual(
            model._canonical_constraints,
            [([1, -2, -3, -2, 1, 0, 1, 0], 3), ([1, -1, 2, 1, 0, -1, 0, 1], 11)],
        )
        self.assertEqual(
            model._canonical_cost_function, ([-2, 3, -1, -1, 0, 0, 0, 0], -10)
        )
        self.assertEqual(
            model._canonical_artificial_function, ([-2, 3, 1, 1, -1, 1, 0, 0], -14)
        )

    def test__build_tableau_canonical_form(self):
        """Test Model._build_tableau_canonical_form method."""
        model = lp.Model(["x1", "x2", "x3", "x4"])
        model.add_constraint([1, -2, -3, -2], lp.EQ, 3)
        model.add_constraint([1, -1, 2, 1], lp.EQ, 11)
        model.add_cost_function(lp.MINIMIZE, [2, -3, 1, 1], 0)
        model._convert_to_standard_form()
        model._convert_to_canonical_form()
        tableau = model._build_tableau_canonical_form()
        self.assertEqual(tableau._basic_variables, [4, 5])
        self.assertEqual(
            tableau._tableau,
            [
                [1, -2, -3, -2, 1, 0, 3],
                [1, -1, 2, 1, 0, 1, 11],
                [2, -3, 1, 1, 0, 0, 0],
                [-2, 3, 1, 1, 0, 0, -14],
            ],
        )

    def test___str__(self):
        """Test Model.__str__ method."""
        model = lp.Model(["x1", "x2", "x3", "x4", "x5"])
        model.add_constraint([-6, 0, 1, -2, 2], lp.LTE, 6)
        model.add_constraint([-3, 1, 0, 6, 3], lp.EQ, 15)
        model.add_cost_function(lp.MINIMIZE, [5, 0, 0, 3, -2], -21)
        self.assertEqual(
            model.__str__(),
            """Minimize:
  5 x1 + 0 x2 + 0 x3 + 3 x4 - 2 x5 - 21

Subject to:
  -6 x1 + 0 x2 + 1 x3 - 2 x4 + 2 x5 <= 6
  -3 x1 + 1 x2 + 0 x3 + 6 x4 + 3 x5 = 15
  x1, x2, x3, x4, x5 >= 0""",
        )


class TableauTestCase(unittest.TestCase):
    def test_add_constraint_fails(self):
        """Test Tableau.add_constraint asserts."""
        tableau = lp.Tableau(3, 2)
        self.assertRaises(AssertionError, tableau.add_constraint, [1], 0)
        self.assertRaises(AssertionError, tableau.add_constraint, [1, 2, 3, 4, 5], 0)
        tableau.add_constraint([1, 2, 3, 4], 0)
        self.assertRaises(AssertionError, tableau.add_constraint, [1, 2, 3, 4], 0)

    def test_add_constraint(self):
        """Test Tableau.add_constraint success."""
        tableau = lp.Tableau(3, 2)
        tableau.add_constraint([1, 2, 3, 4], 0)
        self.assertEqual(tableau._basic_variables, [0])
        self.assertEqual(tableau._tableau, [[1, 2, 3, 4]])

    def test_add_cost_function_fails(self):
        """Test Tableau.add_cost_function asserts."""
        tableau = lp.Tableau(3, 2)
        self.assertRaises(AssertionError, tableau.add_cost_function, [1])
        self.assertRaises(AssertionError, tableau.add_cost_function, [1, 2, 3, 4])

    def test_add_cost_function(self):
        """Test Tableau.add_cost_function success."""
        tableau = lp.Tableau(3, 2)
        tableau.add_constraint([1, 2, 3, 4], 0)
        tableau.add_constraint([0, 1, 2, 3], 1)
        tableau.add_cost_function([0, 0, 1, 2])
        self.assertEqual(tableau._tableau, [[1, 2, 3, 4], [0, 1, 2, 3], [0, 0, 1, 2]])

    def test_add_artificial_function_fails(self):
        """Test Tableau.add_artificial_function asserts."""
        tableau = lp.Tableau(3, 2)
        self.assertRaises(AssertionError, tableau.add_artificial_function, [1])
        self.assertRaises(AssertionError, tableau.add_artificial_function, [1, 2, 3, 4])

    def test_add_artificial_function(self):
        """Test Tableau.add_artificial_function success."""
        tableau = lp.Tableau(3, 2)
        tableau.add_constraint([1, 2, 3, 4], 0)
        tableau.add_constraint([0, 1, 2, 3], 1)
        tableau.add_cost_function([0, 0, 1, 2])
        tableau.add_artificial_function([1, 3, 5, 7])
        self.assertTrue(tableau._artificial)
        self.assertEqual(
            tableau._tableau, [[1, 2, 3, 4], [0, 1, 2, 3], [0, 0, 1, 2], [1, 3, 5, 7]]
        )

    def test_constraints(self):
        """Test Tableau.constraints method."""
        tableau = lp.Tableau(3, 2)
        tableau.add_constraint([1, 2, 3, 4], 0)
        tableau.add_constraint([0, 1, 2, 3], 1)
        tableau.add_cost_function([0, 0, 1, 2])
        tableau.add_artificial_function([1, 3, 5, 7])
        self.assertEqual(tableau.constraints(), [[1, 2, 3, 4], [0, 1, 2, 3]])

    def test_cost_function(self):
        """Test Tableau.cost_function for non artificial models."""
        tableau = lp.Tableau(3, 2)
        tableau.add_constraint([1, 2, 3, 4], 0)
        tableau.add_constraint([0, 1, 2, 3], 1)
        tableau.add_cost_function([0, 0, 1, 2])
        self.assertEqual(tableau.cost_function(), [0, 0, 1, 2])

    def test_cost_function_artificial(self):
        """Test Tableau.cost_function for artificial models."""
        tableau = lp.Tableau(3, 2)
        tableau.add_constraint([1, 2, 3, 4], 0)
        tableau.add_constraint([0, 1, 2, 3], 1)
        tableau.add_cost_function([0, 0, 1, 2])
        tableau.add_artificial_function([1, 3, 5, 7])
        self.assertEqual(tableau.cost_function(), [1, 3, 5, 7])

    def test_drop_artificial_not_minimal(self):
        """Test Tableau.drop_artificial fails when not minimal."""
        tableau = lp.Tableau(4, 2)
        tableau.add_constraint([1, 2, 1, 0, 5], 0)
        tableau.add_constraint([0, 1, 0, 1, 5], 1)
        tableau.add_cost_function([2, 3, 0, 0, 5])
        tableau.add_artificial_function([-1, -3, 0, 0, -15])
        self.assertRaises(AssertionError, tableau.drop_artificial)

    def test_drop_artificial_artificial_variable(self):
        """Test Tableau.drop_artificial fails when artificial variable."""
        tableau = lp.Tableau(4, 2)
        tableau.add_constraint([1, 2, 1, 0, 5], 2)
        tableau.add_constraint([0, 1, 0, 1, 5], 3)
        tableau.add_cost_function([2, 3, 0, 0, 5])
        tableau.add_artificial_function([1, 3, 0, 0, -15])
        self.assertRaises(AssertionError, tableau.drop_artificial)

    def test_drop_artificial(self):
        """Test Tableau.drop_artificial method."""
        tableau = lp.Tableau(4, 2)
        tableau.add_constraint([1, 2, 1, 0, 5], 0)
        tableau.add_constraint([0, 1, 0, 1, 5], 1)
        tableau.add_cost_function([2, 3, 0, 0, 5])
        tableau.add_artificial_function([1, 3, 0, 0, -15])
        tableau.drop_artificial()
        self.assertFalse(tableau._artificial)
        self.assertEqual(tableau._tableau, [[1, 2, 5], [0, 1, 5], [2, 3, 5]])

    def test_simplex(self):
        """Test Tableau.simplex method."""
        tableau = lp.Tableau(5, 2)
        tableau.add_constraint([-6, 0, 1, -2, 2, 6], 2)
        tableau.add_constraint([-3, 1, 0, 6, 3, 15], 1)
        tableau.add_cost_function([5, 0, 0, 3, -2, -21])
        tableau.simplex()
        self.assertEqual(tableau._basic_variables, [4, 0])
        self.assertEqual(
            tableau._tableau,
            [
                [0.0, 1 / 2, -1 / 4, 7 / 2, 1.0, 6.0],
                [1.0, 1 / 6, -1 / 4, 3 / 2, 0.0, 1.0],
                [0.0, 1 / 6, 3 / 4, 5 / 2, 0.0, -14.0],
            ],
        )

    def test_is_canonical_not_canonical(self):
        """Test Tableau.is_canonical when not canonical."""
        tableau = lp.Tableau(3, 2)
        tableau.add_constraint([1, 2, 0, 5], 1)
        tableau.add_constraint([0, 1, 0, 5], 2)
        tableau.add_cost_function([2, 3, 0, 5])
        self.assertFalse(tableau.is_canonical())

    def test_is_canonical_almost_canonical(self):
        """Test Tableau.is_canonical when no canonical."""
        tableau = lp.Tableau(2, 2)
        tableau.add_constraint([1, 2, 5], 0)
        tableau.add_constraint([0, 1, 5], 1)
        tableau.add_cost_function([2, 3, 5])
        self.assertFalse(tableau.is_canonical())

    def test_is_canonical(self):
        """Test Tableau.is_canonical when canonical."""
        tableau = lp.Tableau(2, 2)
        tableau.add_constraint([1, 0, 5], 0)
        tableau.add_constraint([0, 1, 5], 1)
        tableau.add_cost_function([0, 0, 5])
        self.assertTrue(tableau.is_canonical())

    def test_is_minimum_not_minimum(self):
        """Test Tableau.is_minimum method."""
        tableau = lp.Tableau(3, 2)
        tableau.add_constraint([1, 2, 3, 4], 0)
        tableau.add_constraint([0, 1, 2, 3], 1)
        tableau.add_cost_function([0, 0, -1, 2])
        self.assertFalse(tableau.is_minimum())

    def test_is_minimum_artificial_not_minimum(self):
        """Test Tableau.is_minimum method."""
        tableau = lp.Tableau(3, 2)
        tableau.add_constraint([1, 2, 3, 4], 0)
        tableau.add_constraint([0, 1, 2, 3], 1)
        tableau.add_cost_function([0, 0, 1, 2])
        tableau.add_artificial_function([2, -3, 0, 0])
        self.assertFalse(tableau.is_minimum())

    def test_is_minimum(self):
        """Test Tableau.is_minimum method."""
        tableau = lp.Tableau(3, 2)
        tableau.add_constraint([1, 2, 3, 4], 0)
        tableau.add_constraint([0, 1, 2, 3], 1)
        tableau.add_cost_function([0, 0, 1, 2])
        self.assertTrue(tableau.is_minimum())

    def test_is_minimum_artificial(self):
        """Test Tableau.is_minimum method."""
        tableau = lp.Tableau(3, 2)
        tableau.add_constraint([1, 2, 3, 4], 0)
        tableau.add_constraint([0, 1, 2, 3], 1)
        tableau.add_cost_function([0, 0, -1, 2])
        tableau.add_artificial_function([2, 3, 0, 0])
        self.assertTrue(tableau.is_minimum())

    def test_is_basic_feasible_solution_fails(self):
        """Test Tableau.is_basic_feasible_solution failures."""
        tableau = lp.Tableau(4, 2)
        tableau.add_constraint([1, 1, 2, 1, 6], 0)
        tableau.add_constraint([0, 3, 1, 8, 3], 1)
        tableau.add_cost_function([0, 0, 0, 0, 0])
        self.assertRaises(AssertionError, tableau.is_basic_feasible_solution)

    def test_is_basic_feasible_solution_non_existent(self):
        """Test Tableau.is_basic_feasible_solution method."""
        tableau = lp.Tableau(4, 2)
        tableau.add_constraint([1, 0, 1.667, 1.667, 5], 0)
        tableau.add_constraint([0, 1, 0.333, 2.667, -1], 1)
        tableau.add_cost_function([0, 0, 0, 0, 0])
        self.assertFalse(tableau.is_basic_feasible_solution())

    def test_is_basic_feasible_solution(self):
        """Test Tableau.is_basic_feasible_solution method."""
        tableau = lp.Tableau(4, 2)
        tableau.add_constraint([1, 0, 1.667, 1.667, 5], 0)
        tableau.add_constraint([0, 1, 0.333, 2.667, 1], 1)
        tableau.add_cost_function([0, 0, 0, 0, 0])
        self.assertTrue(tableau.is_basic_feasible_solution())

    def test_is_bound(self):
        """Test Tableau.is_bound method."""
        pass

    def test__get_pivoting_column(self):
        """Test Tableau._get_pivoting_column method."""
        tableau = lp.Tableau(5, 2)
        tableau.add_constraint([-6, 0, 1, -2, 2, 6], 2)
        tableau.add_constraint([-3, 1, 0, 6, 3, 15], 1)
        tableau.add_cost_function([5, 0, 0, 3, -2, -21])
        self.assertEqual(tableau._get_pivoting_column(), 4)

    def test__get_pivoting_row(self):
        """Test Tableau._get_pivoting_row method."""
        tableau = lp.Tableau(5, 2)
        tableau.add_constraint([-6, 0, 1, -2, 2, 6], 2)
        tableau.add_constraint([-3, 1, 0, 6, 3, 15], 1)
        tableau.add_cost_function([5, 0, 0, 3, -2, -21])
        self.assertEqual(tableau._get_pivoting_row(4), 0)

    def test__pivote(self):
        """Test Tableau._pivote method."""
        tableau = lp.Tableau(3, 3)
        tableau.add_constraint([1, 4, 2, 6], 0)
        tableau.add_constraint([3, 14, 8, 16], 1)
        tableau.add_constraint([4, 21, 10, 28], 2)

        # Pivote by x1 in the first equation
        tableau._pivote(0, 0)
        self.assertEqual(
            tableau._tableau,
            [[1.0, 4.0, 2.0, 6.0], [0.0, 2.0, 2.0, -2.0], [0.0, 5.0, 2.0, 4.0]],
        )
        # Pivote by x2 in the second equation
        tableau._pivote(1, 1)
        self.assertEqual(
            tableau._tableau,
            [[1.0, 0.0, -2.0, 10.0], [0.0, 1.0, 1.0, -1.0], [0.0, 0.0, -3.0, 9.0]],
        )
        # Pivote by x3 in the third equation
        tableau._pivote(2, 2)
        self.assertEqual(
            tableau._tableau,
            [[1.0, 0.0, 0.0, 4.0], [0.0, 1.0, 0.0, 2.0], [0.0, 0.0, 1.0, -3.0]],
        )


if __name__ == "__main__":
    unittest.main()
0707010000008A000081A40000000000000000000000016130D1CF000038D0000000000000000000000000000000000000003C00000000yomi-0.0.1+git.1630589391.4557cfd/tests/test_partitioned.py# -*- coding: utf-8 -*-
#
# Author: Alberto Planas <aplanas@suse.com>
#
# Copyright 2019 SUSE LLC.
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.

import unittest
from unittest.mock import patch

from states import partitioned


class PartitionedTestCase(unittest.TestCase):
    @patch("states.partitioned.__salt__")
    def test_check_label(self, __salt__):
        fdisk_output = """Error: /dev/sda: unrecognised disk label
BYT;
/dev/sda:25.8GB:scsi:512:512:unknown:ATA QEMU HARDDISK:;

Error: /dev/sdb: unrecognised disk label
BYT;
/dev/sdb:25.8GB:scsi:512:512:unknown:ATA QEMU HARDDISK:;

"""
        __salt__.__getitem__.return_value = lambda _: fdisk_output
        self.assertFalse(partitioned._check_label("/dev/sda", "msdos"))
        self.assertFalse(partitioned._check_label("/dev/sda", "dos"))
        self.assertFalse(partitioned._check_label("/dev/sda", "gpt"))

        fdisk_output = """BYT;
/dev/sda:25.8GB:scsi:512:512:msdos:ATA QEMU HARDDISK:;

Error: /dev/sdb: unrecognised disk label
BYT;
/dev/sdb:25.8GB:scsi:512:512:unknown:ATA QEMU HARDDISK:;
"""
        __salt__.__getitem__.return_value = lambda _: fdisk_output
        self.assertTrue(partitioned._check_label("/dev/sda", "msdos"))
        self.assertTrue(partitioned._check_label("/dev/sda", "dos"))
        self.assertFalse(partitioned._check_label("/dev/sda", "gpt"))

        fdisk_output = """BYT;
/dev/sda:500GB:scsi:512:512:gpt:ATA ST3500413AS:pmbr_boot;
1:1049kB:9437kB:8389kB:::bios_grub;
2:9437kB:498GB:498GB:btrfs::legacy_boot;
3:498GB:500GB:2147MB:linux-swap(v1)::swap;

BYT;
/dev/sdb:2000GB:scsi:512:4096:msdos:ATA ST2000DM001-1CH1:;
1:1049kB:2000GB:2000GB:ext4::type=83;

"""
        __salt__.__getitem__.return_value = lambda _: fdisk_output
        self.assertFalse(partitioned._check_label("/dev/sda", "msdos"))
        self.assertFalse(partitioned._check_label("/dev/sda", "dos"))
        self.assertTrue(partitioned._check_label("/dev/sda", "gpt"))

    @patch("states.partitioned.__opts__")
    @patch("states.partitioned.__salt__")
    def test_labeled(self, __salt__, __opts__):
        __opts__.__getitem__.return_value = False

        __salt__.__getitem__.return_value = lambda _: "/dev/sda:msdos:"
        self.assertEqual(
            partitioned.labeled("/dev/sda", "msdos"),
            {
                "name": "/dev/sda",
                "result": True,
                "changes": {},
                "comment": ["Label already set to msdos"],
            },
        )

        __salt__.__getitem__.side_effect = (
            lambda _: "",
            lambda _a, _b: True,
            lambda _: "/dev/sda:msdos:",
        )
        self.assertEqual(
            partitioned.labeled("/dev/sda", "msdos"),
            {
                "name": "/dev/sda",
                "result": True,
                "changes": {"label": "Label set to msdos in /dev/sda"},
                "comment": ["Label set to msdos in /dev/sda"],
            },
        )

    @patch("states.partitioned.__salt__")
    def test_get_partition_type(self, __salt__):
        __salt__.__getitem__.return_value = (
            lambda _: """
Model: ATA ST2000DM001-9YN1 (scsi)
Disk /dev/sda: 2000GB
Sector size (logical/physical): 512B/4096B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type     File system     Flags
 1      1049kB  2155MB  2154MB  primary  linux-swap(v1)  type=82
 2      2155MB  45.1GB  43.0GB  primary  btrfs           boot, type=83
 3      45.1GB  2000GB  1955GB  primary  xfs             type=83
        """
        )
        self.assertEqual(
            partitioned._get_partition_type("/dev/sda"),
            {"1": "primary", "2": "primary", "3": "primary"},
        )

        __salt__.__getitem__.return_value = (
            lambda _: """
Model: ATA QEMU HARDDISK (scsi)
Disk /dev/sda: 25.8GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type      File system  Flags
 1      1049kB  11.5MB  10.5MB  extended               type=05
 5      2097kB  5243kB  3146kB  logical                type=83
 3      11.5GB  22.0MB  10.5MB  primary                type=83
        """
        )
        self.assertEqual(
            partitioned._get_partition_type("/dev/sda"),
            {"1": "extended", "5": "logical", "3": "primary"},
        )

    @patch("states.partitioned.__salt__")
    def test_get_cached_partitions(self, __salt__):
        __salt__.__getitem__.side_effect = [
            lambda _: "1 extended",
            lambda _, unit: {"info": None, "partitions": {"1": {}}},
        ]

        self.assertEqual(
            partitioned._get_cached_partitions("/dev/sda", "s"),
            {"1": {"type": "extended"}},
        )
        partitioned._invalidate_cached_partitions()

        __salt__.__getitem__.side_effect = [
            lambda _: "",
            lambda _, unit: {"info": None, "partitions": {"1": {}}},
        ]

        self.assertEqual(
            partitioned._get_cached_partitions("/dev/sda", "s"),
            {"1": {"type": "primary"}},
        )

    @patch("states.partitioned._get_cached_partitions")
    def test_check_partition(self, _get_cached_partitions):
        _get_cached_partitions.return_value = {
            "1": {"type": "primary", "size": "10s", "start": "0s", "end": "10s"}
        }
        self.assertTrue(
            partitioned._check_partition("/dev/sda", 1, "primary", "0s", "10s")
        )
        self.assertTrue(
            partitioned._check_partition("/dev/sda", "1", "primary", "0s", "10s")
        )
        self.assertFalse(
            partitioned._check_partition("/dev/sda", "1", "primary", "10s", "20s")
        )
        self.assertEqual(
            partitioned._check_partition("/dev/sda", "2", "primary", "10s", "20s"), None
        )

        _get_cached_partitions.return_value = {
            "1": {"type": "primary", "size": "100kB", "start": "0.5kB", "end": "100kB"}
        }
        self.assertTrue(
            partitioned._check_partition("/dev/sda", "1", "primary", "0kB", "100kB")
        )
        self.assertTrue(
            partitioned._check_partition("/dev/sda", "1", "primary", "1kB", "100kB")
        )
        self.assertFalse(
            partitioned._check_partition("/dev/sda", "1", "primary", "1.5kB", "100kB")
        )

    @patch("states.partitioned._get_cached_partitions")
    def test_get_first_overlapping_partition(self, _get_cached_partitions):
        _get_cached_partitions.return_value = {}
        self.assertEqual(
            partitioned._get_first_overlapping_partition("/dev/sda", "0s"), None
        )

        _get_cached_partitions.return_value = {
            "1": {
                "number": "1",
                "type": "primary",
                "size": "10s",
                "start": "0s",
                "end": "10s",
            }
        }
        self.assertEqual(
            partitioned._get_first_overlapping_partition("/dev/sda", "0s"), "1"
        )

        _get_cached_partitions.return_value = {
            "1": {
                "number": "1",
                "type": "primary",
                "size": "100kB",
                "start": "0.51kB",
                "end": "100kB",
            }
        }
        self.assertEqual(
            partitioned._get_first_overlapping_partition("/dev/sda", "0kB"), "1"
        )

        _get_cached_partitions.return_value = {
            "1": {
                "number": "1",
                "type": "extended",
                "size": "10s",
                "start": "0s",
                "end": "10s",
            },
            "5": {
                "number": "5",
                "type": "logical",
                "size": "4s",
                "start": "1s",
                "end": "5s",
            },
        }
        self.assertEqual(
            partitioned._get_first_overlapping_partition("/dev/sda", "0s"), "1"
        )

        self.assertEqual(
            partitioned._get_first_overlapping_partition("/dev/sda", "1s"), "5"
        )

    @patch("states.partitioned._get_cached_info")
    @patch("states.partitioned._get_cached_partitions")
    def test_get_partition_number_primary(
        self, _get_cached_partitions, _get_cached_info
    ):
        _get_cached_info.return_value = {"partition table": "msdos"}
        _get_cached_partitions.return_value = {}

        partition_data = ("/dev/sda", "primary", "0s", "10s")
        self.assertEqual(partitioned._get_partition_number(*partition_data), "1")

        _get_cached_partitions.return_value = {
            "1": {
                "number": "1",
                "type": "primary",
                "size": "10s",
                "start": "0s",
                "end": "10s",
            }
        }
        self.assertEqual(partitioned._get_partition_number(*partition_data), "1")

        partition_data = ("/dev/sda", "primary", "0s", "10s")
        self.assertEqual(partitioned._get_partition_number(*partition_data), "1")

        _get_cached_partitions.return_value = {
            "1": {"number": "1", "type": "primary", "start": "0s", "end": "10s"},
            "2": {"number": "2", "type": "primary", "start": "11s", "end": "20s"},
            "3": {"number": "3", "type": "primary", "start": "21s", "end": "30s"},
            "4": {"number": "4", "type": "primary", "start": "31s", "end": "40s"},
        }

        partition_data = ("/dev/sda", "primary", "41s", "50s")
        self.assertRaises(
            partitioned.EnumerateException,
            partitioned._get_partition_number,
            *partition_data
        )

        _get_cached_info.return_value = {"partition table": "gpt"}
        partition_data = ("/dev/sda", "primary", "41s", "50s")
        self.assertEqual(partitioned._get_partition_number(*partition_data), "5")

    @patch("states.partitioned._get_cached_info")
    @patch("states.partitioned._get_cached_partitions")
    def test_get_partition_number_extended(
        self, _get_cached_partitions, _get_cached_info
    ):
        _get_cached_info.return_value = {"partition table": "msdos"}
        _get_cached_partitions.return_value = {}
        partition_data = ("/dev/sda", "extended", "0s", "10s")
        self.assertEqual(partitioned._get_partition_number(*partition_data), "1")

        _get_cached_partitions.return_value = {
            "1": {"number": "1", "type": "primary", "start": "0s", "end": "10s"},
        }
        partition_data = ("/dev/sda", "extended", "21s", "30s")
        self.assertEqual(partitioned._get_partition_number(*partition_data), "2")

        _get_cached_partitions.return_value = {
            "1": {"number": "1", "type": "primary", "start": "0s", "end": "10s"},
            "2": {"number": "2", "type": "extended", "start": "11s", "end": "20s"},
        }
        self.assertRaises(
            partitioned.EnumerateException,
            partitioned._get_partition_number,
            *partition_data
        )

        _get_cached_partitions.return_value = {
            "1": {"number": "1", "type": "primary", "start": "0s", "end": "10s"},
            "2": {"number": "2", "type": "primary", "start": "11s", "end": "20s"},
            "3": {"number": "3", "type": "primary", "start": "21s", "end": "30s"},
            "4": {"number": "4", "type": "primary", "start": "31s", "end": "40s"},
        }
        partition_data = ("/dev/sda", "extended", "41s", "50s")
        self.assertRaises(
            partitioned.EnumerateException,
            partitioned._get_partition_number,
            *partition_data
        )

        _get_cached_info.return_value = {"partition table": "gpt"}
        _get_cached_partitions.return_value = {}
        self.assertRaises(
            partitioned.EnumerateException,
            partitioned._get_partition_number,
            *partition_data
        )

    @patch("states.partitioned._get_cached_info")
    @patch("states.partitioned._get_cached_partitions")
    def test_get_partition_number_logial(
        self, _get_cached_partitions, _get_cached_info
    ):
        _get_cached_info.return_value = {"partition table": "msdos"}
        _get_cached_partitions.return_value = {}
        partition_data = ("/dev/sda", "logical", "0s", "10s")
        self.assertRaises(
            partitioned.EnumerateException,
            partitioned._get_partition_number,
            *partition_data
        )

        _get_cached_partitions.return_value = {
            "1": {"number": "1", "type": "primary", "start": "0s", "end": "10s"},
        }
        partition_data = ("/dev/sda", "logical", "12s", "15s")
        self.assertRaises(
            partitioned.EnumerateException,
            partitioned._get_partition_number,
            *partition_data
        )

        _get_cached_partitions.return_value = {
            "1": {"number": "1", "type": "primary", "start": "0s", "end": "10s"},
            "2": {"number": "2", "type": "extended", "start": "11s", "end": "20s"},
        }
        self.assertEqual(partitioned._get_partition_number(*partition_data), "5")

        _get_cached_partitions.return_value = {
            "1": {"number": "1", "type": "primary", "start": "0s", "end": "10s"},
            "2": {"number": "2", "type": "extended", "start": "11s", "end": "20s"},
            "5": {"number": "5", "type": "logical", "start": "12s", "end": "15s"},
        }
        self.assertEqual(partitioned._get_partition_number(*partition_data), "5")

        partition_data = ("/dev/sda", "logical", "16s", "19s")
        self.assertEqual(partitioned._get_partition_number(*partition_data), "6")

    @patch("states.partitioned._get_partition_number")
    @patch("states.partitioned.__salt__")
    def test_mkparted(self, __salt__, _get_partition_number):
        pass


if __name__ == "__main__":
    unittest.main()
0707010000008B000081A40000000000000000000000016130D1CF00006162000000000000000000000000000000000000003800000000yomi-0.0.1+git.1630589391.4557cfd/tests/test_partmod.py# -*- coding: utf-8 -*-
#
# Author: Alberto Planas <aplanas@suse.com>
#
# Copyright 2019 SUSE LLC.
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.

import unittest
from unittest.mock import patch

from salt.exceptions import SaltInvocationError

from disk import ParseException
from modules import partmod
from modules import filters


class PartmodTestCase(unittest.TestCase):
    @patch("modules.partmod.__grains__")
    def test_prepare_partition_data_fails_fs_type(self, __grains__):
        partitions = {
            "devices": {
                "/dev/sda": {
                    "partitions": [{"number": 1, "size": "rest", "type": "error"}],
                },
            },
        }
        __grains__.__getitem__.return_value = False
        with self.assertRaises(SaltInvocationError) as cm:
            partmod.prepare_partition_data(partitions)
        self.assertTrue("type error not recognized" in str(cm.exception))

    @patch("modules.partmod.__grains__")
    @patch("modules.partmod.__salt__")
    def test_prepare_partition_data_fails_units_invalid(self, __salt__, __grains__):
        partitions = {
            "devices": {
                "/dev/sda": {
                    "partitions": [{"number": 1, "size": "1Kilo", "type": "swap"}],
                },
            },
        }
        __grains__.__getitem__.return_value = False
        __salt__.__getitem__.return_value = filters.is_raid
        with self.assertRaises(ParseException) as cm:
            partmod.prepare_partition_data(partitions)
        self.assertTrue("Kilo not recognized" in str(cm.exception))

    @patch("modules.partmod.__grains__")
    @patch("modules.partmod.__salt__")
    def test_prepare_partition_data_fails_units_initial_gap(self, __salt__, __grains__):
        partitions = {
            "config": {"initial_gap": "1024kB"},
            "devices": {
                "/dev/sda": {
                    "partitions": [{"number": 1, "size": "1MB", "type": "swap"}],
                },
            },
        }
        __grains__.__getitem__.return_value = False
        __salt__.__getitem__.return_value = filters.is_raid
        with self.assertRaises(SaltInvocationError) as cm:
            partmod.prepare_partition_data(partitions)
        self.assertTrue("Units needs to be" in str(cm.exception))

    @patch("modules.partmod.__grains__")
    @patch("modules.partmod.__salt__")
    def test_prepare_partition_data_bios_no_gap(self, __salt__, __grains__):
        partitions = {
            "devices": {
                "/dev/sda": {
                    "partitions": [{"number": 1, "size": "rest", "type": "linux"}],
                },
            },
        }
        __grains__.__getitem__.return_value = False
        __salt__.__getitem__.return_value = filters.is_raid
        self.assertEqual(
            partmod.prepare_partition_data(partitions),
            {
                "/dev/sda": {
                    "label": "msdos",
                    "pmbr_boot": False,
                    "partitions": [
                        {
                            "part_id": "/dev/sda1",
                            "part_type": "primary",
                            "fs_type": "ext2",
                            "flags": None,
                            "start": "0%",
                            "end": "100%",
                        },
                    ],
                },
            },
        )

    @patch("modules.partmod.__grains__")
    @patch("modules.partmod.__salt__")
    def test_prepare_partition_data_bios_msdos_no_gap(self, __salt__, __grains__):
        partitions = {
            "config": {"label": "msdos"},
            "devices": {
                "/dev/sda": {
                    "partitions": [{"number": 1, "size": "rest", "type": "linux"}],
                },
            },
        }
        __grains__.__getitem__.return_value = False
        __salt__.__getitem__.return_value = filters.is_raid
        self.assertEqual(
            partmod.prepare_partition_data(partitions),
            {
                "/dev/sda": {
                    "label": "msdos",
                    "pmbr_boot": False,
                    "partitions": [
                        {
                            "part_id": "/dev/sda1",
                            "part_type": "primary",
                            "fs_type": "ext2",
                            "flags": None,
                            "start": "0%",
                            "end": "100%",
                        },
                    ],
                },
            },
        )

    @patch("modules.partmod.__grains__")
    @patch("modules.partmod.__salt__")
    def test_prepare_partition_data_bios_local_msdos_no_gap(self, __salt__, __grains__):
        partitions = {
            "devices": {
                "/dev/sda": {
                    "label": "msdos",
                    "partitions": [{"number": 1, "size": "rest", "type": "linux"}],
                },
            },
        }
        __grains__.__getitem__.return_value = False
        __salt__.__getitem__.return_value = filters.is_raid
        self.assertEqual(
            partmod.prepare_partition_data(partitions),
            {
                "/dev/sda": {
                    "label": "msdos",
                    "pmbr_boot": False,
                    "partitions": [
                        {
                            "part_id": "/dev/sda1",
                            "part_type": "primary",
                            "fs_type": "ext2",
                            "flags": None,
                            "start": "0%",
                            "end": "100%",
                        },
                    ],
                },
            },
        )

    @patch("modules.partmod.__grains__")
    @patch("modules.partmod.__salt__")
    def test_prepare_partition_data_bios_gpt_no_gap(self, __salt__, __grains__):
        partitions = {
            "config": {"label": "gpt"},
            "devices": {
                "/dev/sda": {
                    "partitions": [{"number": 1, "size": "rest", "type": "linux"}],
                },
            },
        }
        __grains__.__getitem__.return_value = False
        __salt__.__getitem__.return_value = filters.is_raid
        self.assertEqual(
            partmod.prepare_partition_data(partitions),
            {
                "/dev/sda": {
                    "label": "gpt",
                    "pmbr_boot": True,
                    "partitions": [
                        {
                            "part_id": "/dev/sda1",
                            "part_type": "primary",
                            "fs_type": "ext2",
                            "flags": None,
                            "start": "0%",
                            "end": "100%",
                        },
                    ],
                },
            },
        )

    @patch("modules.partmod.__grains__")
    @patch("modules.partmod.__salt__")
    def test_prepare_partition_data_bios_local_gpt_no_gap(self, __salt__, __grains__):
        partitions = {
            "devices": {
                "/dev/sda": {
                    "label": "gpt",
                    "partitions": [{"number": 1, "size": "rest", "type": "linux"}],
                },
            },
        }
        __grains__.__getitem__.return_value = False
        __salt__.__getitem__.return_value = filters.is_raid
        self.assertEqual(
            partmod.prepare_partition_data(partitions),
            {
                "/dev/sda": {
                    "label": "gpt",
                    "pmbr_boot": True,
                    "partitions": [
                        {
                            "part_id": "/dev/sda1",
                            "part_type": "primary",
                            "fs_type": "ext2",
                            "flags": None,
                            "start": "0%",
                            "end": "100%",
                        },
                    ],
                },
            },
        )

    @patch("modules.partmod.__grains__")
    @patch("modules.partmod.__salt__")
    def test_prepare_partition_data_gap(self, __salt__, __grains__):
        partitions = {
            "config": {"initial_gap": "1MB"},
            "devices": {
                "/dev/sda": {
                    "partitions": [{"number": 1, "size": "rest", "type": "linux"}],
                },
            },
        }
        __grains__.__getitem__.return_value = False
        __salt__.__getitem__.return_value = filters.is_raid
        self.assertEqual(
            partmod.prepare_partition_data(partitions),
            {
                "/dev/sda": {
                    "label": "msdos",
                    "pmbr_boot": False,
                    "partitions": [
                        {
                            "part_id": "/dev/sda1",
                            "part_type": "primary",
                            "fs_type": "ext2",
                            "flags": None,
                            "start": "1.0MB",
                            "end": "100%",
                        },
                    ],
                },
            },
        )

    @patch("modules.partmod.__grains__")
    @patch("modules.partmod.__salt__")
    def test_prepare_partition_data_local_gap(self, __salt__, __grains__):
        partitions = {
            "devices": {
                "/dev/sda": {
                    "initial_gap": "1MB",
                    "partitions": [{"number": 1, "size": "rest", "type": "linux"}],
                },
            },
        }
        __grains__.__getitem__.return_value = False
        __salt__.__getitem__.return_value = filters.is_raid
        self.assertEqual(
            partmod.prepare_partition_data(partitions),
            {
                "/dev/sda": {
                    "label": "msdos",
                    "pmbr_boot": False,
                    "partitions": [
                        {
                            "part_id": "/dev/sda1",
                            "part_type": "primary",
                            "fs_type": "ext2",
                            "flags": None,
                            "start": "1.0MB",
                            "end": "100%",
                        },
                    ],
                },
            },
        )

    @patch("modules.partmod.__grains__")
    @patch("modules.partmod.__salt__")
    def test_prepare_partition_data_fails_rest(self, __salt__, __grains__):
        partitions = {
            "devices": {
                "/dev/sda": {
                    "partitions": [
                        {"number": 1, "size": "rest", "type": "swap"},
                        {"number": 2, "size": "rest", "type": "linux"},
                    ],
                },
            },
        }
        __grains__.__getitem__.return_value = False
        __salt__.__getitem__.return_value = filters.is_raid
        with self.assertRaises(SaltInvocationError) as cm:
            partmod.prepare_partition_data(partitions)
        self.assertTrue("rest free space" in str(cm.exception))

    @patch("modules.partmod.__grains__")
    @patch("modules.partmod.__salt__")
    def test_prepare_partition_data_fails_units(self, __salt__, __grains__):
        partitions = {
            "devices": {
                "/dev/sda": {
                    "partitions": [
                        {"number": 1, "size": "1%", "type": "swap"},
                        {"number": 2, "size": "2MB", "type": "linux"},
                    ],
                },
            },
        }
        __grains__.__getitem__.return_value = False
        __salt__.__getitem__.return_value = filters.is_raid
        with self.assertRaises(SaltInvocationError) as cm:
            partmod.prepare_partition_data(partitions)
        self.assertTrue("Units needs to be" in str(cm.exception))

    @patch("modules.partmod.__grains__")
    @patch("modules.partmod.__salt__")
    def test_prepare_partition_data_efi_partitions(self, __salt__, __grains__):
        partitions = {
            "devices": {
                "/dev/sda": {
                    "label": "gpt",
                    "partitions": [
                        {"number": 1, "size": "500MB", "type": "efi"},
                        {"number": 2, "size": "10000MB", "type": "linux"},
                        {"number": 3, "size": "5000MB", "type": "swap"},
                    ],
                },
            },
        }
        __grains__.__getitem__.return_value = True
        __salt__.__getitem__.return_value = filters.is_raid
        self.assertEqual(
            partmod.prepare_partition_data(partitions),
            {
                "/dev/sda": {
                    "label": "gpt",
                    "pmbr_boot": False,
                    "partitions": [
                        {
                            "part_id": "/dev/sda1",
                            "part_type": "primary",
                            "fs_type": "fat16",
                            "flags": ["esp"],
                            "start": "0MB",
                            "end": "500.0MB",
                        },
                        {
                            "part_id": "/dev/sda2",
                            "part_type": "primary",
                            "fs_type": "ext2",
                            "flags": None,
                            "start": "500.0MB",
                            "end": "10500.0MB",
                        },
                        {
                            "part_id": "/dev/sda3",
                            "part_type": "primary",
                            "fs_type": "linux-swap",
                            "flags": None,
                            "start": "10500.0MB",
                            "end": "15500.0MB",
                        },
                    ],
                },
            },
        )

    @patch("modules.partmod.__grains__")
    @patch("modules.partmod.__salt__")
    def test_prepare_partition_data_bios_muti_label(self, __salt__, __grains__):
        partitions = {
            "config": {"label": "msdos"},
            "devices": {
                "/dev/sda": {
                    "partitions": [{"number": 1, "size": "rest", "type": "linux"}],
                },
                "/dev/sdb": {
                    "label": "gpt",
                    "partitions": [{"number": 1, "size": "rest", "type": "linux"}],
                },
            },
        }
        __grains__.__getitem__.return_value = False
        __salt__.__getitem__.return_value = filters.is_raid
        self.assertEqual(
            partmod.prepare_partition_data(partitions),
            {
                "/dev/sda": {
                    "label": "msdos",
                    "pmbr_boot": False,
                    "partitions": [
                        {
                            "part_id": "/dev/sda1",
                            "part_type": "primary",
                            "fs_type": "ext2",
                            "flags": None,
                            "start": "0%",
                            "end": "100%",
                        },
                    ],
                },
                "/dev/sdb": {
                    "label": "gpt",
                    "pmbr_boot": True,
                    "partitions": [
                        {
                            "part_id": "/dev/sdb1",
                            "part_type": "primary",
                            "fs_type": "ext2",
                            "flags": None,
                            "start": "0%",
                            "end": "100%",
                        },
                    ],
                },
            },
        )

    @patch("modules.partmod.__grains__")
    @patch("modules.partmod.__salt__")
    def test_prepare_partition_data_multi_gap(self, __salt__, __grains__):
        partitions = {
            "config": {"initial_gap": "1MB"},
            "devices": {
                "/dev/sda": {
                    "partitions": [{"number": 1, "size": "rest", "type": "linux"}],
                },
                "/dev/sdb": {
                    "initial_gap": "2MB",
                    "partitions": [{"number": 1, "size": "20MB", "type": "linux"}],
                },
            },
        }
        __grains__.__getitem__.return_value = False
        __salt__.__getitem__.return_value = filters.is_raid
        self.assertEqual(
            partmod.prepare_partition_data(partitions),
            {
                "/dev/sda": {
                    "label": "msdos",
                    "pmbr_boot": False,
                    "partitions": [
                        {
                            "part_id": "/dev/sda1",
                            "part_type": "primary",
                            "fs_type": "ext2",
                            "flags": None,
                            "start": "1.0MB",
                            "end": "100%",
                        },
                    ],
                },
                "/dev/sdb": {
                    "label": "msdos",
                    "pmbr_boot": False,
                    "partitions": [
                        {
                            "part_id": "/dev/sdb1",
                            "part_type": "primary",
                            "fs_type": "ext2",
                            "flags": None,
                            "start": "2.0MB",
                            "end": "22.0MB",
                        },
                    ],
                },
            },
        )

    @patch("modules.partmod.__grains__")
    @patch("modules.partmod.__salt__")
    def test_prepare_partition_data_lvm(self, __salt__, __grains__):
        partitions = {
            "devices": {
                "/dev/sda": {
                    "partitions": [{"number": 1, "size": "rest", "type": "lvm"}],
                },
                "/dev/sdb": {
                    "partitions": [{"number": 1, "size": "rest", "type": "lvm"}],
                },
                "/dev/sdc": {
                    "partitions": [{"number": 1, "size": "rest", "type": "linux"}],
                },
            },
        }
        __grains__.__getitem__.return_value = False
        __salt__.__getitem__.return_value = filters.is_raid
        self.assertEqual(
            partmod.prepare_partition_data(partitions),
            {
                "/dev/sda": {
                    "label": "msdos",
                    "pmbr_boot": False,
                    "partitions": [
                        {
                            "part_id": "/dev/sda1",
                            "part_type": "primary",
                            "fs_type": "ext2",
                            "flags": ["lvm"],
                            "start": "0%",
                            "end": "100%",
                        },
                    ],
                },
                "/dev/sdb": {
                    "label": "msdos",
                    "pmbr_boot": False,
                    "partitions": [
                        {
                            "part_id": "/dev/sdb1",
                            "part_type": "primary",
                            "fs_type": "ext2",
                            "flags": ["lvm"],
                            "start": "0%",
                            "end": "100%",
                        },
                    ],
                },
                "/dev/sdc": {
                    "label": "msdos",
                    "pmbr_boot": False,
                    "partitions": [
                        {
                            "part_id": "/dev/sdc1",
                            "part_type": "primary",
                            "fs_type": "ext2",
                            "flags": None,
                            "start": "0%",
                            "end": "100%",
                        },
                    ],
                },
            },
        )

    @patch("modules.partmod.__grains__")
    @patch("modules.partmod.__salt__")
    def test_prepare_partition_data_raid(self, __salt__, __grains__):
        partitions = {
            "devices": {
                "/dev/sda": {
                    "partitions": [{"number": 1, "size": "rest", "type": "raid"}],
                },
                "/dev/sdb": {
                    "partitions": [{"number": 1, "size": "rest", "type": "raid"}],
                },
                "/dev/sdc": {
                    "partitions": [{"number": 1, "size": "rest", "type": "linux"}],
                },
            },
        }
        __grains__.__getitem__.return_value = False
        __salt__.__getitem__.return_value = filters.is_raid
        self.assertEqual(
            partmod.prepare_partition_data(partitions),
            {
                "/dev/sda": {
                    "label": "msdos",
                    "pmbr_boot": False,
                    "partitions": [
                        {
                            "part_id": "/dev/sda1",
                            "part_type": "primary",
                            "fs_type": "ext2",
                            "flags": ["raid"],
                            "start": "0%",
                            "end": "100%",
                        },
                    ],
                },
                "/dev/sdb": {
                    "label": "msdos",
                    "pmbr_boot": False,
                    "partitions": [
                        {
                            "part_id": "/dev/sdb1",
                            "part_type": "primary",
                            "fs_type": "ext2",
                            "flags": ["raid"],
                            "start": "0%",
                            "end": "100%",
                        },
                    ],
                },
                "/dev/sdc": {
                    "label": "msdos",
                    "pmbr_boot": False,
                    "partitions": [
                        {
                            "part_id": "/dev/sdc1",
                            "part_type": "primary",
                            "fs_type": "ext2",
                            "flags": None,
                            "start": "0%",
                            "end": "100%",
                        },
                    ],
                },
            },
        )

    @patch("modules.partmod.__grains__")
    @patch("modules.partmod.__salt__")
    def test_prepare_partition_data_bios_gpt_post_raid(self, __salt__, __grains__):
        partitions = {
            "devices": {
                "/dev/md0": {
                    "label": "gpt",
                    "partitions": [
                        {"number": 1, "size": "8MB", "type": "boot"},
                        {"number": 2, "size": "rest", "type": "linux"},
                    ],
                },
            },
        }
        __grains__.__getitem__.return_value = False
        __salt__.__getitem__.return_value = filters.is_raid
        self.assertEqual(
            partmod.prepare_partition_data(partitions),
            {
                "/dev/md0": {
                    "label": "gpt",
                    "pmbr_boot": True,
                    "partitions": [
                        {
                            "part_id": "/dev/md0p1",
                            "part_type": "primary",
                            "fs_type": "ext2",
                            "flags": ["bios_grub"],
                            "start": "0MB",
                            "end": "8.0MB",
                        },
                        {
                            "part_id": "/dev/md0p2",
                            "part_type": "primary",
                            "fs_type": "ext2",
                            "flags": None,
                            "start": "8.0MB",
                            "end": "100%",
                        },
                    ],
                },
            },
        )


if __name__ == "__main__":
    unittest.main()
0707010000008C000081A40000000000000000000000016130D1CF000040D8000000000000000000000000000000000000004200000000yomi-0.0.1+git.1630589391.4557cfd/tests/test_state_suseconnect.py# -*- coding: utf-8 -*-
#
# Author: Alberto Planas <aplanas@suse.com>
#
# Copyright 2019 SUSE LLC.
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations

import unittest
from unittest.mock import patch, MagicMock

from states import suseconnect

from salt.exceptions import CommandExecutionError


class SUSEConnectTestCase(unittest.TestCase):
    def test__status_registered(self):
        salt_mock = {
            "suseconnect.status": MagicMock(
                return_value=[
                    {
                        "identifier": "SLES",
                        "version": "15.2",
                        "arch": "x86_64",
                        "status": "Registered",
                        "subscription_status": "ACTIVE",
                    },
                    {
                        "identifier": "sle-module-basesystem",
                        "version": "15.2",
                        "arch": "x86_64",
                        "status": "Registered",
                    },
                    {
                        "identifier": "sle-module-server-applications",
                        "version": "15.2",
                        "arch": "x86_64",
                        "status": "Registered",
                    },
                ]
            ),
        }
        with patch.dict(suseconnect.__salt__, salt_mock):
            self.assertEqual(
                suseconnect._status(None),
                (
                    [
                        "SLES/15.2/x86_64",
                        "sle-module-basesystem/15.2/x86_64",
                        "sle-module-server-applications/15.2/x86_64",
                    ],
                    ["SLES/15.2/x86_64"],
                ),
            )

    def test__status_unregistered(self):
        salt_mock = {
            "suseconnect.status": MagicMock(
                return_value=[
                    {
                        "identifier": "openSUSE",
                        "version": "20191014",
                        "arch": "x86_64",
                        "status": "Not Registered",
                    },
                ]
            ),
        }
        with patch.dict(suseconnect.__salt__, salt_mock):
            self.assertEqual(suseconnect._status(None), ([], []))

    @patch("states.suseconnect._status")
    def test__is_registered_default_product(self, _status):
        _status.return_value = (["SLES/15.2/x86_64"], ["SLES/15.2/x86_64"])
        self.assertTrue(suseconnect._is_registered(product=None, root=None))

    @patch("states.suseconnect._status")
    def test__is_registered_product(self, _status):
        _status.return_value = (["SLES/15.2/x86_64"], ["SLES/15.2/x86_64"])
        self.assertTrue(
            suseconnect._is_registered(product="SLES/15.2/x86_64", root=None)
        )

    @patch("states.suseconnect._status")
    def test__is_registered_default_product_unregistered(self, _status):
        _status.return_value = ([], [])
        self.assertFalse(suseconnect._is_registered(product=None, root=None))

    @patch("states.suseconnect._status")
    def test__is_registered_product_unregistered(self, _status):
        _status.return_value = ([], [])
        self.assertFalse(
            suseconnect._is_registered(product="SLES/15.2/x86_64", root=None)
        )

    @patch("states.suseconnect._status")
    def test__is_registered_other_product_unregistered(self, _status):
        _status.return_value = ([], ["SLES/15.2/x86_64"])
        self.assertFalse(
            suseconnect._is_registered(product="openSUSE/15.2/x86_64", root=None)
        )

    @patch("states.suseconnect._status")
    def test_registered_default_product(self, _status):
        _status.return_value = (["SLES/15.2/x86_64"], ["SLES/15.2/x86_64"])
        result = suseconnect.registered("my_setup", "regcode")
        self.assertEqual(
            result,
            {
                "name": "my_setup",
                "result": True,
                "changes": {},
                "comment": ["Product or module default already registered"],
            },
        )

    @patch("states.suseconnect._status")
    def test_registered_named_product(self, _status):
        _status.return_value = (["SLES/15.2/x86_64"], ["SLES/15.2/x86_64"])
        result = suseconnect.registered("SLES/15.2/x86_64", "regcode")
        self.assertEqual(
            result,
            {
                "name": "SLES/15.2/x86_64",
                "result": True,
                "changes": {},
                "comment": ["Product or module SLES/15.2/x86_64 already registered"],
            },
        )

    @patch("states.suseconnect._status")
    def test_registered_product(self, _status):
        _status.return_value = (["SLES/15.2/x86_64"], ["SLES/15.2/x86_64"])
        result = suseconnect.registered(
            "my_setup", "regcode", product="SLES/15.2/x86_64"
        )
        self.assertEqual(
            result,
            {
                "name": "my_setup",
                "result": True,
                "changes": {},
                "comment": ["Product or module SLES/15.2/x86_64 already registered"],
            },
        )

    @patch("states.suseconnect._status")
    def test_registered_test(self, _status):
        _status.return_value = ([], [])

        opts_mock = {"test": True}
        with patch.dict(suseconnect.__opts__, opts_mock):
            result = suseconnect.registered("my_setup", "regcode")
            self.assertEqual(
                result,
                {
                    "name": "my_setup",
                    "result": None,
                    "changes": {"default": True},
                    "comment": ["Product or module default would be registered"],
                },
            )

    @patch("states.suseconnect._status")
    def test_registered_fail_register(self, _status):
        _status.return_value = ([], [])

        opts_mock = {"test": False}
        salt_mock = {
            "suseconnect.register": MagicMock(
                side_effect=CommandExecutionError("some error")
            )
        }
        with patch.dict(suseconnect.__salt__, salt_mock), patch.dict(
            suseconnect.__opts__, opts_mock
        ):
            result = suseconnect.registered("my_setup", "regcode")
            self.assertEqual(
                result,
                {
                    "name": "my_setup",
                    "result": False,
                    "changes": {},
                    "comment": ["Error registering default: some error"],
                },
            )

    @patch("states.suseconnect._status")
    def test_registered_fail_register_end(self, _status):
        _status.return_value = ([], [])

        opts_mock = {"test": False}
        salt_mock = {"suseconnect.register": MagicMock()}
        with patch.dict(suseconnect.__salt__, salt_mock), patch.dict(
            suseconnect.__opts__, opts_mock
        ):
            result = suseconnect.registered("my_setup", "regcode")
            self.assertEqual(
                result,
                {
                    "name": "my_setup",
                    "result": False,
                    "changes": {"default": True},
                    "comment": ["Product or module default failed to register"],
                },
            )

    @patch("states.suseconnect._status")
    def test_registered_succeed_register(self, _status):
        _status.side_effect = [
            ([], []),
            (["SLES/15.2/x86_64"], ["SLES/15.2/x86_64"]),
        ]

        opts_mock = {"test": False}
        salt_mock = {"suseconnect.register": MagicMock()}
        with patch.dict(suseconnect.__salt__, salt_mock), patch.dict(
            suseconnect.__opts__, opts_mock
        ):
            result = suseconnect.registered("my_setup", "regcode")
            self.assertEqual(
                result,
                {
                    "name": "my_setup",
                    "result": True,
                    "changes": {"default": True},
                    "comment": ["Product or module default registered"],
                },
            )
            salt_mock["suseconnect.register"].assert_called_with(
                "regcode", product=None, email=None, url=None, root=None
            )

    @patch("states.suseconnect._status")
    def test_registered_succeed_register_params(self, _status):
        _status.side_effect = [
            ([], []),
            (["SLES/15.2/x86_64"], ["SLES/15.2/x86_64"]),
        ]

        opts_mock = {"test": False}
        salt_mock = {"suseconnect.register": MagicMock()}
        with patch.dict(suseconnect.__salt__, salt_mock), patch.dict(
            suseconnect.__opts__, opts_mock
        ):
            result = suseconnect.registered(
                "my_setup",
                "regcode",
                product="SLES/15.2/x86_64",
                email="user@example.com",
                url=None,
                root=None,
            )
            self.assertEqual(
                result,
                {
                    "name": "my_setup",
                    "result": True,
                    "changes": {"SLES/15.2/x86_64": True},
                    "comment": ["Product or module SLES/15.2/x86_64 registered"],
                },
            )
            salt_mock["suseconnect.register"].assert_called_with(
                "regcode",
                product="SLES/15.2/x86_64",
                email="user@example.com",
                url=None,
                root=None,
            )

    @patch("states.suseconnect._status")
    def test_deregistered_default_product(self, _status):
        _status.return_value = ([], [])
        result = suseconnect.deregistered("my_setup")
        self.assertEqual(
            result,
            {
                "name": "my_setup",
                "result": True,
                "changes": {},
                "comment": ["Product or module default already deregistered"],
            },
        )

    @patch("states.suseconnect._status")
    def test_deregistered_named_product(self, _status):
        _status.return_value = ([], [])
        result = suseconnect.deregistered("SLES/15.2/x86_64")
        self.assertEqual(
            result,
            {
                "name": "SLES/15.2/x86_64",
                "result": True,
                "changes": {},
                "comment": [
                    "Product or module SLES/15.2/x86_64 already deregistered"
                ],
            },
        )

    @patch("states.suseconnect._status")
    def test_deregistered_other_named_product(self, _status):
        _status.return_value = (["SLES/15.2/x86_64"], ["SLES/15.2/x86_64"])
        result = suseconnect.deregistered("openSUSE/15.2/x86_64")
        self.assertEqual(
            result,
            {
                "name": "openSUSE/15.2/x86_64",
                "result": True,
                "changes": {},
                "comment": [
                    "Product or module openSUSE/15.2/x86_64 already deregistered"
                ],
            },
        )

    @patch("states.suseconnect._status")
    def test_deregistered_product(self, _status):
        _status.return_value = ([], [])
        result = suseconnect.deregistered("my_setup", product="SLES/15.2/x86_64")
        self.assertEqual(
            result,
            {
                "name": "my_setup",
                "result": True,
                "changes": {},
                "comment": [
                    "Product or module SLES/15.2/x86_64 already deregistered"
                ],
            },
        )

    @patch("states.suseconnect._status")
    def test_deregistered_test(self, _status):
        _status.return_value = (["SLES/15.2/x86_64"], ["SLES/15.2/x86_64"])

        opts_mock = {"test": True}
        with patch.dict(suseconnect.__opts__, opts_mock):
            result = suseconnect.deregistered("my_setup")
            self.assertEqual(
                result,
                {
                    "name": "my_setup",
                    "result": None,
                    "changes": {"default": True},
                    "comment": ["Product or module default would be deregistered"],
                },
            )

    @patch("states.suseconnect._status")
    def test_deregistered_fail_deregister(self, _status):
        _status.return_value = (["SLES/15.2/x86_64"], ["SLES/15.2/x86_64"])

        opts_mock = {"test": False}
        salt_mock = {
            "suseconnect.deregister": MagicMock(
                side_effect=CommandExecutionError("some error")
            )
        }
        with patch.dict(suseconnect.__salt__, salt_mock), patch.dict(
            suseconnect.__opts__, opts_mock
        ):
            result = suseconnect.deregistered("my_setup")
            self.assertEqual(
                result,
                {
                    "name": "my_setup",
                    "result": False,
                    "changes": {},
                    "comment": ["Error deregistering default: some error"],
                },
            )

    @patch("states.suseconnect._status")
    def test_deregistered_fail_deregister_end(self, _status):
        _status.return_value = (["SLES/15.2/x86_64"], ["SLES/15.2/x86_64"])

        opts_mock = {"test": False}
        salt_mock = {"suseconnect.deregister": MagicMock()}
        with patch.dict(suseconnect.__salt__, salt_mock), patch.dict(
            suseconnect.__opts__, opts_mock
        ):
            result = suseconnect.deregistered("my_setup")
            self.assertEqual(
                result,
                {
                    "name": "my_setup",
                    "result": False,
                    "changes": {"default": True},
                    "comment": ["Product or module default failed to deregister"],
                },
            )

    @patch("states.suseconnect._status")
    def test_deregistered_succeed_deregister(self, _status):
        _status.side_effect = [
            (["SLES/15.2/x86_64"], ["SLES/15.2/x86_64"]),
            ([], []),
        ]

        opts_mock = {"test": False}
        salt_mock = {"suseconnect.deregister": MagicMock()}
        with patch.dict(suseconnect.__salt__, salt_mock), patch.dict(
            suseconnect.__opts__, opts_mock
        ):
            result = suseconnect.deregistered("my_setup")
            self.assertEqual(
                result,
                {
                    "name": "my_setup",
                    "result": True,
                    "changes": {"default": True},
                    "comment": ["Product or module default deregistered"],
                },
            )
            salt_mock["suseconnect.deregister"].assert_called_with(
                product=None, url=None, root=None
            )

    @patch("states.suseconnect._status")
    def test_deregistered_succeed_register_params(self, _status):
        _status.side_effect = [
            (["SLES/15.2/x86_64"], ["SLES/15.2/x86_64"]),
            ([], []),
        ]

        opts_mock = {"test": False}
        salt_mock = {"suseconnect.deregister": MagicMock()}
        with patch.dict(suseconnect.__salt__, salt_mock), patch.dict(
            suseconnect.__opts__, opts_mock
        ):
            result = suseconnect.deregistered(
                "my_setup", product="SLES/15.2/x86_64", url=None, root=None
            )
            self.assertEqual(
                result,
                {
                    "name": "my_setup",
                    "result": True,
                    "changes": {"SLES/15.2/x86_64": True},
                    "comment": ["Product or module SLES/15.2/x86_64 deregistered"],
                },
            )
            salt_mock["suseconnect.deregister"].assert_called_with(
                product="SLES/15.2/x86_64", url=None, root=None
            )
0707010000008D000081A40000000000000000000000016130D1CF00003742000000000000000000000000000000000000003C00000000yomi-0.0.1+git.1630589391.4557cfd/tests/test_suseconnect.py# -*- coding: utf-8 -*-
#
# Author: Alberto Planas <aplanas@suse.com>
#
# Copyright 2019 SUSE LLC.
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations

import os.path
import unittest
from unittest.mock import patch, MagicMock

from salt.exceptions import CommandExecutionError

from modules import suseconnect


class SUSEConnectTestCase(unittest.TestCase):
    """
    Test cases for salt.modules.suseconnect
    """

    def test_register(self):
        """
        Test suseconnect.register without parameters
        """
        result = {"retcode": 0, "stdout": "Successfully registered system"}
        salt_mock = {
            "cmd.run_all": MagicMock(return_value=result),
        }
        with patch.dict(suseconnect.__salt__, salt_mock):
            self.assertEqual(
                suseconnect.register("regcode"), "Successfully registered system"
            )
            salt_mock["cmd.run_all"].assert_called_with(
                ["SUSEConnect", "--regcode", "regcode"]
            )

    def test_register_params(self):
        """
        Test suseconnect.register with parameters
        """
        result = {"retcode": 0, "stdout": "Successfully registered system"}
        salt_mock = {
            "cmd.run_all": MagicMock(return_value=result),
        }
        with patch.dict(suseconnect.__salt__, salt_mock):
            self.assertEqual(
                suseconnect.register(
                    "regcode",
                    product="sle-ha/15.2/x86_64",
                    email="user@example.com",
                    url="https://scc.suse.com",
                    root="/mnt",
                ),
                "Successfully registered system",
            )
            salt_mock["cmd.run_all"].assert_called_with(
                [
                    "SUSEConnect",
                    "--regcode",
                    "regcode",
                    "--product",
                    "sle-ha/15.2/x86_64",
                    "--email",
                    "user@example.com",
                    "--url",
                    "https://scc.suse.com",
                    "--root",
                    "/mnt",
                ]
            )

    def test_register_error(self):
        """
        Test suseconnect.register error
        """
        result = {"retcode": 1, "stdout": "Unknown Registration Code", "stderr": ""}
        salt_mock = {
            "cmd.run_all": MagicMock(return_value=result),
        }
        with patch.dict(suseconnect.__salt__, salt_mock):
            with self.assertRaises(CommandExecutionError):
                suseconnect.register("regcode")

    def test_deregister(self):
        """
        Test suseconnect.deregister without parameters
        """
        result = {"retcode": 0, "stdout": "Successfully deregistered system"}
        salt_mock = {
            "cmd.run_all": MagicMock(return_value=result),
        }
        with patch.dict(suseconnect.__salt__, salt_mock):
            self.assertEqual(
                suseconnect.deregister(), "Successfully deregistered system"
            )
            salt_mock["cmd.run_all"].assert_called_with(
                ["SUSEConnect", "--de-register"]
            )

    def test_deregister_params(self):
        """
        Test suseconnect.deregister with parameters
        """
        result = {"retcode": 0, "stdout": "Successfully deregistered system"}
        salt_mock = {
            "cmd.run_all": MagicMock(return_value=result),
        }
        with patch.dict(suseconnect.__salt__, salt_mock):
            self.assertEqual(
                suseconnect.deregister(
                    product="sle-ha/15.2/x86_64",
                    url="https://scc.suse.com",
                    root="/mnt",
                ),
                "Successfully deregistered system",
            )
            salt_mock["cmd.run_all"].assert_called_with(
                [
                    "SUSEConnect",
                    "--de-register",
                    "--product",
                    "sle-ha/15.2/x86_64",
                    "--url",
                    "https://scc.suse.com",
                    "--root",
                    "/mnt",
                ]
            )

    def test_deregister_error(self):
        """
        Test suseconnect.deregister error
        """
        result = {"retcode": 1, "stdout": "Unknown Product", "stderr": ""}
        salt_mock = {
            "cmd.run_all": MagicMock(return_value=result),
        }
        with patch.dict(suseconnect.__salt__, salt_mock):
            with self.assertRaises(CommandExecutionError):
                suseconnect.deregister()

    def test_status(self):
        """
        Test suseconnect.status without parameters
        """
        result = {
            "retcode": 0,
            "stdout": '[{"identifier":"SLES","version":"15.2",'
            '"arch":"x86_64","status":"No Registered"}]',
        }
        salt_mock = {
            "cmd.run_all": MagicMock(return_value=result),
        }
        with patch.dict(suseconnect.__salt__, salt_mock):
            self.assertEqual(
                suseconnect.status(),
                [
                    {
                        "identifier": "SLES",
                        "version": "15.2",
                        "arch": "x86_64",
                        "status": "No Registered",
                    }
                ],
            )
            salt_mock["cmd.run_all"].assert_called_with(["SUSEConnect", "--status"])

    def test_status_params(self):
        """
        Test suseconnect.status with parameters
        """
        result = {
            "retcode": 0,
            "stdout": '[{"identifier":"SLES","version":"15.2",'
            '"arch":"x86_64","status":"No Registered"}]',
        }
        salt_mock = {
            "cmd.run_all": MagicMock(return_value=result),
        }
        with patch.dict(suseconnect.__salt__, salt_mock):
            self.assertEqual(
                suseconnect.status(root="/mnt"),
                [
                    {
                        "identifier": "SLES",
                        "version": "15.2",
                        "arch": "x86_64",
                        "status": "No Registered",
                    }
                ],
            )
            salt_mock["cmd.run_all"].assert_called_with(
                ["SUSEConnect", "--status", "--root", "/mnt"]
            )

    def test_status_error(self):
        """
        Test suseconnect.status error
        """
        result = {"retcode": 1, "stdout": "Some Error", "stderr": ""}
        salt_mock = {
            "cmd.run_all": MagicMock(return_value=result),
        }
        with patch.dict(suseconnect.__salt__, salt_mock):
            with self.assertRaises(CommandExecutionError):
                suseconnect.status()

    def test__parse_list_extensions(self):
        """
        Test suseconnect.status error
        """
        fixture = os.path.join(
            os.path.dirname(__file__), "fixtures/list_extensions.txt"
        )
        with open(fixture) as f:
            self.assertEqual(
                suseconnect._parse_list_extensions(f.read()),
                [
                    "sle-module-basesystem/15.2/x86_64",
                    "sle-module-containers/15.2/x86_64",
                    "sle-module-desktop-applications/15.2/x86_64",
                    "sle-module-development-tools/15.2/x86_64",
                    "sle-we/15.2/x86_64",
                    "sle-module-python2/15.2/x86_64",
                    "sle-module-live-patching/15.2/x86_64",
                    "PackageHub/15.2/x86_64",
                    "sle-module-server-applications/15.2/x86_64",
                    "sle-module-legacy/15.2/x86_64",
                    "sle-module-public-cloud/15.2/x86_64",
                    "sle-ha/15.2/x86_64",
                    "sle-module-web-scripting/15.2/x86_64",
                    "sle-module-transactional-server/15.2/x86_64",
                ],
            )

    def test_list_extensions(self):
        """
        Test suseconnect.list_extensions without parameters
        """
        result = {
            "retcode": 0,
            "stdout": "Activate with: SUSEConnect -p sle-ha/15.2/x86_64",
        }
        salt_mock = {
            "cmd.run_all": MagicMock(return_value=result),
        }
        with patch.dict(suseconnect.__salt__, salt_mock):
            self.assertEqual(suseconnect.list_extensions(), ["sle-ha/15.2/x86_64"])
            salt_mock["cmd.run_all"].assert_called_with(
                ["SUSEConnect", "--list-extensions"]
            )

    def test_list_extensions_params(self):
        """
        Test suseconnect.list_extensions with parameters
        """
        result = {
            "retcode": 0,
            "stdout": "Activate with: SUSEConnect -p sle-ha/15.2/x86_64",
        }
        salt_mock = {
            "cmd.run_all": MagicMock(return_value=result),
        }
        with patch.dict(suseconnect.__salt__, salt_mock):
            self.assertEqual(
                suseconnect.list_extensions(url="https://scc.suse.com", root="/mnt"),
                ["sle-ha/15.2/x86_64"],
            )
            salt_mock["cmd.run_all"].assert_called_with(
                [
                    "SUSEConnect",
                    "--list-extensions",
                    "--url",
                    "https://scc.suse.com",
                    "--root",
                    "/mnt",
                ]
            )

    def test_list_extensions_error(self):
        """
        Test suseconnect.list_extensions error
        """
        result = {
            "retcode": 1,
            "stdout": "To list extensions, you must first register " "the base product",
            "stderr": "",
        }
        salt_mock = {
            "cmd.run_all": MagicMock(return_value=result),
        }
        with patch.dict(suseconnect.__salt__, salt_mock):
            with self.assertRaises(CommandExecutionError):
                suseconnect.list_extensions()

    def test_cleanup(self):
        """
        Test suseconnect.cleanup without parameters
        """
        result = {"retcode": 0, "stdout": "Service has been removed"}
        salt_mock = {
            "cmd.run_all": MagicMock(return_value=result),
        }
        with patch.dict(suseconnect.__salt__, salt_mock):
            self.assertEqual(suseconnect.cleanup(), "Service has been removed")
            salt_mock["cmd.run_all"].assert_called_with(["SUSEConnect", "--cleanup"])

    def test_cleanup_params(self):
        """
        Test suseconnect.cleanup with parameters
        """
        result = {"retcode": 0, "stdout": "Service has been removed"}
        salt_mock = {
            "cmd.run_all": MagicMock(return_value=result),
        }
        with patch.dict(suseconnect.__salt__, salt_mock):
            self.assertEqual(
                suseconnect.cleanup(root="/mnt"), "Service has been removed"
            )
            salt_mock["cmd.run_all"].assert_called_with(
                ["SUSEConnect", "--cleanup", "--root", "/mnt"]
            )

    def test_cleanup_error(self):
        """
        Test suseconnect.cleanup error
        """
        result = {"retcode": 1, "stdout": "some error", "stderr": ""}
        salt_mock = {
            "cmd.run_all": MagicMock(return_value=result),
        }
        with patch.dict(suseconnect.__salt__, salt_mock):
            with self.assertRaises(CommandExecutionError):
                suseconnect.cleanup()

    def test_rollback(self):
        """
        Test suseconnect.rollback without parameters
        """
        result = {
            "retcode": 0,
            "stdout": "Starting to sync system product activations",
        }
        salt_mock = {
            "cmd.run_all": MagicMock(return_value=result),
        }
        with patch.dict(suseconnect.__salt__, salt_mock):
            self.assertEqual(
                suseconnect.rollback(), "Starting to sync system product activations"
            )
            salt_mock["cmd.run_all"].assert_called_with(["SUSEConnect", "--rollback"])

    def test_rollback_params(self):
        """
        Test suseconnect.rollback with parameters
        """
        result = {
            "retcode": 0,
            "stdout": "Starting to sync system product activations",
        }
        salt_mock = {
            "cmd.run_all": MagicMock(return_value=result),
        }
        with patch.dict(suseconnect.__salt__, salt_mock):
            self.assertEqual(
                suseconnect.rollback(url="https://scc.suse.com", root="/mnt"),
                "Starting to sync system product activations",
            )
            salt_mock["cmd.run_all"].assert_called_with(
                [
                    "SUSEConnect",
                    "--rollback",
                    "--url",
                    "https://scc.suse.com",
                    "--root",
                    "/mnt",
                ]
            )

    def test_rollback_error(self):
        """
        Test suseconnect.rollback error
        """
        result = {"retcode": 1, "stdout": "some error", "stderr": ""}
        salt_mock = {
            "cmd.run_all": MagicMock(return_value=result),
        }
        with patch.dict(suseconnect.__salt__, salt_mock):
            with self.assertRaises(CommandExecutionError):
                suseconnect.rollback()
0707010000008E000081ED0000000000000000000000016130D1CF00003A07000000000000000000000000000000000000002F00000000yomi-0.0.1+git.1630589391.4557cfd/yomi-monitor#!/usr/bin/python3

# -*- coding: utf-8 -*-
#
# Author: Alberto Planas <aplanas@suse.com>
#
# Copyright 2019 SUSE LLC.
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.

import argparse
import getpass
import json
import logging
import os
from pathlib import Path
import pprint
import ssl
import sys
import time
import urllib.error
import urllib.parse
import urllib.request

LOG = logging.getLogger(__name__)
TOKEN_FILE = "~/.salt-api-token"

# ANSI color codes
BLACK = "\033[1;30m"
RED = "\033[1;31m"
GREEN = "\033[0;32m"
YELLOW = "\033[0;33m"
BLUE = "\033[1;34m"
MAGENTA = "\033[1;35m"
CYAN = "\033[1;36m"
WHITE = "\033[1;37m"
RESET = "\033[0;0m"


class SaltAPI:
    def __init__(
        self,
        url,
        username,
        password,
        eauth,
        insecure,
        token_file=TOKEN_FILE,
        debug=False,
    ):
        self.url = url
        self.username = username
        self.password = password
        self.eauth = eauth
        self.insecure = insecure
        self.token_file = token_file
        self.debug = debug

        is_https = urllib.parse.urlparse(url).scheme == "https"
        if debug or (is_https and insecure):
            if insecure:
                context = ssl._create_unverified_context()
                handler = urllib.request.HTTPSHandler(
                    context=context, debuglevel=int(debug)
                )
            else:
                handler = urllib.request.HTTPHandler(debuglevel=int(debug))
            opener = urllib.request.build_opener(handler)
            urllib.request.install_opener(opener)

        self.token = None
        self.expire = 0.0

    def login(self, remove=False):
        """Login into the Salt API service."""
        if remove:
            self._drop_token()
        self.token, self.expire = self._read_token()
        if self.expire < time.time() + 30:
            self.token, self.expire = self._login()
            self._write_token()

    def logout(self):
        """Logout from the Salt API service."""
        self._drop_token()
        self._post("/logout")

    def events(self):
        """SSE event stream from Salt API service."""
        for line in api._req_sse("/events", None, "GET"):
            line = line.decode("utf-8").strip()
            if not line or line.startswith((":", "retry:")):
                continue
            key, value = line.split(":", 1)
            if key == "tag":
                tag = value.strip()
                continue
            if key == "data":
                data = json.loads(value)
                yield (tag, data)

    def minions(self, mid=None):
        """Return the list of minions."""
        if mid:
            action = "/minions/{}".format(mid)
        else:
            action = "/minions"
        return self._get(action)["return"][0]

    def run_job(self, tgt, fun, **kwargs):
        """Start an execution command and return jid."""
        data = {
            "tgt": tgt,
            "fun": fun,
        }
        data.update(kwargs)
        return self._post("/minions", data)["return"][0]

    def jobs(self, jid=None):
        """Return the list of jobs."""
        if jid:
            action = "/jobs/{}".format(jid)
        else:
            action = "/jobs"
        return self._get(action)["return"][0]

    def stats(self):
        """Return a dump of statistics."""
        return self._get("/stats")["return"][0]

    def _login(self):
        """Login into the Salt API service."""
        data = {
            "username": self.username,
            "password": self.password,
            "eauth": self.eauth,
        }
        result = self._post("/login", data)
        return result["return"][0]["token"], result["return"][0]["expire"]

    def _get(self, action, data=None):
        return self._req(action, data, "GET")

    def _post(self, action, data=None):
        return self._req(action, data, "POST")

    def _req(self, action, data, method):
        """HTTP GET / POST to Salt API."""
        headers = {
            "User-Agent": "salt-autoinstaller monitor",
            "Accept": "application/json",
            "Content-Type": "application/json",
            "X-Requested-With": "XMLHttpRequest",
        }
        if self.token:
            headers["X-Auth-Token"] = self.token

        url = urllib.parse.urljoin(self.url, action)
        if method == "GET":
            data = urllib.parse.urlencode(data).encode() if data else None
            if data:
                url = "{}?{}".format(url, data)
            data = None
        elif method == "POST":
            data = json.dumps(data).encode() if data else {}
        else:
            raise ValueError("Method {} not valid".format(method))

        result = {}
        try:
            request = urllib.request.Request(url, data, headers)
            with urllib.request.urlopen(request) as response:
                result = json.loads(response.read().decode("utf-8"))
        except (urllib.error.HTTPError, urllib.error.URLError) as exc:
            LOG.debug("Error with request", exc_info=True)
            status = getattr(exc, "code", None)

            if status == 401:
                print("Authentication denied")

            if status == 500:
                print("Server error.")
            exit(-1)
        return result

    def _req_sse(self, action, data, method):
        """HTTP SSE GET / POST to Salt API."""
        headers = {
            "User-Agent": "salt-autoinstaller monitor",
            "Accept": "text/event-stream",
            "Content-Type": "application/json",
            "Connection": "Keep-Alive",
            "X-Requested-With": "XMLHttpRequest",
        }
        if self.token:
            headers["X-Auth-Token"] = self.token

        url = urllib.parse.urljoin(self.url, action)
        if method == "GET":
            data = urllib.parse.urlencode(data).encode() if data else None
            if data:
                url = "{}?{}".format(url, data)
            data = None
        elif method == "POST":
            data = json.dumps(data).encode() if data else {}
        else:
            raise ValueError("Method {} not valid".format(method))

        try:
            request = urllib.request.Request(url, data, headers)
            with urllib.request.urlopen(request) as response:
                yield from response
        except (urllib.error.HTTPError, urllib.error.URLError) as e:
            LOG.debug("Error with request", exc_info=True)
            status = getattr(e, "code", None)

            if status == 401:
                print("Authentication denied")

            if status == 500:
                print("Server error.")
            exit(-1)

    def _read_token(self):
        """Return the token and expire time from the token file."""
        token, expire = None, 0.0

        if self.token_file:
            token_path = Path(self.token_file).expanduser()
            if token_path.is_file():
                token, expire = token_path.read_text().split()
                try:
                    expire = float(expire)
                except ValueError:
                    expire = 0.0

        return token, expire

    def _write_token(self):
        """Save the token and expire time into the token file."""
        self._drop_token()
        if self.token_file:
            token_path = Path(self.token_file).expanduser()
            token_path.touch(mode=0o600)
            token_path.write_text("{} {}".format(self.token, self.expire))

    def _drop_token(self):
        """Remove the token file if present."""
        if self.token_file:
            token_path = Path(self.token_file).expanduser()
            if token_path.is_file():
                token_path.unlink()


def print_minions(minions):
    """Print a list of minions."""
    print("Registered minions:")
    for minion in minions:
        print("- {}".format(minion))


def print_minion(minion):
    """Print detailed information of a minion."""
    pprint.pprint(minion)


def print_jobs(jobs):
    """Print a list of jobs."""
    print("Registered jobs:")
    for job, info in jobs.items():
        print("- {}".format(job))
        pprint.pprint(info)


def print_job(job):
    """Print detailed information of a job."""
    pprint.pprint(job)


def print_raw_event(tag, data):
    """Print raw event without format."""
    print("- {}".format(tag))
    pprint.pprint(data)


def print_yomi_event(tag, data):
    """Print a Yomi event with format."""
    if tag.startswith("yomi/"):
        id_ = data["data"]["id"]
        stamp = data["data"]["_stamp"]

        # Decide the color to represent the node
        if id_ in print_yomi_event.nodes:
            color = print_yomi_event.nodes[id_]
        else:
            color = print_yomi_event.colors.pop()
            print_yomi_event.colors.insert(0, color)
            print_yomi_event.nodes[id_] = color

        tag = tag.split("/", 1)[1]
        tag, section = tag.rsplit("/", 1)
        if section == "enter":
            print(
                "[{}{}{}] {} -> [{}STARTING{}] {}".format(
                    color, id_, RESET, stamp, BLUE, RESET, tag
                )
            )
        elif section == "success":
            print(
                "[{}{}{}] {} -> [{}SUCCESS{}]  {}".format(
                    color, id_, RESET, stamp, GREEN, RESET, tag
                )
            )
        elif section == "fail":
            print(
                "[{}{}{}] {} -> [{}FAIL{}]     {}".format(
                    color, id_, RESET, stamp, RED, RESET, tag
                )
            )


# Static-a-like variables to track the rotating color
print_yomi_event.nodes = {}
print_yomi_event.colors = [RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN]

if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="salt-autoinstaller monitor tool via salt-api."
    )
    parser.add_argument(
        "-u",
        "--saltapi-url",
        default=os.environ.get("SALTAPI_URL", "https://localhost:8000"),
        help="Specify the host url. Overwrite SALTAPI_URL.",
    )
    parser.add_argument(
        "-a",
        "--auth",
        "--eauth",
        "--extended-auth",
        default=os.environ.get("SALTAPI_EAUTH", "pam"),
        help="Specify the external_auth backend to "
        "authenticate against and interactively prompt "
        "for credentials. Overwrite SALTAPI_EAUTH.",
    )
    parser.add_argument(
        "-n",
        "--username",
        default=os.environ.get("SALTAPI_USER"),
        help="Optional, defaults to user name. will "
        "be prompt if empty unless --non-interactive. "
        "Overwrite SALTAPI_USER.",
    )
    parser.add_argument(
        "-p",
        "--password",
        default=os.environ.get("SALTAPI_PASS"),
        help="Optional, but will be prompted unless "
        "--non-interactive. Overwrite SALTAPI_PASS.",
    )
    parser.add_argument(
        "--non-interactive",
        action="store_true",
        default=False,
        help="Optional, fail rather than waiting for input.",
    )
    parser.add_argument(
        "-r",
        "--remove",
        action="store_true",
        default=False,
        help="Remove the token cached in the system.",
    )
    parser.add_argument(
        "-i",
        "--insecure",
        action="store_true",
        default=False,
        help="Ignore any SSL certificate that may be "
        "encountered. Note that it is recommended to resolve "
        "certificate errors for production.",
    )
    parser.add_argument(
        "-H",
        "--debug-http",
        action="store_true",
        default=False,
        help=("Output the HTTP request/response headers on " "stderr."),
    )
    parser.add_argument(
        "-m",
        "--minions",
        action="store_true",
        default=False,
        help="List available minions.",
    )
    parser.add_argument(
        "--show-minion",
        metavar="MID",
        default=None,
        help="Show the details of a minion.",
    )
    parser.add_argument(
        "-j", "--jobs", action="store_true", default=False, help="List available jobs."
    )
    parser.add_argument(
        "--show-job", metavar="JID", default=None, help="Show the details of a job."
    )
    parser.add_argument(
        "-e",
        "--events",
        action="store_true",
        default=False,
        help="Show events from salt-master.",
    )
    parser.add_argument(
        "-y",
        "--yomi-events",
        action="store_true",
        default=False,
        help="Show only Yomi events from salt-master.",
    )
    parser.add_argument(
        "target", nargs="?", help="Minion ID where to launch the installer."
    )
    args = parser.parse_args()

    if not args.saltapi_url:
        print("Please, provide a valid Salt API URL", file=sys.stderr)
        exit(-1)

    if args.non_interactive:
        if args.username is None:
            print("Please, provide a valid user name", file=sys.stderr)
            exit(-1)

        if args.password is None:
            print("Please, provide a valid password", file=sys.stderr)
            exit(-1)
    else:
        if args.username is None:
            args.username = input("Username: ")
        if args.password is None:
            args.password = getpass.getpass(prompt="Password: ")

    api = SaltAPI(
        url=args.saltapi_url,
        username=args.username,
        password=args.password,
        eauth=args.auth,
        insecure=args.insecure,
        debug=args.debug_http,
    )

    api.login(args.remove)

    if args.minions:
        print_minions(api.minions())

    if args.show_minion:
        print_minion(api.minions(args.show_minion))

    if args.jobs:
        print_jobs(api.jobs())

    if args.show_job:
        print_job(api.jobs(args.show_job))

    if args.target:
        print_job(api.run_job(args.target, "state.highstate"))

    if args.events or args.yomi_events:
        for tag, data in api.events():
            if args.yomi_events:
                print_yomi_event(tag, data)
            else:
                print_raw_event(tag, data)
07070100000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000B00000000TRAILER!!!1119 blocks
openSUSE Build Service is sponsored by