File product-composer.obscpio of Package product-composer

File product-composer/.git

gitdir: ../.git/modules/product-composer

File product-composer/.github/workflows/tests.yaml

name: 'tests'

on:
  pull_request:
    branches: ['main']

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

jobs:
  unit:
    name: "basic"
    runs-on: 'ubuntu-latest'
    strategy:
      fail-fast: false
      matrix:
        container:
          - 'registry.opensuse.org/opensuse/tumbleweed'

    container:
      image: ${{ matrix.container }}

    steps:
      - name: 'Install packages'
        run: |
            zypper -n modifyrepo --disable repo-openh264 || :
            zypper -n --gpg-auto-import-keys refresh
            zypper -n install python3 python3-pip python3-pydantic python3-pytest python3-rpm python3-setuptools python3-solv python3-PyYAML python3-schema

      - uses: actions/checkout@v4

      - name: 'Run basic example verification'
        run: |
          pip3 config set global.break-system-packages 1
          pip3 install --no-dependencies -e .
          productcomposer verify examples/ftp.productcompose
#          pytest tests

File product-composer/.gitignore

.venv
examples/repos
src/productcomposer.egg-info
src/productcomposer/__pycache__
src/productcomposer/core/__pycache__
src/productcomposer/api/__pycache__
output

File product-composer/Makefile

# Project management tasks.

VENV = .venv
PYTHON = . $(VENV)/bin/activate && python
PYTEST = $(PYTHON) -m pytest


$(VENV)/.make-update: pyproject.toml
	python3 -m venv $(VENV)
	$(PYTHON) -m pip install -U pip  # needs to be updated first
	$(PYTHON) -m pip install -e ".[dev]"
	touch $@


.PHONY: dev
dev: $(VENV)/.make-update


.PHONY: docs
docs: dev
	asciidoc docs/productcomposer.adoc


.PHONY: test-unit
test-unit: dev
	$(PYTEST) tests/unit/


.PHONY: check
check: test-unit

File product-composer/README.rst

product-composer
================

This is the successor of product-builder: a tool to create rpm product
repositories inside of the Open Build Service, based on a larger pool
of packages.

It is used by any SLFO-based product during product creation and
also during maintenance.

Currently it supports:
 - processing based on a list of rpm package names
 - optional filters for architectures, versions and flavors
 - taking either just a single rpm of a given name or all of them
 - post processing of updateinfo data
 - post processing like rpm meta data generation
 - modifying pre-generated installer images to put a package set for
   off-line installation on them

Development
===========

Create the development environment:

.. code-block:: console

    $ python -m venv .venv
    $ .venv/bin/python -m pip install -e ".[dev]"


Run tests:

.. code-block:: console

    $ .venv/bin/python -m pytest -v tests/


Build documentation:

.. code-block:: console

    $ make docs



Installation
============

Packaging and distributing a Python application is dependent on the target
operating system(s) and execution environment, which could be a Python virtual
environment, Linux container, or native application.

Install the application to a self-contained Python virtual environment:

.. code-block:: console

    $ python -m venv .venv
    $ .venv/bin/python -m pip install <project source>
    $ cp -r <project source>/etc .venv/
    $ .venv/bin/productcomposer --help



Execution
=========

The installed application includes a wrapper script for command line execution.
The location of this script depends on how the application was installed.


Configuration
-------------

The application uses `TOML`_ files for configuration. Configuration supports
runtime parameter substitution via a shell-like variable syntax, *i.e.*
``var = ${VALUE}``. CLI invocation will use the current environment for
parameter substitution, which makes it simple to pass host-specific values
to the application without needing to change the config file for every
installation.

.. code-block:: toml

    mailhost = $SENDMAIL_HOST
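
For example, a host-specific value can be supplied from the current
environment at invocation time (the host name is illustrative):

.. code-block:: console

    $ SENDMAIL_HOST=smtp.example.com .venv/bin/productcomposer --help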


Logging
-------

The application uses standard `Python logging`_. All logging goes to ``STDERR``,
and the logging level can be set via the config file or on the command line.


.. _TOML: https://toml.io
.. _Python logging: https://docs.python.org/3/library/logging.html

File product-composer/docs/.gitignore

# Ignore Sphinx build artifacts.

_build

File product-composer/docs/build_description.adoc
== productcompose build description options

=== minimal version

 product_compose_schema: 0.2
 vendor: I_and_myself
 name: my_product
 version: 1.0
 product-type: module

 architectures: [ x86_64 ]

 packages:
  - my-single-rpm-package

=== build options

The build options may be used to change the behaviour of the build
process. The options are described in detail below.

Just add them to enable the desired functionality; no further
arguments are allowed.
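
For example, to enable one of the options described below:

 build_options:
   - take_all_available_versions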

=== flavors

Flavors can be defined with any name. They can be
used to build multiple media from one build description.

Each flavor may define its own architecture list.

Flavors can also be used to select different package sets.

You need to add a _multibuild file to your sources
to enable the build.
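
A small sketch with illustrative flavor and architecture values:

 flavors:
   standard: {}
   arm:
     architectures: [ aarch64 ]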

=== iso

Enables iso file generation and requires configuration of
iso9660 headers.
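
For example (the header values are illustrative):

 iso:
   publisher: 'Example Publisher'
   volume_id: 'EXAMPLE_1'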

=== unpack

unpack defines the packageset to be used for extracting
the content of the rpm packages directly on the medium.

These rpm packages need to provide these files below

 /usr/lib/skelcd/CD1

Currently it gets only extracted to the first/main medium,
but not on source or debug media.
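
For example, referencing a packageset defined under packagesets
(the set name is illustrative):

 unpack:
   - unpackset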

=== packagesets

A packageset lists rpm names to be put on the medium.

There is usually one master list, and in addition there
can be further optional lists.

The additional lists can be filtered by flavors and/or
architectures.

A packageset requires at least a packages definition,
but may optionally also carry a name, flavors or architectures.
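
A sketch of a package set limited to one flavor and one architecture
(the set, flavor and package names are illustrative):

 packagesets:
   - name: dvdset
     flavors: [ dvd ]
     architectures: [ x86_64 ]
     packages:
       - some-package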

==== name

Defines the name of the package set. 'main' is the default
name.

==== architectures

Lists the architectures where the set is to be used. The
default is to use the set for all architectures.

==== flavors

Lists the flavors where the set is to be used. The
default is to use the set for all flavors.

==== add

Can be used to add further packagesets by specifying
their names.

A special packageset called '__all__' will add all
locally available package names.

==== sub

Can be used to remove the packages contained in the specified
packagesets.

==== intersect

Can be used to reduce the set to the packages also contained
in the specified packagesets (set intersection).
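
A sketch combining these operations: the main set starts with all
locally available packages and then removes the packages of the
blocked set (the set and package names are illustrative):

 packagesets:
   - name: blocked
     packages:
       - some-unwanted-package
   - name: main
     add: [ __all__ ]
     sub: [ blocked ]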

==== packages

Lists all package names to be added. This is just the rpm
name, not the file name.

=== Details

==== name

The product name.

==== version

The product version.

==== update

Optional update level string for CPE

==== edition

Optional edition string for CPE

==== summary

The product name in explanatory words. It will be presented to the
user on overview screens.

==== product-type

Either 'base' for operating systems or 'module' for any product
that depends on an existing installation.

'extension' is handled as alias for 'module'.

==== architectures

An array of the master architectures to be put into the repository.
This can be used to build a single repository usable for many
hardware architectures.

product composer will automatically fall back to "noarch" packages
if the package is not found natively.

Setting a global architecture list is optional when architectures
are listed for each flavor.

==== bcntsynctag

Optionally defines a bcntsynctag for OBS. OBS will sync the build
counter over all packages in the same repository and architecture
according to this tag.

==== milestone

Optionally defines a milestone which will be used by OBS at release
time. This can be used to turn candidate builds into a Beta1, for
example.

==== build_options

===== take_all_available_versions

By default only "the best" version of each rpm is taken.
Use this switch to put all candidates on the medium, for
example for maintenance repositories.

===== OBS_unordered_product_repos

By default, OBS filters rpm packages based on the repository
path layering.

This switch can be used to disable this behaviour in cases where
a binary from a lower prioritized repository should be used.

This can increase the number of required binaries a lot when
dealing with deep path lists.

===== ignore_missing_packages

Missing packages lead to a build failure by default.
Use this switch to continue anyway; the missing packages are
still listed in the build log.

===== hide_flavor_in_product_directory_name

The flavor name is by default part of the directory
name of the build result. This can be disabled
when each flavor has a different architecture list; otherwise
conflicts can happen.

===== add_slsa_provenance

Adds SLSA provenance files for each rpm if available.

===== abort_on_empty_updateinfo

Existing updateinfo.xml files are scanned by default and reduced to
the available package binaries. In case none are found, the
update is skipped. Enabling this option leads to a build failure
instead.

===== skip_updateinfos

No updateinfo meta information is added to the media.
This might be required when not using take_all_available_versions,
but building on a formerly released code base.

===== updateinfo_packages_only

Builds a pure update repository, skipping all matching rpms
which are not referenced via an updateinfo.

===== base_skip_packages

Controls whether packages should be copied into the `/install` directory
when using base iso images.

Enabling this results in a simply repacked base image, without
any packages copied there.

==== iso

===== publisher

For setting the iso9660 PUBLISHER header

===== volume_id

For setting the iso9660 VOLUME_ID header

===== tree

Can be set to "drop" for creating only the iso files.

===== base

Can be used to copy the result into a pre-generated iso file.
product-composer itself does not create bootable iso images,
aka installer images. But it can be used, for example, with
agama-installer iso images, where it copies the main tree
inside.

When defining a base iso name, it is expected that:

* the image gets provided via an rpm called baseiso-NAME
* the image is available in the /usr/libexec/base-isos directory
  with the given NAME prefix.

product-composer will add the main product tree into the /install
directory of this medium. The build result will be a single
iso file named after the product name with a .install.iso suffix.

Only a single repository per product is usable. In case source
or debug rpms need to be added, they need to be part of the
main repository.
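
A sketch using the agama-installer image as base, as in the shipped
example description; the image is then expected to be provided by an
rpm called baseiso-agama-installer:

 iso:
   base: agama-installer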

==== installcheck

Runs a repository closure test for each architecture. This will
report any missing dependencies and abort.

===== ignore_errors

Reports the dependency errors, but ignores them.
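
For example, to run the check but only report the errors:

 installcheck:
   - ignore_errors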

==== debug

Configure the handling of debuginfo and debugsource rpms.
Use either

  debug: include

to include them or

  debug: drop

to drop all debug packages or

  debug: split

to create a separate medium with a -Debug suffix.

Missing debug packages will always be ignored.

This default setting may also be specified per flavor.

==== packages

The package list. It can contain either a simple name, or the
name can be extended by a >, >=, =, <, <= operator to specify a
version constraint.

The syntax for the version is rpm-like:

 [EPOCH:]VERSION[-RELEASE]

A missing epoch means epoch zero. If the release is missing, it
matches any release.

The package list can be valid globally or limited to specific flavors
or architectures.
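
For example (the package names and versions are illustrative):

 packages:
   - some-package
   - glibc >= 2.38-9
   - other-package = 1:2.0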

==== product_compose_schema

Defines the level of the yaml syntax.
Please expect incompatible changes at any time for now.

This will be used to provide backward compatibility once
the format has stabilized.

==== product_directory_name

Can be used to specify a directory or medium name manually.
The default is "name-version".

The directory name will always be suffixed by the architecture
and build number.

==== source

Configure the handling of src or nosrc rpms for the picked binaries.
Use either

  source: include

to include all source packages or

  source: drop

to drop all source packages or

  source: split

to create a separate medium with a -Source suffix.

A missing source package leads to a build failure unless
the ignore_missing_packages build option is used.

This default setting may also be specified per flavor.

==== repodata

Writes architecture-specific repository meta data into architecture-specific
sub directories. This way a client needs to process less
data. The disadvantage is that different URLs need to be handled
per architecture.

  repodata: split

It is also possible to have a main repodata including all architectures
in addition to the architecture-split ones.

  repodata: all

In the absence of the repodata element only the main repodata is created,
including all architectures.

This may also be specified per flavor.

==== vendor

Defines the company responsible for the content, for example
openSUSE or SUSE. It is used by the install stack.

==== set_updateinfo_from

Can be set to replace the "from" attribute in updateinfo.xml files with a fixed value.
This is shown as the patch provider by the zypp stack. Otherwise the value stays as is;
OBS sets the packager from the _patchinfo file here by default.
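
For example (the value is illustrative):

 set_updateinfo_from: maintenance@example.com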

==== set_updateinfo_id_prefix

Sets a fixed prefix on all ids of included updateinfo data. The prefix
is not added again if it already exists.

This can be used to have a common identifier for an update shared by many
products, while still being able to identify the filtering for a specific product.

==== block_updates_under_embargo

The current default is to include maintenance updates under embargo. This option can
be set to abort the build when an embargo date is in the future.


File product-composer/docs/productcomposer.adoc

= productcomposer
:toc:
:icons:
:numbered:
:website: https://www.geckito.org/

== Goals

A lightweight successor for product builder.

It is used to generate product RPM repositories out of a pool of RPMs.
Unlike product builder, these can also be used to ship maintenance updates.

.Currently it supports:
- processing based on a list of RPM package names.
  product-composer currently does not take care of dependencies.
- providing matching source and/or debug packages for picked RPM packages.
  These can be either included into the main repository or provided via
  extra repositories.
- optional filters for architectures, versions and flavors can be defined
- it can provide either just a single RPM of a given name or all of them
- it can post process updateinfo data
- post processing to provide various types of RPM meta data generation

.Not yet implemented:
- create bootable iso files

== Design

product-composer is currently supposed to be used only inside of OBS builds.
OBS or osc prepares all binary RPM candidates in a local directory
before starting the build.

== Setup in OBS

You will require OBS 2.11 or later.

.Create a new repository with any name, either in a new or an existing project.
- The product-composer package must be available in any repository
  listed in the path elements.
- All scheduler architectures where packages are taken from must be listed.

Your build description file may have any name, but must have a .productcompose
suffix.

The build type for the repository must be set to

  Type: productcompose

in the build configuration (aka prjconf).

== Special setup for maintenance

Ensure that your patchinfo packages are built in a repository where "local" is the first
architecture.

Your productcompose file may provide all versions of each RPM if you enable
"take_all_available_versions" in the build options.

include::build_description.adoc[]


File product-composer/docs/productcomposer.html (generated from productcomposer.adoc)

File product-composer/etc/config.toml

[core]
logging = "WARNING"

File product-composer/examples/ftp.productcompose

# Our initial schema version. Be prepared that it breaks until we are
# in full production mode
product_compose_schema: 0.2

vendor: openSUSE
name: Tumbleweed
version: 1.0
# update: sp7
product-type: base # or module
# summary is the short product description as available in meta data
summary: openSUSE Tumbleweed

# OBS specials:
# bcntsynctag: MyProductFamily
# milestone: Beta1

# scc data has no effect to the build result, it is just managing data
# for the infrastructure
scc:
  description: >
    openSUSE Tumbleweed is the rolling distribution by the
    openSUSE.org project.
  # family: sl-micro
  # free: false

iso:
  publisher: 'Iggy'
  volume_id: 'Pop'
#  tree: 'drop'
#  base: 'agama-installer'

build_options:
### For maintenance, otherwise only "the best" version of each package is picked:
- take_all_available_versions
# - ignore_missing_packages
# - hide_flavor_in_product_directory_name
# - block_updates_under_embargo
# - add_slsa_provenance
# - skip_updateinfos
# - updateinfo_packages_only
# - base_skip_packages

installcheck:
 - ignore_errors

# Enable collection of source and debug packages. Either "include" it
# on main medium, "drop" it or "split" it away on extra medium.
source: split
debug: drop

# repository meta data is written into arch specific directories
# + smaller size of meta data to be processed by the client
# - different URLs per arch are needed
repodata: split

# The default architecture list. Each of these will be put on the medium.
# It is optional to have a default list, when each flavor defines an
# architecture list. The main package won't be built in that case.
architectures: [x86_64]

# A flavor list, each flavor may change the architecture list
flavors:
  small: {}
  large_arm:
    architectures: [armv7l, aarch64]
    name: Tumbleweed_ARM
    summary: openSUSE Tumbleweed ARM
    edition: arm
    # debug: include
    # source: drop

unpack:
  - unpackset
  - unpackset_powerpc_DVD_only

# packages to be put on the medium
packagesets:
- name: unpackset_powerpc_DVD_only
  flavors:
  - DVD medium
  architectures:
  - ppc64le
  packages:
  - Super-Special-Slideshow-for-DVD_medium-on-ppc64le

- name: unpackset
  packages:
  - skelcd-openSUSE
  - skelcd-openSUSE-installer

- name: 32bit
  architectures:
  - i586
  - i686
  packages:
  - kernel-default-pae

- packages:
  - kernel-default
  # take only glibc packages newer than 2.38-9
  # note: this works like an rpm dependency, i.e. the release part is optional
  # and epochs can be specified with EPOCH: prefix
  - glibc > 2.38-9
  add:
  - 32bit
  supportstatus: l2

File product-composer/pyproject.toml

[project]
name = "productcomposer"
description = "OBS product image creator"

authors = [
    { name = "Adrian Schröter", email = "adrian@suse.de" },
]
license = {file = "LICENSE"}
requires-python = ">=3.11"
dependencies = [
    "rpm",
    "zstandard",
    "pydantic<2",
    "pyyaml",
    "schema",
]
dynamic = ["version", "readme"]

[project.urls]
"Homepage" = "https://somewhere"

[project.scripts]
productcomposer = "productcomposer.cli:main"

[project.optional-dependencies]
dev = [
    "pytest>=7.3.1,<8",
    "sphinx>=6.2.1,<7",
    "sphinx_rtd_theme>=1.2.1,<2",
]

[build-system]
requires = ["setuptools>=61.0"]
build-backend = "setuptools.build_meta"

[tool.setuptools.dynamic]
version = {attr = "productcomposer.__version__"}
readme = {file = ["README.rst"], content-type = "text/x-rst"}

[tool.setuptools.packages.find]
where = ["src"]

File product-composer/src/productcomposer/__init__.py

""" Package for the obs product builder application.

"""
from .__version__ import __version__
from .__main__ import main

File product-composer/src/productcomposer/__main__.py

""" Main application entry point.

    python -m productcomposer  ...

"""


def main():
    """ Execute the application.

    """
    raise NotImplementedError


# Make the script executable.

if __name__ == "__main__":
    raise SystemExit(main())

File product-composer/src/productcomposer/__version__.py

""" Current version of the obs product builder application.

This project uses the Semantic Versioning scheme in conjunction with PEP 0440:

    <https://semver.org/>
    <https://www.python.org/dev/peps/pep-0440>

Major versions introduce significant changes to the API, and backwards
compatibility is not guaranteed. Minor versions are for new features and other
backwards-compatible changes to the API. Patch versions are for bug fixes and
internal code changes that do not affect the API. Development versions are
incomplete states of a release.

Version 0.x should be considered a development version with an unstable API,
and backwards compatibility is not guaranteed for minor versions.

"""
__version__ = "0.0.0"
07070100000014000081a4000000000000000000000001682dad4d0000006e000000000000000000000000000000000000003500000000product-composer/src/productcomposer/api/__init__.py""" Application commands common to all interfaces.

"""
from .parse import main as parse


__all__ = "parse",
07070100000015000081a4000000000000000000000001682dad4d000000ff000000000000000000000000000000000000003200000000product-composer/src/productcomposer/api/parse.py""" Implement the parse command.

"""
from ..core.logger import logger


def main(name="World") -> str:
    """ Execute the command.

    :param name: name to use in greeting
    """
    logger.debug("executing parse command")
    return "Hello, parser!"
07070100000016000041ed000000000000000000000001682dad4d00000000000000000000000000000000000000000000002900000000product-composer/src/productcomposer/api07070100000017000081a4000000000000000000000001682dad4d0000c40d000000000000000000000000000000000000002c00000000product-composer/src/productcomposer/cli.py""" Implementation of the command line interface.

"""

import os
import re
import shutil
import subprocess
import gettext
import glob
from datetime import datetime
from argparse import ArgumentParser
from xml.etree import ElementTree as ET

from schema import Schema, And, Or, Optional, SchemaError
import yaml

from .core.logger import logger
from .core.PkgSet import PkgSet
from .core.Package import Package
from .core.Pool import Pool
from .wrappers import CreaterepoWrapper
from .wrappers import ModifyrepoWrapper


__all__ = "main",


ET_ENCODING = "unicode"
ISO_PREPARER = "Product Composer - http://www.github.com/openSUSE/product-composer"
DEFAULT_EULADIR = "/usr/share/doc/packages/eulas"


tree_report = {}        # keyed by file name

# hardcoded defaults for now
chksums_tool = 'sha512sum'

# global db for supportstatus
supportstatus = {}
# global db for eulas
eulas = {}
# per package override via supportstatus.txt file
supportstatus_override = {}
# debug aka verbose
verbose_level = 0

compose_schema_iso = Schema({
    Optional('publisher'): str,
    Optional('volume_id'): str,
    Optional('tree'): str,
    Optional('base'): str,
})
compose_schema_packageset = Schema({
    Optional('name'): str,
    Optional('supportstatus'): str,
    Optional('flavors'): [str],
    Optional('architectures'): [str],
    Optional('add'): [str],
    Optional('sub'): [str],
    Optional('intersect'): [str],
    Optional('packages'): Or(None, [str]),
})
compose_schema_scc_cpe = Schema({
    'cpe': str,
    Optional('online'): bool,
})
compose_schema_scc = Schema({
    Optional('description'): str,
    Optional('family'): str,
    Optional('product-class'): str,
    Optional('free'): bool,
    Optional('predecessors'): [compose_schema_scc_cpe],
    Optional('shortname'): str,
    Optional('base-products'): [compose_schema_scc_cpe],
    Optional('root-products'): [compose_schema_scc_cpe],
    Optional('recommended-for'): [compose_schema_scc_cpe],
    Optional('migration-extra-for'): [compose_schema_scc_cpe],
})
compose_schema_build_option = Schema(
    Or(
        'abort_on_empty_updateinfo',
        'add_slsa_provenance',
        'base_skip_packages',
        'block_updates_under_embargo',
        'hide_flavor_in_product_directory_name',
        'ignore_missing_packages',
        'skip_updateinfos',
        'take_all_available_versions',
        'updateinfo_packages_only',
    )
)
compose_schema_source_and_debug = Schema(
    Or(
        'drop',
        'include',
        'split',
    )
)
compose_schema_repodata = Schema(
    Or(
        'all',
        'split',
    )
)
compose_schema_flavor = Schema({
    Optional('architectures'): [str],
    Optional('name'): str,
    Optional('version'): str,
    Optional('update'): str,
    Optional('edition'): str,
    Optional('product-type'): str,
    Optional('product_directory_name'): str,
    Optional('repodata'): compose_schema_repodata,
    Optional('summary'): str,
    Optional('debug'): compose_schema_source_and_debug,
    Optional('source'): compose_schema_source_and_debug,
    Optional('build_options'): Or(None, [compose_schema_build_option]),
    Optional('scc'): compose_schema_scc,
    Optional('iso'): compose_schema_iso,
})

compose_schema = Schema({
    'product_compose_schema': str,
    'vendor': str,
    'name': str,
    'version': str,
    Optional('update'): str,
    'product-type': str,
    'summary': str,
    Optional('bcntsynctag'): str,
    Optional('milestone'): str,
    Optional('scc'): compose_schema_scc,
    Optional('iso'): compose_schema_iso,
    Optional('installcheck'): Or(None, ['ignore_errors']),
    Optional('build_options'): Or(None, [compose_schema_build_option]),
    Optional('architectures'): [str],

    Optional('product_directory_name'): str,
    Optional('set_updateinfo_from'): str,
    Optional('set_updateinfo_id_prefix'): str,
    Optional('block_updates_under_embargo'): str,
    Optional('debug'): compose_schema_source_and_debug,
    Optional('source'): compose_schema_source_and_debug,
    Optional('repodata'): compose_schema_repodata,

    Optional('flavors'): {str: compose_schema_flavor},
    Optional('packagesets'): [compose_schema_packageset],
    Optional('unpack'): [str],
})
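
# A minimal document accepted by compose_schema (an illustrative sketch,
# all values hypothetical):
#
#   product_compose_schema: '0.2'
#   vendor: openSUSE
#   name: Tumbleweed
#   version: '20240101'
#   product-type: base
#   summary: openSUSE Tumbleweed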

def main(argv=None) -> int:
    """Execute the application CLI.

    :param argv: argument list to parse (sys.argv by default)
    :return: exit status
    """
    #
    # Setup CLI parser
    #
    parser = ArgumentParser('productcomposer')
    subparsers = parser.add_subparsers(required=True, help='sub-command help')

    # One sub parser for each command
    verify_parser = subparsers.add_parser('verify', help='Verify the build recipe')
    build_parser = subparsers.add_parser('build', help='Run a product build')

    verify_parser.set_defaults(func=verify)
    build_parser.set_defaults(func=build)

    # Generic options
    for cmd_parser in (verify_parser, build_parser):
        cmd_parser.add_argument('-f', '--flavor', help='Build a given flavor')
        cmd_parser.add_argument('-v', '--verbose', action='store_true',  help='Enable verbose output')
        cmd_parser.add_argument('--reposdir', action='store',  help='Take packages from this directory')
        cmd_parser.add_argument('filename', default='default.productcompose',  help='Filename of product YAML spec')

    # build command options
    build_parser.add_argument('-r', '--release', default=None,  help='Define a build release counter')
    build_parser.add_argument('--disturl', default=None,  help='Define a disturl')
    build_parser.add_argument('--build-option', action='append', nargs='+', default=[],  help='Set a build option')
    build_parser.add_argument('--vcs', default=None,  help='Define a source repository identifier')
    build_parser.add_argument('--clean', action='store_true',  help='Remove existing output directory first')
    build_parser.add_argument('--euladir', default=DEFAULT_EULADIR, help='Directory containing EULA data')
    build_parser.add_argument('out',  help='Directory to write the result')

    # parse and check
    args = parser.parse_args(argv)
    filename = args.filename
    if not filename:
        # No filename was specified.
        print("No filename")
        parser.print_help()
        die(None)

    #
    # Invoke the function
    #
    args.func(args)
    return 0


def die(msg, details=None):
    if msg:
        print("ERROR: " + msg)
    if details:
        print(details)
    raise SystemExit(1)


def warn(msg, details=None):
    print("WARNING: " + msg)
    if details:
        print(details)


def note(msg):
    print(msg)


def build(args):
    flavor = None
    global verbose_level

    if args.flavor:
        f = args.flavor.split('.')
        if f[0] != '':
            flavor = f[0]
    if args.verbose:
        verbose_level = 1

    if not args.out:
        die("No output directory given")

    yml = parse_yaml(args.filename, flavor)

    for arg in args.build_option:
        for option in arg:
            yml['build_options'].append(option)

    if 'architectures' not in yml or not yml['architectures']:
        die(f'No architecture defined for flavor {flavor}')

    directory = os.getcwd()
    if args.filename.startswith('/'):
        directory = os.path.dirname(args.filename)
    reposdir = args.reposdir if args.reposdir else directory + "/repos"

    supportstatus_fn = os.path.join(directory, 'supportstatus.txt')
    if os.path.isfile(supportstatus_fn):
        parse_supportstatus(supportstatus_fn)

    if args.euladir and os.path.isdir(args.euladir):
        parse_eulas(args.euladir)

    pool = Pool()
    note(f"Scanning: {reposdir}")
    pool.scan(reposdir)

    # clean up blacklisted packages
    for u in sorted(pool.lookup_all_updateinfos()):
        for update in u.root.findall('update'):
            # find() returns an Element; an empty Element is falsy, so an
            # explicit None check is required here
            if update.find('blocked_in_product') is None:
                continue

            parent = update.findall('pkglist')[0].findall('collection')[0]
            for pkgentry in parent.findall('package'):
                name = pkgentry.get('name')
                epoch = pkgentry.get('epoch')
                version = pkgentry.get('version')
                pool.remove_rpms(None, name, '=', epoch, version, None)

    if args.clean and os.path.exists(args.out):
        shutil.rmtree(args.out)

    product_base_dir = get_product_dir(yml, flavor, args.release)

    create_tree(args.out, product_base_dir, yml, pool, flavor, args.vcs, args.disturl)


def verify(args):
    yml = parse_yaml(args.filename, args.flavor)
    if args.flavor is None and 'flavors' in yml:
        for flavor in yml['flavors']:
            yml = parse_yaml(args.filename, flavor)
            if 'architectures' not in yml or not yml['architectures']:
                die(f'No architecture defined for flavor {flavor}')
    elif 'architectures' not in yml or not yml['architectures']:
        die('No architecture defined and no flavor.')



def parse_yaml(filename, flavor):
    with open(filename, 'r') as file:
        yml = yaml.safe_load(file)

    # we may not allow this in the future anymore, but for now convert these from float to str
    if 'product_compose_schema' in yml:
        yml['product_compose_schema'] = str(yml['product_compose_schema'])
    if 'version' in yml:
        yml['version'] = str(yml['version'])

    if 'product_compose_schema' not in yml:
        die('missing product composer schema')
    if yml['product_compose_schema'] not in ('0.1', '0.2'):
        die(f'Unsupported product composer schema: {yml["product_compose_schema"]}')

    try:
        compose_schema.validate(yml)
        note(f"Configuration is valid for flavor: {flavor}")
    except SchemaError as se:
        warn(f"YAML syntax is invalid for flavor: {flavor}")
        raise se

    if 'flavors' not in yml:
        yml['flavors'] = []

    if 'build_options' not in yml or yml['build_options'] is None:
        yml['build_options'] = []

    if flavor:
        if flavor not in yml['flavors']:
            die('Flavor not found: ' + flavor)
        f = yml['flavors'][flavor]
        # overwrite global values from flavor overwrites
        for tag in (
            'architectures',
            'name',
            'summary',
            'version',
            'update',
            'edition',
            'product-type',
            'product_directory_name',
            'source',
            'debug',
            'repodata',
        ):
            if tag in f:
                yml[tag] = f[tag]

        # Add additional build_options instead of replacing global defined set.
        if 'build_options' in f:
            for option in f['build_options']:
                yml['build_options'].append(option)

        if 'iso' in f:
            if 'iso' not in yml:
                yml['iso'] = {}
            for tag in ('volume_id', 'publisher', 'tree', 'base'):
                if tag in f['iso']:
                    yml['iso'][tag] = f['iso'][tag]

    if 'installcheck' in yml and yml['installcheck'] is None:
        yml['installcheck'] = []

    # FIXME: validate strings, e.g. the allowed set of chars

    return yml


def parse_supportstatus(filename):
    with open(filename, 'r') as file:
        for line in file.readlines():
            a = line.strip().split(' ')
            supportstatus_override[a[0]] = a[1]


def parse_eulas(euladir):
    note(f"Reading eula data from {euladir}")
    for dirpath, dirs, files in os.walk(euladir):
        for filename in files:
            if filename.startswith('.'):
                continue
            pkgname = filename.removesuffix('.en')
            with open(os.path.join(dirpath, filename), encoding="utf-8") as f:
                eulas[pkgname] = f.read()


def get_product_dir(yml, flavor, release):
    name = f'{yml["name"]}-{yml["version"]}'
    if 'product_directory_name' in yml:
        # manual override
        name = yml['product_directory_name']
    if flavor and 'hide_flavor_in_product_directory_name' not in yml['build_options']:
        name += f'-{flavor}'
    if yml['architectures']:
        visible_archs = yml['architectures']
        if 'local' in visible_archs:
            visible_archs.remove('local')
        name += "-" + "-".join(visible_archs)
    if release:
        name += f'-Build{release}'
    if '/' in name:
        die("Illegal product name")
    return name
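
# Illustration (hypothetical values): name=Tumbleweed, version=20240101,
# flavor=dvd, architectures=[x86_64] and release=17 yield the directory
# name "Tumbleweed-20240101-dvd-x86_64-Build17".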


def run_helper(args, cwd=None, fatal=True, stdout=None, stdin=None, failmsg=None, verbose=False):
    if verbose:
        note(f'Calling {args}')
    if stdout is None:
        stdout = subprocess.PIPE
    if stdin is None:
        stdin = subprocess.PIPE
    popen = subprocess.Popen(args, stdout=stdout, stdin=stdin, cwd=cwd)

    output = popen.communicate()[0]
    if isinstance(output, bytes):
        output = output.decode(errors='backslashreplace')

    if popen.returncode:
        if failmsg:
            msg = "Failed to " + failmsg
        else:
            msg = "Failed to run " + args[0]
        if fatal:
            die(msg, details=output)
        else:
            warn(msg, details=output)
    return output if stdout == subprocess.PIPE else ''

def create_sha256_for(filename):
    with open(filename + '.sha256', 'w') as sha_file:
        # argument must not have the path
        args = [ 'sha256sum', filename.split('/')[-1] ]
        run_helper(args, cwd=("/"+os.path.join(*filename.split('/')[:-1])), stdout=sha_file, failmsg="create .sha256 file")

def create_iso(outdir, yml, pool, flavor, workdir, application_id):
    verbose = verbose_level > 0
    isoconf = yml['iso']
    args = ['/usr/bin/mkisofs', '-quiet', '-p', ISO_PREPARER]
    args += ['-r', '-pad', '-f', '-J', '-joliet-long']
    if 'publisher' in isoconf and isoconf['publisher'] is not None:
        args += ['-publisher', isoconf['publisher']]
    if 'volume_id' in isoconf and isoconf['volume_id'] is not None:
        args += ['-V', isoconf['volume_id']]
    args += ['-A', application_id]
    args += ['-o', workdir + '.iso', workdir]
    run_helper(args, cwd=outdir, failmsg="create iso file", verbose=verbose)
    # simple tagmedia call ... we may add options for padding or triggering a media check later
    args = [ 'tagmedia' , '--digest' , 'sha256', workdir + '.iso' ]
    run_helper(args, cwd=outdir, failmsg="tagmedia iso file", verbose=verbose)
    # creating .sha256 for iso file
    create_sha256_for(workdir + ".iso")

def create_agama_iso(outdir, yml, pool, flavor, workdir, application_id, arch):
    verbose = verbose_level > 0
    isoconf = yml['iso']
    base = isoconf['base']
    if verbose:
        note(f"Looking for baseiso-{base} rpm on {arch}")
    agama = pool.lookup_rpm(arch, f"baseiso-{base}")
    if not agama:
        die(f"Base iso in baseiso-{base} rpm was not found")
    baseisodir = f"{outdir}/baseiso"
    os.mkdir(baseisodir)
    args = ['unrpm', '-q', agama.location]
    run_helper(args, cwd=baseisodir, failmsg=f"extract {agama.location}", verbose=verbose)
    files = glob.glob(f"usr/libexec/base-isos/{base}*.iso", root_dir=baseisodir)
    if not files:
        die(f"Base iso {base} not found in {agama}")
    if len(files) > 1:
        die(f"Multiple base isos for {base} found in {agama}")
    agamaiso = f"{baseisodir}/{files[0]}"
    if verbose:
        note(f"Found base iso image {agamaiso}")

    # create new iso
    tempdir = f"{outdir}/mksusecd"
    os.mkdir(tempdir)
    if 'base_skip_packages' not in yml['build_options']:
        args = ['cp', '-al', workdir, f"{tempdir}/install"]
        run_helper(args, failmsg="add tree to agama image")
    args = ['mksusecd', agamaiso, tempdir, '--create', workdir + '.install.iso']
    # mksusecd would take the volume_id, publisher, application_id, preparer from the agama iso
    args += ['--preparer', ISO_PREPARER]
    if 'publisher' in isoconf and isoconf['publisher'] is not None:
        args += ['--vendor', isoconf['publisher']]
    if 'volume_id' in isoconf and isoconf['volume_id'] is not None:
        args += ['--volume', isoconf['volume_id']]
    args += ['--application', application_id]
    run_helper(args, failmsg="add tree to agama image", verbose=verbose)
    # mksusecd already did a tagmedia call with a sha256 digest
    # cleanup directories
    shutil.rmtree(tempdir)
    shutil.rmtree(baseisodir)
    # just for the bootable image, signature is not yet applied, so ignore that error
    run_helper(['verifymedia', workdir + '.install.iso', '--ignore', 'ISO is signed'], fatal=False, failmsg="verify install.iso")
    # creating .sha256 for iso file
    create_sha256_for(workdir + '.install.iso')


def create_tree(outdir, product_base_dir, yml, pool, flavor, vcs=None, disturl=None):
    if not os.path.exists(outdir):
        os.mkdir(outdir)

    maindir = outdir + '/' + product_base_dir
    if not os.path.exists(maindir):
        os.mkdir(maindir)

    workdirectories = [maindir]
    debugdir = sourcedir = None
    if "source" in yml:
        if yml['source'] == 'split':
            sourcedir = outdir + '/' + product_base_dir + '-Source'
            os.mkdir(sourcedir)
            workdirectories.append(sourcedir)
        elif yml['source'] == 'include':
            sourcedir = maindir
        elif yml['source'] != 'drop':
            die("Bad source option, must be either 'include', 'split' or 'drop'")
    if "debug" in yml:
        if yml['debug'] == 'split':
            debugdir = outdir + '/' + product_base_dir + '-Debug'
            os.mkdir(debugdir)
            workdirectories.append(debugdir)
        elif yml['debug'] == 'include':
            debugdir = maindir
        elif yml['debug'] != 'drop':
            die("Bad debug option, must be either 'include', 'split' or 'drop'")

    for arch in yml['architectures']:
        note(f"Linking rpms for {arch}")
        link_rpms_to_tree(maindir, yml, pool, arch, flavor, debugdir, sourcedir)

    for arch in yml['architectures']:
        note(f"Unpack rpms for {arch}")
        unpack_meta_rpms(maindir, yml, pool, arch, flavor, medium=1)  # only for the first medium for now

    repos = []
    if disturl:
        match = re.match("^obs://([^/]*)/([^/]*)/.*", disturl)
        if match:
            obsname = match.group(1)
            project = match.group(2)
            repo = f"obsproduct://{obsname}/{project}/{yml['name']}/{yml['version']}"
            repos = [repo]
    if vcs:
        repos.append(vcs)

    default_content = ["pool"]
    for file in os.listdir(maindir):
        if not file.startswith('gpg-pubkey-'):
            continue

        args = ['gpg', '--no-keyring', '--no-default-keyring', '--with-colons',
              '--import-options', 'show-only', '--import', '--fingerprint']
        out = run_helper(args, stdin=open(f'{maindir}/{file}', 'rb'),
                         failmsg="get fingerprint of gpg file")
        for line in out.splitlines():
            if line.startswith("fpr:"):
                content = f"{file}?fpr={line.split(':')[9]}"
                default_content.append(content)

    note("Create rpm-md data")
    run_createrepo(maindir, yml, content=default_content, repos=repos)
    if debugdir:
        note("Create rpm-md data for debug directory")
        run_createrepo(debugdir, yml, content=["debug"], repos=repos)
    if sourcedir:
        note("Create rpm-md data for source directory")
        run_createrepo(sourcedir, yml, content=["source"], repos=repos)

    repodatadirectories = workdirectories.copy()
    if 'repodata' in yml:
        if yml['repodata'] != 'all':
            repodatadirectories = []
        for workdir in workdirectories:
            if sourcedir and sourcedir == workdir:
                continue
            for arch in yml['architectures']:
                if os.path.exists(workdir + f"/{arch}"):
                    repodatadirectories.append(workdir + f"/{arch}")

    note("Write report file")
    write_report_file(maindir, maindir + '.report')
    if sourcedir and maindir != sourcedir:
        note("Write report file for source directory")
        write_report_file(sourcedir, sourcedir + '.report')
    if debugdir and maindir != debugdir:
        note("Write report file for debug directory")
        write_report_file(debugdir, debugdir + '.report')

    # CHANGELOG file
    # the tools read the subdirectory of the maindir from an environment variable
    os.environ['ROOT_ON_CD'] = '.'
    if os.path.exists("/usr/bin/mk_changelog"):
        args = ["/usr/bin/mk_changelog", maindir]
        run_helper(args)

    # ARCHIVES.gz
    if os.path.exists("/usr/bin/mk_listings"):
        args = ["/usr/bin/mk_listings", maindir]
        run_helper(args)

    # media.X structures FIXME
    mediavendor = yml['vendor'] + ' - ' + product_base_dir
    mediaident = product_base_dir
    # FIXME: calculate from product provides
    mediaproducts = [yml['vendor'] + '-' + yml['name'] + ' ' + str(yml['version']) + '-1']
    create_media_dir(maindir, mediavendor, mediaident, mediaproducts)

    create_checksums_file(maindir)

    for repodatadir in repodatadirectories:
        if os.path.exists(f"{repodatadir}/repodata"):
            create_susedata_xml(repodatadir, yml)

    if 'installcheck' in yml:
        for arch in yml['architectures']:
            note(f"Run installcheck for {arch}")
            args = ['installcheck', arch, '--withsrc']
            subdir = ""
            if 'repodata' in yml:
                subdir = f"/{arch}"
            if not os.path.exists(maindir + subdir):
                warn(f"expected path is missing, no rpm files matched? ({maindir}{subdir})")
                continue
            args.append(find_primary(maindir + subdir))
            if debugdir:
                args.append(find_primary(debugdir + subdir))
            run_helper(args, fatal=('ignore_errors' not in yml['installcheck']), failmsg="run installcheck validation")

    if 'skip_updateinfos' not in yml['build_options']:
        create_updateinfo_xml(maindir, yml, pool, flavor, debugdir, sourcedir)

    # Add License File and create extra .license directory
    if yml.get('iso', {}).get('tree') != 'drop':
      licensefilename = '/license.tar'
      if os.path.exists(maindir + '/license-' + yml['name'] + '.tar') or os.path.exists(maindir + '/license-' + yml['name'] + '.tar.gz'):
          licensefilename = '/license-' + yml['name'] + '.tar'
      if os.path.exists(maindir + licensefilename + '.gz'):
          run_helper(['gzip', '-d', maindir + licensefilename + '.gz'],
                     failmsg="uncompress license.tar.gz")
      if os.path.exists(maindir + licensefilename):
          note("Setup .license directory")
          licensedir = maindir + ".license"
          if not os.path.exists(licensedir):
              os.mkdir(licensedir)
          args = ['tar', 'xf', maindir + licensefilename, '-C', licensedir]
          output = run_helper(args, failmsg="extract license tar ball")
          if not os.path.exists(licensedir + "/license.txt"):
              die("No license.txt extracted", details=output)

          mr = ModifyrepoWrapper(
              file=maindir + licensefilename,
              directory=os.path.join(maindir, "repodata"),
          )
          mr.run_cmd()
          os.unlink(maindir + licensefilename)
          # the meta package may bring a second file or an expanded symlink, so we need to clean up
          if os.path.exists(maindir + '/license.tar'):
              os.unlink(maindir + '/license.tar')
          if os.path.exists(maindir + '/license.tar.gz'):
              os.unlink(maindir + '/license.tar.gz')

    for repodatadir in repodatadirectories:
        # detached signature
        args = ['/usr/lib/build/signdummy', '-d', repodatadir + "/repodata/repomd.xml"]
        run_helper(args, failmsg="create detached signature")

        # pubkey
        with open(repodatadir + "/repodata/repomd.xml.key", 'w') as pubkey_file:
            args = ['/usr/lib/build/signdummy', '-p']
            run_helper(args, stdout=pubkey_file, failmsg="write signature public key")

    for workdir in workdirectories:
        if os.path.exists(workdir + '/CHECKSUMS'):
            args = ['/usr/lib/build/signdummy', '-d', workdir + '/CHECKSUMS']
            run_helper(args, failmsg="create detached signature for CHECKSUMS")

        application_id = product_base_dir
        # When using the baseiso feature, the primary medium should be
        # the base iso, with the packages added.
        # Other media/workdirs are then generated as usual, as
        # presumably you wouldn't need a bootable iso for source and
        # debuginfo packages.
        if workdir == maindir and 'base' in yml.get('iso', {}):
            agama_arch = yml['architectures'][0]
            note(f"Export main tree into agama iso file for {agama_arch}")
            create_agama_iso(outdir, yml, pool, flavor, workdir, application_id, agama_arch)
        elif 'iso' in yml:
            create_iso(outdir, yml, pool, flavor, workdir, application_id)

        # cleanup
        if yml.get('iso', {}).get('tree') == 'drop':
            shutil.rmtree(workdir)

    # create SBOM data
    generate_sbom_call = None
    if os.path.exists("/usr/lib/build/generate_sbom"):
        generate_sbom_call = ["/usr/lib/build/generate_sbom"]

    # Take sbom generation from the OBS server
    # Con: build results are not reproducible
    # Pro: SBOM formats are constantly changing; this way we don't have to adapt all distributions every time
    if os.path.exists("/.build/generate_sbom"):
        # unfortunately, it is not executable by default
        generate_sbom_call = ['env', 'BUILD_DIR=/.build', 'perl', '/.build/generate_sbom']

    if generate_sbom_call:
        spdx_distro = f"{yml['name']}-{yml['version']}"
        note(f"Creating SBOM data for {spdx_distro}")
        # SPDX
        args = generate_sbom_call + [
                 "--format", 'spdx',
                 "--distro", spdx_distro,
                 "--product", maindir
               ]
        with open(maindir + ".spdx.json", 'w') as sbom_file:
            run_helper(args, stdout=sbom_file, failmsg="run generate_sbom for SPDX")

        # CycloneDX
        args = generate_sbom_call + [
                  "--format", 'cyclonedx',
                  "--distro", spdx_distro,
                  "--product", maindir
               ]
        with open(maindir + ".cdx.json", 'w') as sbom_file:
            run_helper(args, stdout=sbom_file, failmsg="run generate_sbom for CycloneDX")

    # cleanup main repodata if wanted and existing
    if 'repodata' in yml and yml['repodata'] != 'all':
        for workdir in workdirectories:
            repodatadir = workdir + "/repodata"
            if os.path.exists(repodatadir):
                shutil.rmtree(repodatadir)


def create_media_dir(maindir, vendorstr, identstr, products):
    media1dir = maindir + '/' + 'media.1'
    if not os.path.isdir(media1dir):
        os.mkdir(media1dir)  # we only support separate media atm
    with open(media1dir + '/media', 'w') as media_file:
        media_file.write(vendorstr + "\n")
        media_file.write(identstr + "\n")
        media_file.write("1\n")
    if products:
        with open(media1dir + '/products', 'w') as products_file:
            for productname in products:
                products_file.write('/ ' + productname + "\n")


def create_checksums_file(maindir):
    with open(maindir + '/CHECKSUMS', 'a') as chksums_file:
        for subdir in ('boot', 'EFI', 'docu', 'media.1'):
            if not os.path.exists(maindir + '/' + subdir):
                continue
            for root, dirnames, filenames in os.walk(maindir + '/' + subdir):
                for name in filenames:
                    relname = os.path.relpath(root + '/' + name, maindir)
                    run_helper(
                        [chksums_tool, relname], cwd=maindir, stdout=chksums_file
                    )


# create a fake package entry from an updateinfo package spec


def create_updateinfo_package(pkgentry):
    entry = Package()
    for tag in ('name', 'epoch', 'version', 'release', 'arch'):
        setattr(entry, tag, pkgentry.get(tag))
    return entry


def generate_du_data(pkg, maxdepth):
    seen = set()
    dudata_size = {}
    dudata_count = {}
    for dir, filedatas in pkg.get_directories().items():
        size = 0
        count = 0
        for filedata in filedatas:
            (basename, filesize, cookie) = filedata
            if cookie:
                if cookie in seen:
                    continue  # hardlinked duplicate, count only once
                seen.add(cookie)
            size += filesize
            count += 1
        if dir == '':
            dir = '/usr/src/packages/'
        dir = '/' + dir.strip('/')
        subdir = ''
        depth = 0
        for comp in dir.split('/'):
            if comp == '' and subdir != '':
                continue
            subdir += comp + '/'
            if subdir not in dudata_size:
                dudata_size[subdir] = 0
                dudata_count[subdir] = 0
            dudata_size[subdir] += size
            dudata_count[subdir] += count
            depth += 1
            if depth > maxdepth:
                break
    dudata = []
    for dir, size in sorted(dudata_size.items()):
        dudata.append((dir, size, dudata_count[dir]))
    return dudata
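
# Each entry returned by generate_du_data is a (directory, size, count)
# tuple; directories are aggregated up to maxdepth path components, and
# hardlinked files are counted only once via their device/inode cookie.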


# Get supported translations based on installed packages
def get_package_translation_languages():
    i18ndir = '/usr/share/locale/en_US/LC_MESSAGES'
    p = re.compile('package-translations-(.+).mo')
    languages = set()
    for file in os.listdir(i18ndir):
        m = p.match(file)
        if m:
            languages.add(m.group(1))
    return sorted(list(languages))

# get the file name from repomd.xml
def find_primary(directory):
    ns = '{http://linux.duke.edu/metadata/repo}'
    tree = ET.parse(directory + '/repodata/repomd.xml')
    return directory + '/' + tree.find(f".//{ns}data[@type='primary']/{ns}location").get('href')

# Create the main susedata.xml with translations, support, and disk usage information
def create_susedata_xml(rpmdir, yml):
    susedatas = {}
    susedatas_count = {}

    # find translation languages
    languages = get_package_translation_languages()

    # create gettext translator object
    i18ntrans = {}
    for lang in languages:
        i18ntrans[lang] = gettext.translation(f'package-translations-{lang}',
                                              languages=['en_US'])

    primary_fn = find_primary(rpmdir)

    # read compressed primary.xml
    openfunction = None
    if primary_fn.endswith('.gz'):
        import gzip
        openfunction = gzip.open
    elif primary_fn.endswith('.zst'):
        import zstandard
        openfunction = zstandard.open
    else:
        die(f"unsupported primary compression type ({primary_fn})")
    tree = ET.parse(openfunction(primary_fn, 'rb'))
    ns = '{http://linux.duke.edu/metadata/common}'

    # Create main susedata structure
    susedatas[''] = ET.Element('susedata')
    susedatas_count[''] = 0

    # go for every rpm file of the repo via the primary
    for pkg in tree.findall(f".//{ns}package[@type='rpm']"):
        name = pkg.find(f'{ns}name').text
        arch = pkg.find(f'{ns}arch').text
        pkgid = pkg.find(f'{ns}checksum').text
        version = pkg.find(f'{ns}version').attrib

        susedatas_count[''] += 1
        package = ET.SubElement(susedatas[''], 'package', {'name': name, 'arch': arch, 'pkgid': pkgid})
        ET.SubElement(package, 'version', version)

        # add supportstatus
        if name in supportstatus and supportstatus[name] is not None:
            ET.SubElement(package, 'keyword').text = f'support_{supportstatus[name]}'

        # add disk usage data
        location = pkg.find(f'{ns}location').get('href')
        if os.path.exists(rpmdir + '/' + location):
            p = Package()
            p.location = rpmdir + '/' + location
            dudata = generate_du_data(p, 3)
            if dudata:
                duelement = ET.SubElement(package, 'diskusage')
                dirselement = ET.SubElement(duelement, 'dirs')
                for duitem in dudata:
                    ET.SubElement(dirselement, 'dir', {'name': duitem[0], 'size': str(duitem[1]), 'count': str(duitem[2])})

        # add eula
        eula = eulas.get(name)
        if eula:
            ET.SubElement(package, 'eula').text = eula

        # get summary/description/category of the package
        summary = pkg.find(f'{ns}summary').text
        description = pkg.find(f'{ns}description').text
        category = pkg.find(".//{http://linux.duke.edu/metadata/rpm}entry[@name='pattern-category()']")
        # explicit None check: an Element without children evaluates as False
        category = Package._cpeid_hexdecode(category.get('ver')) if category is not None else None

        # look for translations
        for lang in languages:
            isummary = i18ntrans[lang].gettext(summary)
            idescription = i18ntrans[lang].gettext(description)
            icategory = i18ntrans[lang].gettext(category) if category is not None else None
            ieula = eulas.get(name + '.' + lang, eula) if eula is not None else None
            if isummary == summary and idescription == description and icategory == category and ieula == eula:
                continue
            if lang not in susedatas:
                susedatas[lang] = ET.Element('susedata')
                susedatas_count[lang] = 0
            susedatas_count[lang] += 1
            ipackage = ET.SubElement(susedatas[lang], 'package', {'name': name, 'arch': arch, 'pkgid': pkgid})
            ET.SubElement(ipackage, 'version', version)
            if isummary != summary:
                ET.SubElement(ipackage, 'summary', {'lang': lang}).text = isummary
            if idescription != description:
                ET.SubElement(ipackage, 'description', {'lang': lang}).text = idescription
            if icategory != category:
                ET.SubElement(ipackage, 'category', {'lang': lang}).text = icategory
            if ieula != eula:
                ET.SubElement(ipackage, 'eula', {'lang': lang}).text = ieula

    # write all susedata files
    for lang, susedata in sorted(susedatas.items()):
        susedata.set('xmlns', 'http://linux.duke.edu/metadata/susedata')
        susedata.set('packages', str(susedatas_count[lang]))
        ET.indent(susedata, space="    ", level=0)
        mdtype = (f'susedata.{lang}' if lang else 'susedata')
        susedata_fn = f'{rpmdir}/{mdtype}.xml'
        with open(susedata_fn, 'x') as sd_file:
            sd_file.write(ET.tostring(susedata, encoding=ET_ENCODING))
        mr = ModifyrepoWrapper(
            file=susedata_fn,
            mdtype=mdtype,
            directory=os.path.join(rpmdir, "repodata"),
        )
        mr.run_cmd()
        os.unlink(susedata_fn)
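
# create_susedata_xml writes susedata.xml (untranslated) plus one
# susedata.<lang>.xml file per translation; each file is injected into
# the repodata via modifyrepo and removed again afterwards.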


# Add updateinfo.xml to metadata
def create_updateinfo_xml(rpmdir, yml, pool, flavor, debugdir, sourcedir):
    if not pool.updateinfos:
        return

    missing_package = False

    # build the union of the package sets for all requested architectures
    main_pkgset = PkgSet('main')
    for arch in yml['architectures']:
        main_pkgset.add(create_package_set(yml, arch, flavor, 'main', pool=pool))
    main_pkgset_names = main_pkgset.names()

    uitemp = None

    for u in sorted(pool.lookup_all_updateinfos()):
        note("Add updateinfo " + u.location)
        for update in u.root.findall('update'):
            needed = False
            parent = update.findall('pkglist')[0].findall('collection')[0]

            # drop OBS internal patchinforef element
            for pr in update.findall('patchinforef'):
                update.remove(pr)

            if 'set_updateinfo_from' in yml:
                update.set('from', yml['set_updateinfo_from'])

            id_node = update.find('id')
            if 'set_updateinfo_id_prefix' in yml:
                # avoid double application of the same prefix
                id_text = id_node.text.removeprefix(yml['set_updateinfo_id_prefix'])
                id_node.text = yml['set_updateinfo_id_prefix'] + id_text

            for pkgentry in parent.findall('package'):
                src = pkgentry.get('src')

                # check for embargo date
                embargo = pkgentry.get('embargo_date')
                if embargo is not None:
                    try:
                        embargo_time = datetime.strptime(embargo, '%Y-%m-%d %H:%M')
                    except ValueError:
                        embargo_time = datetime.strptime(embargo, '%Y-%m-%d')

                    if embargo_time > datetime.now():
                        warn(f"Update is still under embargo! {update.find('id').text}")
                        if 'block_updates_under_embargo' in yml['build_options']:
                            die("shutting down due to block_updates_under_embargo flag")

                # clean internal attributes
                for internal_attributes in (
                    'supportstatus',
                    'superseded_by',
                    'embargo_date',
                ):
                    pkgentry.attrib.pop(internal_attributes, None)

                # check if we have files for the entry
                if os.path.exists(rpmdir + '/' + src):
                    needed = True
                    continue
                if debugdir and os.path.exists(debugdir + '/' + src):
                    needed = True
                    continue
                if sourcedir and os.path.exists(sourcedir + '/' + src):
                    needed = True
                    continue
                name = pkgentry.get('name')
                pkgarch = pkgentry.get('arch')

                # do not insist on debuginfo or source packages
                if pkgarch == 'src' or pkgarch == 'nosrc':
                    parent.remove(pkgentry)
                    continue
                if name.endswith('-debuginfo') or name.endswith('-debugsource'):
                    parent.remove(pkgentry)
                    continue
                # ignore unwanted architectures
                if pkgarch != 'noarch' and pkgarch not in yml['architectures']:
                    parent.remove(pkgentry)
                    continue

                # check if we should have this package
                if name in main_pkgset_names:
                    updatepkg = create_updateinfo_package(pkgentry)
                    if main_pkgset.matchespkg(None, updatepkg):
                        warn(f"package {updatepkg} not found")
                        missing_package = True

                parent.remove(pkgentry)

            if not needed:
                if 'abort_on_empty_updateinfo' in yml['build_options']:
                    die(f'Stumbled over an updateinfo.xml where no rpm is used: {id_node.text}')
                continue

            if not uitemp:
                uitemp = open(rpmdir + '/updateinfo.xml', 'x')
                uitemp.write("<updates>\n  ")
            uitemp.write(ET.tostring(update, encoding=ET_ENCODING))

    if uitemp:
        uitemp.write("</updates>\n")
        uitemp.close()

        mr = ModifyrepoWrapper(
                file=os.path.join(rpmdir, "updateinfo.xml"),
                directory=os.path.join(rpmdir, "repodata"),
                )
        mr.run_cmd()

        os.unlink(rpmdir + '/updateinfo.xml')

    if missing_package and 'ignore_missing_packages' not in yml['build_options']:
        die('Abort due to missing packages for updateinfo')

def run_createrepo(rpmdir, yml, content=[], repos=[]):
    product_type = '/o'
    if 'product-type' in yml:
        if yml['product-type'] == 'base':
            product_type = '/o'
        elif yml['product-type'] in ['module', 'extension']:
            product_type = '/a'
        else:
            die('Undefined product-type')
    cr = CreaterepoWrapper(directory=".")
    cr.distro = f"{yml.get('summary', yml['name'])} {yml['version']}"
    cr.cpeid = f"cpe:{product_type}:{yml['vendor']}:{yml['name']}:{yml['version']}"
    if 'update' in yml:
        cr.cpeid = cr.cpeid + f":{yml['update']}"
        if 'edition' in yml:
            cr.cpeid = cr.cpeid + f":{yml['edition']}"
    elif 'edition' in yml:
        cr.cpeid = cr.cpeid + f"::{yml['edition']}"
    cr.repos = repos
    # cr.split = True
    # cr.baseurl = "media://"
    cr.content = content
    cr.excludes = ["boot"]
    # default case including all architectures. Unique URL for all of them.
    # we need it in any case, at least temporarily
    cr.run_cmd(cwd=rpmdir, stdout=subprocess.PIPE)
    # multiple arch specific meta data set
    if 'repodata' in yml:
        cr.complete_arch_list = yml['architectures']
        for arch in yml['architectures']:
            if os.path.isdir(f"{rpmdir}/{arch}"):
                cr.arch_specific_repodata = arch
                cr.run_cmd(cwd=rpmdir, stdout=subprocess.PIPE)


def unpack_one_meta_rpm(rpmdir, rpm, medium):
    tempdir = rpmdir + "/temp"
    os.mkdir(tempdir)
    run_helper(['unrpm', '-q', rpm.location], cwd=tempdir, failmsg=f"extract {rpm.location}")

    skel_dir = tempdir + "/usr/lib/skelcd/CD" + str(medium)
    if os.path.exists(skel_dir):
        shutil.copytree(skel_dir, rpmdir, dirs_exist_ok=True)
    shutil.rmtree(tempdir)


def unpack_meta_rpms(rpmdir, yml, pool, arch, flavor, medium):
    missing_package = False
    for unpack_pkgset_name in yml.get('unpack', []):
        unpack_pkgset = create_package_set(yml, arch, flavor, unpack_pkgset_name, pool=pool)
        for sel in unpack_pkgset:
            rpm = pool.lookup_rpm(arch, sel.name, sel.op, sel.epoch, sel.version, sel.release)
            if not rpm:
                warn(f"package {sel} not found")
                missing_package = True
                continue
            unpack_one_meta_rpm(rpmdir, rpm, medium)

    if missing_package and 'ignore_missing_packages' not in yml['build_options']:
        die('Abort due to missing meta packages')


def create_package_set_all(setname, pool, arch):
    if pool is None:
        die('need a package pool to create the __all__ package set')
    pkgset = PkgSet(setname)
    pkgset.add_specs([n for n in pool.names(arch) if not (n.endswith('-debuginfo') or n.endswith('-debugsource'))])

    return pkgset


def create_package_set(yml, arch, flavor, setname, pool=None):
    pkgsets = {}
    for entry in list(yml['packagesets']):
        name = entry['name'] if 'name' in entry else 'main'
        if name in pkgsets and pkgsets[name] is not None:
            die(f'package set {name} is already defined')
        pkgsets[name] = None
        if 'flavors' in entry:
            if flavor is None or entry['flavors'] is None:
                continue
            if flavor not in entry['flavors']:
                continue
        if 'architectures' in entry:
            if arch not in entry['architectures']:
                continue
        pkgset = PkgSet(name)
        pkgsets[name] = pkgset
        if 'supportstatus' in entry:
            pkgset.supportstatus = entry['supportstatus']
        if 'packages' in entry and entry['packages']:
            pkgset.add_specs(entry['packages'])
        for setop in 'add', 'sub', 'intersect':
            if setop not in entry:
                continue
            for oname in entry[setop]:
                if oname == '__all__' and oname not in pkgsets:
                    pkgsets[oname] = create_package_set_all(oname, pool, arch)
                if oname == name or oname not in pkgsets:
                    die(f'package set {oname} does not exist')
                if pkgsets[oname] is None:
                    pkgsets[oname] = PkgSet(oname)  # instantiate
                if setop == 'add':
                    pkgset.add(pkgsets[oname])
                elif setop == 'sub':
                    pkgset.sub(pkgsets[oname])
                elif setop == 'intersect':
                    pkgset.intersect(pkgsets[oname])
                else:
                    die(f"unsupported package set operation '{setop}'")

    if setname not in pkgsets:
        die(f'package set {setname} is not defined')
    if pkgsets[setname] is None:
        pkgsets[setname] = PkgSet(setname)  # instantiate
    return pkgsets[setname]
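
# Illustration (hypothetical packagesets): an entry
#   {'name': 'main', 'packages': ['skelcd'], 'add': ['base'], 'sub': ['32bit']}
# resolves to the 'skelcd' spec plus everything in 'base', minus whatever
# is also in '32bit'.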


def link_rpms_to_tree(rpmdir, yml, pool, arch, flavor, debugdir=None, sourcedir=None):
    singlemode = True
    if 'take_all_available_versions' in yml['build_options']:
        singlemode = False
    add_slsa = False
    if 'add_slsa_provenance' in yml['build_options']:
        add_slsa = True

    referenced_update_rpms = None
    if 'updateinfo_packages_only' in yml['build_options']:
        if not pool.updateinfos:
            die("filtering for updates enabled, but no updateinfo found")
        if singlemode:
            die("filtering for updates enabled, but take_all_available_versions is not set")

        referenced_update_rpms = {}
        for u in sorted(pool.lookup_all_updateinfos()):
            for update in u.root.findall('update'):
                parent = update.findall('pkglist')[0].findall('collection')[0]
                for pkgentry in parent.findall('package'):
                    referenced_update_rpms[pkgentry.get('src')] = 1

    main_pkgset = create_package_set(yml, arch, flavor, 'main', pool=pool)

    missing_package = None
    for sel in main_pkgset:
        if singlemode:
            rpm = pool.lookup_rpm(arch, sel.name, sel.op, sel.epoch, sel.version, sel.release)
            rpms = [rpm] if rpm else []
        else:
            rpms = pool.lookup_all_rpms(arch, sel.name, sel.op, sel.epoch, sel.version, sel.release)

        if not rpms:
            if referenced_update_rpms is not None:
                continue
            warn(f"package {sel} not found for {arch}")
            missing_package = True
            continue

        for rpm in rpms:
            if referenced_update_rpms is not None:
                if (rpm.arch + '/' + rpm.canonfilename) not in referenced_update_rpms:
                    note(f"No update for {rpm}")
                    continue

            link_entry_into_dir(rpm, rpmdir, add_slsa=add_slsa)
            if rpm.name in supportstatus_override:
                supportstatus[rpm.name] = supportstatus_override[rpm.name]
            else:
                supportstatus[rpm.name] = sel.supportstatus

            srcrpm = rpm.get_src_package()
            if not srcrpm:
                warn(f"package {rpm} does not have a source rpm")
                continue

            if sourcedir:
                # we also need to add the src rpm
                srpm = pool.lookup_rpm(srcrpm.arch, srcrpm.name, '=', None, srcrpm.version, srcrpm.release)
                if srpm:
                    link_entry_into_dir(srpm, sourcedir, add_slsa=add_slsa)
                else:
                    details = f"         required by  {rpm}"
                    warn(f"source rpm package {srcrpm} not found", details=details)
                    missing_package = True

            if debugdir:
                drpm = pool.lookup_rpm(arch, srcrpm.name + "-debugsource", '=', None, srcrpm.version, srcrpm.release)
                if drpm:
                    link_entry_into_dir(drpm, debugdir, add_slsa=add_slsa)

                drpm = pool.lookup_rpm(arch, rpm.name + "-debuginfo", '=', rpm.epoch, rpm.version, rpm.release)
                if drpm:
                    link_entry_into_dir(drpm, debugdir, add_slsa=add_slsa)

    if missing_package and 'ignore_missing_packages' not in yml['build_options']:
        die('Abort due to missing packages')


def link_file_into_dir(source, directory, name=None):
    if not os.path.exists(directory):
        os.mkdir(directory)
    if name is None:
        name = os.path.basename(source)
    outname = directory + '/' + name
    if not os.path.exists(outname):
        if os.path.islink(source):
            # osc creates a repos/ structure with symlinks to its cache,
            # but these would point outside of our media
            shutil.copyfile(source, outname)
        else:
            os.link(source, outname)


def link_entry_into_dir(entry, directory, add_slsa=False):
    canonfilename = entry.canonfilename
    outname = directory + '/' + entry.arch + '/' + canonfilename
    if not os.path.exists(outname):
        link_file_into_dir(entry.location, directory + '/' + entry.arch, name=canonfilename)
        add_entry_to_report(entry, outname)
        if add_slsa:
            slsalocation = entry.location.removesuffix('.rpm') + '.slsa_provenance.json'
            if os.path.exists(slsalocation):
                slsaname = canonfilename.removesuffix('.rpm') + '.slsa_provenance.json'
                link_file_into_dir(slsalocation, directory + '/' + entry.arch, name=slsaname)

def add_entry_to_report(entry, outname):
    # first one wins, see link_file_into_dir
    if outname not in tree_report:
        tree_report[outname] = entry


def write_report_file(directory, outfile):
    root = ET.Element('report')
    if not directory.endswith('/'):
        directory += '/'
    for fn, entry in sorted(tree_report.items()):
        if not fn.startswith(directory):
            continue
        binary = ET.SubElement(root, 'binary')
        binary.text = 'obs://' + entry.origin
        for tag in (
            'name',
            'epoch',
            'version',
            'release',
            'arch',
            'buildtime',
            'disturl',
            'license',
        ):
            val = getattr(entry, tag, None)
            if val is None or val == '':
                continue
            if tag == 'epoch' and val == 0:
                continue
            if tag == 'arch':
                binary.set('binaryarch', str(val))
            else:
                binary.set(tag, str(val))
        if entry.name.endswith('-release'):
            cpeid = entry.product_cpeid
            if cpeid:
                binary.set('cpeid', cpeid)
    tree = ET.ElementTree(root)
    tree.write(outfile)


if __name__ == "__main__":
    try:
        status = main()
    except Exception as err:
        # Error handler of last resort.
        logger.error(repr(err))
        logger.critical("shutting down due to fatal error")
        raise  # print stack trace
    else:
        raise SystemExit(status)

# vim: sw=4 et
07070100000018000081a4000000000000000000000001682dad4d00001441000000000000000000000000000000000000003500000000product-composer/src/productcomposer/core/Package.py""" Package base class

"""
import os
import re
import rpm
import functools


@functools.total_ordering
class Package:
    def __init__(self, location=None, rpm_ts=None):
        if location is None:
            return
        self.location = location
        h = self._read_rpm_header(rpm_ts=rpm_ts)
        for tag in 'name', 'epoch', 'version', 'release', 'arch', 'sourcerpm', \
                   'buildtime', 'disturl', 'license', 'filesizes', 'filemodes', \
                   'filedevices', 'fileinodes', 'dirindexes', 'basenames', 'dirnames':
            val = h[tag]
            if isinstance(val, bytes):
                val = val.decode('utf-8')
            setattr(self, tag, val)
        if not self.sourcerpm:
            self.arch = 'nosrc' if h['nosource'] or h['nopatch'] else 'src'

    def __eq__(self, other):
        return (self.name, self.evr) == (other.name, other.evr)

    def __lt__(self, other):
        if self.name == other.name:
            return rpm.labelCompare((self.epoch, self.version, self.release), (other.epoch, other.version, other.release)) == -1
        return self.name < other.name

    def __str__(self):
        return self.nevra

    @property
    def evr(self):
        if self.epoch and self.epoch != "0":
            return f"{self.epoch}:{self.version}-{self.release}"
        return f"{self.version}-{self.release}"

    @property
    def nevra(self):
        return f"{self.name}-{self.evr}.{self.arch}"

    @property
    def canonfilename(self):
        return f"{self.name}-{self.version}-{self.release}.{self.arch}.rpm"

    @property
    def provides(self):
        h = self._read_rpm_header()
        if h is None:
            return None
        return [dep.DNEVR()[2:] for dep in rpm.ds(h, 'provides')]

    def _read_rpm_header(self, rpm_ts=None):
        if self.location is None:
            return None
        if rpm_ts is None:
            rpm_ts = rpm.TransactionSet()
            rpm_ts.setVSFlags(rpm._RPMVSF_NOSIGNATURES)
        fd = os.open(self.location, os.O_RDONLY)
        h = rpm_ts.hdrFromFdno(fd)
        os.close(fd)
        return h

    @staticmethod
    def _cpeid_hexdecode(p):
        pout = ''
        while True:
            match = re.match(r'^(.*?)%([0-9a-fA-F][0-9a-fA-F])(.*)', p)
            if not match:
                return pout + p
            pout = pout + match.group(1) + chr(int(match.group(2), 16))
            p = match.group(3)

    @functools.cached_property
    def product_cpeid(self):
        cpeid_prefix = "product-cpeid() = "
        for dep in self.provides:
            if dep.startswith(cpeid_prefix):
                return Package._cpeid_hexdecode(dep[len(cpeid_prefix):])
        return None

    def get_src_package(self):
        if not self.sourcerpm:
            return None
        match = re.match(r'^(.*)-([^-]*)-([^-]*)\.([^\.]*)\.rpm$', self.sourcerpm)
        if not match:
            return None
        srcpkg = Package()
        srcpkg.name = match.group(1)
        srcpkg.epoch = None             # sadly unknown
        srcpkg.version = match.group(2)
        srcpkg.release = match.group(3)
        srcpkg.arch = match.group(4)
        return srcpkg

    def matches(self, arch, name, op, epoch, version, release):
        if name is not None and self.name != name:
            return False
        if arch is not None and self.arch != arch:
            if arch == 'src' or arch == 'nosrc' or self.arch != 'noarch':
                return False
        if op is None:
            return True
        # special case a missing release or epoch in the match as labelCompare
        # does not handle it
        tepoch = self.epoch if epoch is not None else None
        trelease = self.release if release is not None else None
        cmp = rpm.labelCompare((tepoch, self.version, trelease), (epoch, version, release))
        if cmp > 0:
            return '>' in op
        if cmp < 0:
            return '<' in op
        return '=' in op
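
    # Illustration (hypothetical values): with op='>', version='2.38' and
    # release='9', a glibc-2.38-10 package matches while glibc-2.38-9 does
    # not, because labelCompare() returns 0 there and '=' is not in '>'.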

    def get_directories(self):
        h = self._read_rpm_header()
        if h is None:
            return None
        dirs = {}
        filedevs = h['filedevices']
        fileinos = h['fileinodes']
        filesizes = h['filesizes']
        filemodes = h['filemodes']
        dirnames = h['dirnames']
        dirindexes = h['dirindexes']
        basenames = h['basenames']
        if not basenames:
            return dirs
        for basename, dirindex, filesize, filemode, filedev, fileino in zip(basenames, dirindexes, filesizes, filemodes, filedevs, fileinos):
            dirname = dirnames[dirindex]
            if isinstance(basename, bytes):
                basename = basename.decode('utf-8')
            if isinstance(dirname, bytes):
                dirname = dirname.decode('utf-8')
            if dirname != '' and not dirname.endswith('/'):
                dirname += '/'
            if dirname not in dirs:
                dirs[dirname] = []
            cookie = f"{filedev}/{fileino}"
            if (filemode & 0o170000) != 0o100000:
                filesize = 0
            dirs[dirname].append((basename, filesize, cookie))
        return dirs
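
    # Shape of the returned mapping (values are illustrative):
    #   {'/usr/bin/': [('bash', 1234567, '64768/393219')], ...}
    # The 'dev/inode' cookie lets callers spot hardlinked duplicates;
    # non-regular files are reported with size 0.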

# vim: sw=4 et
07070100000019000081a4000000000000000000000001682dad4d000014f7000000000000000000000000000000000000003700000000product-composer/src/productcomposer/core/PkgSelect.py""" Package selector specification

"""

import re
import rpm


class PkgSelect:
    def __init__(self, spec, supportstatus=None):
        self.supportstatus = supportstatus
        match = re.match(r'([^><=]*)([><=]=?)(.*)', spec.replace(' ', ''))
        if match:
            self.name = match.group(1)
            self.op = match.group(2)
            epoch = '0'
            version = match.group(3)
            release = None
            # maxsplit must be 1 so the two-value unpacking cannot fail
            if ':' in version:
                (epoch, version) = version.split(':', 1)
            if '-' in version:
                (version, release) = version.rsplit('-', 1)
            self.epoch = epoch
            self.version = version
            self.release = release
        else:
            self.name = spec
            self.op = None
            self.epoch = None
            self.version = None
            self.release = None
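
        # Examples of accepted spec syntax (illustrative):
        #   PkgSelect('foo')            -> name-only selector, matches any foo
        #   PkgSelect('foo >= 1.2-3.1') -> name='foo', op='>=', epoch='0',
        #                                  version='1.2', release='3.1'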

    def matchespkg(self, arch, pkg):
        return pkg.matches(arch, self.name, self.op, self.epoch, self.version, self.release)

    @staticmethod
    def _sub_ops(op1, op2):
        if '>' in op2:
            op1 = re.sub(r'>', '', op1)
        if '<' in op2:
            op1 = re.sub(r'<', '', op1)
        if '=' in op2:
            op1 = re.sub(r'=', '', op1)
        return op1

    @staticmethod
    def _intersect_ops(op1, op2):
        outop = ''
        if '<' in op1 and '<' in op2:
            outop = outop + '<'
        if '>' in op1 and '>' in op2:
            outop = outop + '>'
        if '=' in op1 and '=' in op2:
            outop = outop + '='
        return outop
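
    # Operator algebra examples (illustrative):
    #   PkgSelect._sub_ops('<=>', '=')       -> '<>'  (drop the shared '=')
    #   PkgSelect._intersect_ops('>=', '<=') -> '='   (keep only the common part)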

    def _cmp_evr(self, other):
        release1 = self.release if self.release is not None else other.release
        release2 = other.release if other.release is not None else self.release
        return rpm.labelCompare((self.epoch, self.version, release1), (other.epoch, other.version, release2))

    def _throw_unsupported_sub(self, other):
        raise RuntimeError(f"unsupported sub operation: {self}, {other}")

    def _throw_unsupported_intersect(self, other):
        raise RuntimeError(f"unsupported intersect operation: {self}, {other}")

    def sub(self, other):
        if self.name != other.name:
            return self
        if other.op is None:
            return None
        if self.op is None:
            out = self.copy()
            out.op = PkgSelect._sub_ops('<=>', other.op)
            return out
        cmp = self._cmp_evr(other)
        if cmp == 0:
            if (self.release is not None and other.release is None) or (other.release is not None and self.release is None):
                self._throw_unsupported_sub(other)
            out = self.copy()
            out.op = PkgSelect._sub_ops(self.op, other.op)
            return out if out.op != '' else None
        elif cmp < 0:
            if '>' in self.op:
                self._throw_unsupported_sub(other)
            return None if '<' in other.op else self
        elif cmp > 0:
            if '<' in self.op:
                self._throw_unsupported_sub(other)
            return None if '>' in other.op else self
        self._throw_unsupported_sub(other)

    def intersect(self, other):
        if self.name != other.name:
            return None
        if other.op is None:
            return self
        if self.op is None:
            return other
        cmp = self._cmp_evr(other)
        if cmp == 0:
            if self.release is not None or other.release is None:
                out = self.copy()
            else:
                out = other.copy()
            out.op = PkgSelect._intersect_ops(self.op, other.op)
            if out.op == '':
                if (self.release is not None and other.release is None) or (other.release is not None and self.release is None):
                    self._throw_unsupported_intersect(other)
                return None
            return out
        elif cmp < 0:
            if '>' in self.op and '<' not in other.op:
                return other
            if '<' in other.op and '>' not in self.op:
                return self
            if '<' not in other.op and '>' not in self.op:
                return None
        elif cmp > 0:
            if '>' in other.op and '<' not in self.op:
                return self
            if '<' in self.op and '>' not in other.op:
                return other
            if '<' not in self.op and '>' not in other.op:
                return None
        self._throw_unsupported_intersect(other)

    def copy(self):
        out = PkgSelect(self.name)
        out.op = self.op
        out.epoch = self.epoch
        out.version = self.version
        out.release = self.release
        out.supportstatus = self.supportstatus
        return out

    def __str__(self):
        if self.op is None:
            return self.name
        evr = self.version
        if self.release is not None:
            evr = evr + '-' + self.release
        if self.epoch and self.epoch != '0':
            evr = self.epoch + ':' + evr
        return self.name + ' ' + self.op + ' ' + evr

    def __hash__(self):
        if self.op:
            return hash((self.name, self.op, self.epoch, self.version, self.release))
        else:
            return hash(self.name)

    def __eq__(self, other):
        if self.name != other.name:
            return False
        return str(self) == str(other)

# vim: sw=4 et
0707010000001a000081a4000000000000000000000001682dad4d00000ab3000000000000000000000000000000000000003400000000product-composer/src/productcomposer/core/PkgSet.py""" Package selection set

"""

from .PkgSelect import PkgSelect


class PkgSet:
    def __init__(self, name):
        self.name = name
        self.pkgs = []
        self.byname = None
        self.supportstatus = None

    def _create_byname(self):
        byname = {}
        for sel in self.pkgs:
            name = sel.name
            if name not in byname:
                byname[name] = []
            byname[name].append(sel)
        self.byname = byname

    def _byname(self):
        if self.byname is None:
            self._create_byname()
        return self.byname

    def add_specs(self, specs):
        for spec in specs:
            sel = PkgSelect(spec, supportstatus=self.supportstatus)
            self.pkgs.append(sel)
        self.byname = None
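
    # Usage sketch (set names and specs are illustrative):
    #   base = PkgSet('base'); base.add_specs(['glibc', 'bash >= 5.0'])
    #   drop = PkgSet('drop'); drop.add_specs(['bash'])
    #   base.sub(drop)   # removes every 'bash' selector, keeps 'glibc'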

    def add(self, other):
        s1 = set(self)
        for sel in other.pkgs:
            if sel not in s1:
                if self.supportstatus is not None and sel.supportstatus is None:
                    sel = sel.copy()
                    sel.supportstatus = self.supportstatus
                self.pkgs.append(sel)
                s1.add(sel)
        self.byname = None

    def sub(self, other):
        otherbyname = other._byname()
        pkgs = []
        for sel in self.pkgs:
            name = sel.name
            if name not in otherbyname:
                pkgs.append(sel)
                continue
            for other_sel in otherbyname[name]:
                if sel is not None:
                    sel = sel.sub(other_sel)
            if sel is not None:
                pkgs.append(sel)
        self.pkgs = pkgs
        self.byname = None

    def intersect(self, other):
        otherbyname = other._byname()
        pkgs = []
        s1 = set()
        for sel in self.pkgs:
            name = sel.name
            if name not in otherbyname:
                continue
            for osel in otherbyname[name]:
                isel = sel.intersect(osel)
                if isel and isel not in s1:
                    pkgs.append(isel)
                    s1.add(isel)
        self.pkgs = pkgs
        self.byname = None

    def matchespkg(self, arch, pkg):
        if self.byname is None:
            self._create_byname()
        if pkg.name not in self.byname:
            return False
        for sel in self.byname[pkg.name]:
            if sel.matchespkg(arch, pkg):
                return True
        return False

    def names(self):
        if self.byname is None:
            self._create_byname()
        return set(self.byname.keys())

    def __str__(self):
        return self.name + "(" + ", ".join(str(p) for p in self.pkgs) + ")"

    def __iter__(self):
        return iter(self.pkgs)

# vim: sw=4 et
0707010000001b000081a4000000000000000000000001682dad4d000009bf000000000000000000000000000000000000003200000000product-composer/src/productcomposer/core/Pool.py""" Pool base class

"""

import os
import rpm

from .Package import Package
from .Updateinfo import Updateinfo


class Pool:
    def __init__(self):
        self.rpms = {}
        self.updateinfos = {}

    def make_rpm(self, location, rpm_ts=None):
        return Package(location, rpm_ts=rpm_ts)

    def make_updateinfo(self, location):
        return Updateinfo(location)

    def add_rpm(self, pkg, origin=None):
        if origin is not None:
            pkg.origin = origin
        name = pkg.name
        if name not in self.rpms:
            self.rpms[name] = []
        self.rpms[name].append(pkg)

    def add_updateinfo(self, uinfo):
        self.updateinfos[uinfo.location] = uinfo

    def scan(self, directory):
        ts = rpm.TransactionSet()
        ts.setVSFlags(rpm._RPMVSF_NOSIGNATURES)

        for dirpath, dirs, files in os.walk(directory):
            reldirpath = os.path.relpath(dirpath, directory)
            for filename in files:
                fname = os.path.join(dirpath, filename)
                if filename.endswith('updateinfo.xml'):
                    uinfo = self.make_updateinfo(fname)
                    self.add_updateinfo(uinfo)
                elif filename.endswith('.rpm'):
                    pkg = self.make_rpm(fname, rpm_ts=ts)
                    self.add_rpm(pkg, os.path.join(reldirpath, filename))
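
    # Usage sketch (the directory path is illustrative):
    #   pool = Pool()
    #   pool.scan('/work/repos')                 # collects *.rpm and updateinfo.xml
    #   best = pool.lookup_rpm('x86_64', 'bash') # highest-sorting match, or None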

    def lookup_all_rpms(self, arch, name, op=None, epoch=None, version=None, release=None):
        if name not in self.rpms:
            return []
        return [pkg for pkg in self.rpms[name] if pkg.matches(arch, name, op, epoch, version, release)]

    def lookup_rpm(self, arch, name, op=None, epoch=None, version=None, release=None):
        return max(self.lookup_all_rpms(arch, name, op, epoch, version, release), default=None)

    def lookup_all_updateinfos(self):
        return self.updateinfos.values()

    def remove_rpms(self, arch, name, op=None, epoch=None, version=None, release=None):
        if name not in self.rpms:
            return
        self.rpms[name] = [pkg for pkg in self.rpms[name] if not pkg.matches(arch, name, op, epoch, version, release)]

    def names(self, arch=None):
        if arch is None:
            return set(self.rpms.keys())
        names = set()
        for name in self.rpms:
            for pkg in self.rpms[name]:
                if pkg.matches(arch, None, None, None, None, None):
                    names.add(name)
                    break
        return names

# vim: sw=4 et
0707010000001c000081a4000000000000000000000001682dad4d000001d9000000000000000000000000000000000000003800000000product-composer/src/productcomposer/core/Updateinfo.py""" Updateinfo base class

"""
import functools

from xml.etree import ElementTree as ET


@functools.total_ordering
class Updateinfo:
    def __init__(self, location=None):
        if location is None:
            return
        self.root = ET.parse(location).getroot()
        self.location = location

    def __eq__(self, other):
        return self.location == other.location

    def __lt__(self, other):
        return self.location < other.location

# vim: sw=4 et
0707010000001d000081a4000000000000000000000001682dad4d00000026000000000000000000000000000000000000003600000000product-composer/src/productcomposer/core/__init__.py""" Core implementation package.

"""
0707010000001e000081a4000000000000000000000001682dad4d00000dfc000000000000000000000000000000000000003400000000product-composer/src/productcomposer/core/config.py""" Global application configuration.

This module defines a global configuration object. Other modules should use
this object to store application-wide configuration values.

"""
from pathlib import Path
from string import Template
import re
try:
    import tomllib  # Python 3.11+
except ModuleNotFoundError:
    import tomli as tomllib

from .logger import logger


__all__ = "config", "TomlConfig"


class _AttrDict(dict):
    """ A dict-like object with attribute access.

    """
    def __getitem__(self, key: str):
        """ Access dict values by key.

        :param key: key to retrieve
        """
        value = super(_AttrDict, self).__getitem__(key)
        if isinstance(value, dict):
            # For mixed recursive assignment (e.g. `a["b"].c = value`) to work
            # as expected, all dict-like values must themselves be _AttrDicts.
            # The "right way" to do this would be to convert to an _AttrDict on
            # assignment, but that requires overriding both __setitem__
            # (straightforward) and __init__ (good luck). An explicit type
            # check is used here instead of EAFP because exceptions would be
            # frequent for hierarchical data with lots of nested dicts.
            self[key] = value = _AttrDict(value)
        return value

    def __getattr__(self, key: str) -> object:
        """ Get dict values as attributes.

        :param key: key to retrieve
        """
        return self[key]

    def __setattr__(self, key: str, value: object):
        """ Set dict values as attributes.

        :param key: key to set
        :param value: new value for key
        """
        self[key] = value
        return


class TomlConfig(_AttrDict):
    """ Store data from TOML configuration files.

    """
    def __init__(self, paths=None, root=None, params=None):
        """ Initialize this object.

        :param paths: one or more config file paths to load
        :param root: place config values at this root
        :param params: mapping of parameter substitutions
        """
        super().__init__()
        if paths:
            self.load(paths, root, params)
        return

    def load(self, paths, root=None, params=None):
        """ Load data from configuration files.

        Configuration values are read from a sequence of one or more TOML
        files. Files are read in the given order, and a duplicate value will
        overwrite the existing value. If a root is specified the config data
        will be loaded under that attribute.

        :param paths: one or more config file paths to load
        :param root: place config values at this root
        :param params: mapping of parameter substitutions
        """
        try:
            paths = [Path(paths)]
        except TypeError:
            # Assume this is a sequence of paths.
            pass
        if params is None:
            params = {}
        for path in paths:
            # Comments must be stripped prior to template substitution to avoid
            # any unintended semantics such as stray `$` symbols.
            comment = re.compile(r"\s*#.*$", re.MULTILINE)
            with open(path, "rt") as stream:
                logger.info(f"Reading config data from '{path}'")
                conf = comment.sub("", stream.read())
                toml = Template(conf).substitute(params)
                data = tomllib.loads(toml)
            if root:
                self.setdefault(root, {}).update(data)
            else:
                self.update(data)
        return


config = TomlConfig()
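
# Usage sketch (file name, params, and keys are illustrative):
#   config.load("compose.toml", params={"arch": "x86_64"})
#   config["build"]["flavor"]   # plain dict access
#   config.build.flavor         # equivalent attribute access via _AttrDict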
0707010000001f000081a4000000000000000000000001682dad4d00000be0000000000000000000000000000000000000003400000000product-composer/src/productcomposer/core/logger.py""" Global application logging.

All modules use the same global logging object. No messages will be emitted
until the logger is started.

"""
from logging import getLogger, getLoggerClass, setLoggerClass
from logging import Formatter, NullHandler, StreamHandler


__all__ = "logger",


class _Logger(getLoggerClass()):
    """ Message logger.

    """
    LOGFMT = "%(asctime)s;%(levelname)s;%(name)s;%(message)s"

    def __init__(self, name=None):
        """ Initialize this logger.

        Loggers with the same name refer to the same underlying object.
        Names are hierarchical, e.g. 'parent.child' defines a logger that is a
        descendant of 'parent'.

        :param name: logger name (application name by default)
        """
        # With a NullHandler, client code may make logging calls without regard
        # to whether the logger has been started yet. The standard Logger API
        # may be used to add and remove additional handlers, but the
        # NullHandler should always be left in place.
        super().__init__(name or __name__.split(".")[0])
        self.addHandler(NullHandler())  # default to no output
        return

    def start(self, level="WARN", stream=None):
        """ Start logging to a stream.

        Until the logger is started, no messages will be emitted. This applies
        to all loggers with the same name and any child loggers.

        Multiple streams can be logged to by calling start() for each one.
        Calling start() more than once for the same stream will result in
        duplicate records to that stream.

        Messages less than the given priority level will be ignored. The
        default level conforms to the *nix convention that a successful run
        should produce no diagnostic output. Call setLevel() to change the
        logger's priority level after it has been started. Available levels and
        their suggested meanings:

            DEBUG - output useful for developers
            INFO - trace normal program flow, especially external interactions
            WARN - an abnormal condition was detected that might need attention
            ERROR - an error was detected but execution continued
            CRITICAL - an error was detected and execution was halted

        :param level: logger priority level
        :param stream: output stream (stderr by default)
        """
        self.setLevel(level.upper())
        handler = StreamHandler(stream)
        handler.setFormatter(Formatter(self.LOGFMT))
        handler.setLevel(self.level)
        self.addHandler(handler)
        return

    def stop(self):
        """ Stop logging with this logger.

        """
        for handler in self.handlers[1:]:
            # Remove everything but the NullHandler.
            self.removeHandler(handler)
        return


# Never instantiate a Logger object directly, always use getLogger().
setLoggerClass(_Logger)  # applies to all subsequent getLogger() calls
logger = getLogger(__name__.split(".", 1)[0])  # use application name
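
# Usage sketch (level and message text are illustrative):
#   logger.start("info")            # emit INFO and above to stderr
#   logger.info("compose started")
#   logger.stop()                   # back to the silent NullHandler-only state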
07070100000020000041ed000000000000000000000001682dad4d00000000000000000000000000000000000000000000002a00000000product-composer/src/productcomposer/core07070100000021000081a4000000000000000000000001682dad4d0000015b000000000000000000000000000000000000003100000000product-composer/src/productcomposer/defaults.py"""
Product composer executes programs that have their own defaults.
These defaults rarely change, but if they do, they'll impact product composes.

To avoid such unexpected changes, we define our defaults here
and explicitly pass them to the programs.
"""


CREATEREPO_CHECKSUM_TYPE: str = "sha512"
CREATEREPO_GENERAL_COMPRESS_TYPE: str = "zstd"
07070100000022000081a4000000000000000000000001682dad4d00000054000000000000000000000000000000000000003a00000000product-composer/src/productcomposer/wrappers/__init__.pyfrom .createrepo import CreaterepoWrapper
from .modifyrepo import ModifyrepoWrapper
07070100000023000081a4000000000000000000000001682dad4d0000036a000000000000000000000000000000000000003800000000product-composer/src/productcomposer/wrappers/common.py__all__ = (
    "BaseWrapper",
    "Field",
)


import os
import subprocess
from abc import abstractmethod

from pydantic import BaseModel
from pydantic import Field


class BaseWrapper(BaseModel, validate_assignment=True, extra="forbid"):
    @abstractmethod
    def get_cmd(self) -> list[str]:
        pass

    def run_cmd(self, check=True, stdout=None, stderr=None, cwd=None, env=None) -> subprocess.CompletedProcess:
        cmd = self.get_cmd()

        if env:
            # merge partial user-specified env with os.environ and pass it to the program call
            full_env = os.environ.copy()
            full_env.update(env)
            env = full_env

        return subprocess.run(
            cmd,
            check=check,
            stdout=stdout,
            stderr=stderr,
            cwd=cwd,
            env=env,
            encoding="utf-8",
        )
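

# Minimal subclass sketch (EchoWrapper is hypothetical, for illustration only):
#   class EchoWrapper(BaseWrapper):
#       message: str = Field()
#       def get_cmd(self) -> list[str]:
#           return ["echo", self.message]
#   EchoWrapper(message="hello").run_cmd()   # runs `echo hello` with check=True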
07070100000024000081a4000000000000000000000001682dad4d000007d5000000000000000000000000000000000000003c00000000product-composer/src/productcomposer/wrappers/createrepo.pyfrom .common import *
from .. import defaults


class CreaterepoWrapper(BaseWrapper):
    directory: str = Field()
    baseurl: str | None = Field(default=None)
    checksum_type: str = Field(default=defaults.CREATEREPO_CHECKSUM_TYPE)
    content: list[str] | None = Field(default=None)
    cpeid: str | None = Field(default=None)
    distro: str | None = Field(default=None)
    repos: list[str] | None = Field(default=None)
    excludes: list[str] | None = Field(default=None)
    general_compress_type: str = Field(default=defaults.CREATEREPO_GENERAL_COMPRESS_TYPE)
    split: bool = Field(default=False)
    arch_specific_repodata: str | None = Field(default=None)
    complete_arch_list: list[str] | None = Field(default=None)

    def get_cmd(self):
        cmd = ["createrepo", self.directory]

        cmd.append("--no-database")
        cmd.append("--unique-md-filenames")
        cmd.append(f"--checksum={self.checksum_type}")
        cmd.append(f"--general-compress-type={self.general_compress_type}")

        if self.baseurl:
            cmd.append(f"--baseurl={self.baseurl}")

        if self.content:
            for i in self.content:
                cmd.append(f"--content={i}")

        if self.distro:
            if self.cpeid:
                cmd.append(f"--distro={self.cpeid},{self.distro}")
            else:
                cmd.append(f"--distro={self.distro}")

        if self.excludes:
            for i in self.excludes:
                cmd.append(f"--excludes={i}")

        if self.repos:
            for i in self.repos:
                cmd.append(f"--repo={i}")

        if self.split:
            cmd.append("--split")

        if self.arch_specific_repodata:
            cmd.append("--location-prefix=../")
            cmd.append(f"--outputdir={self.arch_specific_repodata}")
            # complete_arch_list may be unset; guard against iterating None
            for exclude in self.complete_arch_list or []:
                if exclude != self.arch_specific_repodata:
                    cmd.append(f"--excludes=*.{exclude}.rpm")

        return cmd
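

# Example of the generated command line (field values are illustrative):
#   CreaterepoWrapper(directory="repo", distro="SLES",
#                     cpeid="cpe:/o:suse:sles:15").get_cmd()
#   -> ['createrepo', 'repo', '--no-database', '--unique-md-filenames',
#       '--checksum=sha512', '--general-compress-type=zstd',
#       '--distro=cpe:/o:suse:sles:15,SLES']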
07070100000025000081a4000000000000000000000001682dad4d000003a0000000000000000000000000000000000000003c00000000product-composer/src/productcomposer/wrappers/modifyrepo.pyfrom pydantic.types import DirectoryPath
from pydantic.types import FilePath

from .common import *
from .. import defaults


class ModifyrepoWrapper(BaseWrapper):
    file: FilePath = Field()
    directory: DirectoryPath = Field()
    checksum_type: str = Field(default=defaults.CREATEREPO_CHECKSUM_TYPE)
    compress: bool = Field(default=True)
    compress_type: str = Field(default=defaults.CREATEREPO_GENERAL_COMPRESS_TYPE)
    mdtype: str | None = Field(default=None)

    def get_cmd(self):
        cmd = ["modifyrepo", self.file, self.directory]

        cmd.append("--unique-md-filenames")
        cmd.append(f"--checksum={self.checksum_type}")

        if self.compress:
            cmd.append("--compress")
        else:
            cmd.append("--no-compress")

        cmd.append(f"--compress-type={self.compress_type}")

        if self.mdtype:
            cmd.append(f"--mdtype={self.mdtype}")

        return cmd
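

# Example of the generated command line (paths are illustrative; note that
# FilePath/DirectoryPath require the paths to actually exist):
#   ModifyrepoWrapper(file="updateinfo.xml", directory="repodata").get_cmd()
#   -> ['modifyrepo', 'updateinfo.xml', 'repodata', '--unique-md-filenames',
#       '--checksum=sha512', '--compress', '--compress-type=zstd']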
07070100000026000041ed000000000000000000000001682dad4d00000000000000000000000000000000000000000000002e00000000product-composer/src/productcomposer/wrappers07070100000027000041ed000000000000000000000001682dad4d00000000000000000000000000000000000000000000002500000000product-composer/src/productcomposer07070100000028000041ed000000000000000000000001682dad4d00000000000000000000000000000000000000000000001500000000product-composer/src07070100000029000081a4000000000000000000000001682dad4d0000002d000000000000000000000000000000000000002200000000product-composer/tests/.gitignore# Ignore pytest cache files.

.pytest_cache/
0707010000002a000081a4000000000000000000000001682dad4d00000043000000000000000000000000000000000000002900000000product-composer/tests/assets/conf1.tomlstr = "$$str"  # literal `$`, no substitution
var = "${var1}$var2"
0707010000002b000081a4000000000000000000000001682dad4d00000035000000000000000000000000000000000000002900000000product-composer/tests/assets/conf2.tomlvar = "${var1}$var3"  # override `var` in conf1.toml
0707010000002c000041ed000000000000000000000001682dad4d00000000000000000000000000000000000000000000001e00000000product-composer/tests/assets0707010000002d000081a4000000000000000000000001682dad4d0000008c000000000000000000000000000000000000002200000000product-composer/tests/pytest.ini[pytest]
# Names with a leading underscore are ignored.
python_files = test_*.py
python_classes = [A-Z]*Test
python_functions = test_*
0707010000002e000081a4000000000000000000000001682dad4d000007e8000000000000000000000000000000000000003900000000product-composer/tests/unit/core/test_config.py.disabled""" Test suite for the core.config module.

"""
from pathlib import Path

import pytest
from productcomposer.core.config import *  # tests __all__


class TomlConfigTest(object):
    """ Test suite for the YamlConfig class.

    """
    @classmethod
    @pytest.fixture
    def files(cls, tmp_path):
        """ Return configuration files for testing.

        """
        files = "conf1.toml", "conf2.toml"
        return tuple(Path("tests", "assets", item) for item in files)

    @classmethod
    @pytest.fixture
    def params(cls):
        """ Define configuration parameters.

        """
        return {"var1": "VAR1", "var2": "VAR2", "var3": "VAR3"}

    def test_item(self):
        """ Test item access.

        """
        config = TomlConfig()
        config["root"] = {}
        config["root"]["key"] = "value"
        assert config["root"]["key"] == "value"
        return

    def test_attr(self):
        """ Test attribute access.

        """
        config = TomlConfig()
        config.root = {}
        config.root.key = "value"
        assert config.root.key == "value"
        return

    @pytest.mark.parametrize("root", (None, "root"))
    def test_init(self, files, params, root):
        """ Test the __init__() method for loading a file.

        """
        merged = {"str": "$str", "var": "VAR1VAR3"}
        config = TomlConfig(files, root, params)
        if root:
            assert config == {root: merged}
        else:
            assert config == merged
        return

    @pytest.mark.parametrize("root", (None, "root"))
    def test_load(self, files, params, root):
        """ Test the load() method.

        """
        merged = {"str": "$str", "var": "VAR1VAR3"}
        config = TomlConfig()
        config.load(files, root, params)
        if root:
            assert config == {root: merged}
        else:
            assert config == merged
        return


# Make the module executable.

if __name__ == "__main__":
    raise SystemExit(pytest.main([__file__]))
0707010000002f000081a4000000000000000000000001682dad4d000008af000000000000000000000000000000000000003900000000product-composer/tests/unit/core/test_logger.py.disabled""" Test suite for the core.logger module.

The script can be executed on its own or incorporated into a larger test suite.
However the tests are run, be aware of which version of the package is actually
being tested. If the package is installed in site-packages, that version takes
precedence over the version in this project directory. Use a virtualenv test
environment or setuptools develop mode to test against the development version.

"""
from logging import DEBUG
from io import StringIO

import pytest

from productcomposer.core.logger import logger as _logger


@pytest.fixture
def logger():
    """ Get the global logger object for testing.

    """
    yield _logger
    _logger.stop()  # reset logger after each test
    return


class LoggerTest(object):
    """ Test suite for the Logger class.

    """
    def test_start(self, capsys, logger):
        """ Test the start method.

        """
        message = "test message"
        logger.start("debug")
        logger.debug(message)
        _, stderr = capsys.readouterr()
        assert logger.level == DEBUG
        assert message in stderr
        return

    def test_stop(self, capsys, logger):
        """ Test the stop() method.

        """
        logger.start("debug")
        logger.stop()
        logger.critical("test")
        _, stderr = capsys.readouterr()
        assert not stderr
        return

    def test_restart(self, capsys, logger):
        """ Test a restart.

        """
        debug_message = "debug message"
        logger.start("INFO")
        logger.debug(debug_message)
        _, stderr = capsys.readouterr()
        assert debug_message not in stderr
        logger.stop()
        logger.start("DEBUG")
        logger.debug(debug_message)
        _, stderr = capsys.readouterr()
        assert debug_message in stderr
        return

    def test_stream(self, logger):
        """ Test output to an alternate stream.

        """
        message = "test message"
        stream = StringIO()
        logger.start("debug", stream)
        logger.debug(message)
        assert message in stream.getvalue()
        return


# Make the module executable.

if __name__ == "__main__":
    raise SystemExit(pytest.main([__file__]))
07070100000030000041ed000000000000000000000001682dad4d00000000000000000000000000000000000000000000002100000000product-composer/tests/unit/core07070100000031000081a4000000000000000000000001682dad4d0000034f000000000000000000000000000000000000003100000000product-composer/tests/unit/test_api.py.disabled""" Test suite for the api module.

The script can be executed on its own or incorporated into a larger test suite.
However the tests are run, be aware of which version of the module is actually
being tested. If the library is installed in site-packages, that version takes
precedence over the version in this project directory. Use a virtualenv test
environment or setuptools develop mode to test against the development version.

"""
import pytest
from productcomposer.api import *  # tests __all__


def test_hello():
    """ Test the hello() function.

    """
    assert hello() == "Hello, World!"
    return


def test_hello_name():
    """ Test the hello() function with a name.

    """
    assert hello("foo") == "Hello, foo!"
    return


# Make the script executable.

if __name__ == "__main__":
    raise SystemExit(pytest.main([__file__]))
07070100000032000081a4000000000000000000000001682dad4d000005d6000000000000000000000000000000000000003100000000product-composer/tests/unit/test_cli.py.disabled""" Test suite for the cli module.

The script can be executed on its own or incorporated into a larger test suite.
However the tests are run, be aware of which version of the module is actually
being tested. If the library is installed in site-packages, that version takes
precedence over the version in this project directory. Use a virtualenv test
environment or setuptools develop mode to test against the development version.

"""
from shlex import split
from subprocess import call
from sys import executable

import pytest
from productcomposer.cli import *  # test __all__


@pytest.fixture(params=("--help", "hello"))
def command(request):
    """ Return the command to run.

    """
    return request.param


def test_main(command):
    """ Test the main() function.

    """
    try:
        status = main(split(command))
    except SystemExit as ex:
        status = ex.code
    assert status == 0
    return

def test_main_none():
    """ Test the main() function with no arguments.
    
    """
    with pytest.raises(SystemExit) as exinfo:
        main([])  # displays a help message and exits gracefully
    assert exinfo.value.code == 1


def test_script(command):
    """ Test command line execution.

    """
    # Call with the --help option as a basic sanity check.
    cmdl = f"{executable} -m obsimager.cli {command} --help"
    assert 0 == call(cmdl.split())
    return


# Make the script executable.

if __name__ == "__main__":
    raise SystemExit(pytest.main([__file__]))
07070100000033000041ed000000000000000000000001682dad4d00000000000000000000000000000000000000000000001c00000000product-composer/tests/unit07070100000034000041ed000000000000000000000001682dad4d00000000000000000000000000000000000000000000001700000000product-composer/tests07070100000035000041ed000000000000000000000001682dad4d00000000000000000000000000000000000000000000001100000000product-composer07070100000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000b00000000TRAILER!!!