File obs-service-node_modules.obscpio of Package obs-service-node_modules
=== obs-service-node_modules/COPYING ===

MIT License
Copyright (c) 2020-2021, SUSE LLC
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
=== obs-service-node_modules/README.md ===

Build RPM packages using node modules offline
=============================================
By default, npm downloads dependencies from `registry.npmjs.org` and
hides the details in the `node_modules` subdirectory. Its job is to
resolve version dependencies and provide them to the Node application in
such a way that all dependencies are satisfied and do not conflict with
each other. To be able to build and rebuild a package from sources, we
need to be able to install and possibly update these dependencies in a
network-less environment like OBS.
When `npm` installs dependencies, it creates a `package-lock.json`
that contains the entire list of packages that can possibly exist in
the `node_modules` directory structure.
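For illustration, a trimmed lock file of this kind might look as follows
(the package name, URL and checksum below are only placeholders; real
`lockfileVersion: 2` files also carry additional fields such as a
top-level `packages` map):
```
{
  "name": "example-app",
  "version": "1.0.0",
  "lockfileVersion": 2,
  "dependencies": {
    "left-pad": {
      "version": "1.3.0",
      "resolved": "https://registry.npmjs.org/left-pad/-/left-pad-1.3.0.tgz",
      "integrity": "sha512-..."
    }
  }
}
```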
The purpose of this tool is to parse `package-lock.json` and prepare all
externally downloaded sources for use by `npm` during `rpmbuild`.
## Runtime requirements
`npm` 7+ is required to produce a `package-lock.json` with
`lockfileVersion: 2`.
## As OBS service
- Get `package-lock.json` with `lockfileVersion: 2`. For example:
  - `npm install --package-lock-only --legacy-peer-deps` with npm 7+
  - `--legacy-peer-deps` is required so that peer dependencies are fetched
    from the remote registry and are available locally during peer
    resolution in the VM. Without it you may get additional warnings
    during install.
- Make sure to put the `package-lock.json` next to the spec file and to
  remove it from the sources. Sources should only contain `package.json`,
  even if they ship a compatible `package-lock.json`.
- Add the following line to the spec file:
```
%include %{_sourcedir}/node_modules.spec.inc
```
- Create file `_service` with the following content:
```
<services>
<service name="node_modules" mode="manual">
<param name="cpio">node_modules.obscpio</param>
<param name="output">node_modules.spec.inc</param>
<param name="source-offset">10000</param>
</service>
</services>
```
- `osc service localrun`
  - this generates the NPM dependency archive along with its source URLs
- `osc add node_modules.obscpio`
- `osc add node_modules.spec.inc`
- `osc commit`
### Example
```
Source10: package-lock.json
Source11: node_modules.spec.inc
%include %{_sourcedir}/node_modules.spec.inc
BuildRequires: local-npm-registry
[...]
%prep
%setup
local-npm-registry %{_sourcedir} install --also=dev
[...]
%build
npm run build
```
### In Practice
https://build.opensuse.org/package/show/openSUSE:Factory/cockpit-podman
### External Resources
https://github.com/openSUSE/npm-localhost-proxy
=== obs-service-node_modules/node_modules.py ===

#!/usr/bin/python3
# Copyright (c) 2020 SUSE LLC
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
import argparse
import hashlib
import json
import logging
import os
import glob
import subprocess
import sys
import stat
import time
import struct
import urllib.error
import urllib.parse
import urllib.request
from base64 import b64decode
from binascii import hexlify
from lxml import etree as ET
from pathlib import Path
# filename -> { "url": <string>, "algo": <string>, "chksum": <string>,
#               "path": set([<string>, ...]) }
MODULE_MAP = dict()
# this is a hack for obs_scm integration
OBS_SCM_COMPRESSION = None
class CpioReader:
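    """Minimal reader for cpio archives in the "newc" (magic 070701) format."""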
def __init__(self, fn):
self.fh = open(fn, 'rb')
def extract(self, outdir):
class CpioFile:
def __init__(self, fh):
self.fh = fh
self.name = None
def __enter__(self):
if (self.fh.tell() & 3):
raise Exception("invalid offset %d" % self.fh.tell())
fmt = "6s8s8s8s8s8s8s8s8s8s8s8s8s8s"
fields = struct.unpack(fmt, self.fh.read(struct.calcsize(fmt)))
if fields[0] != b"070701":
raise Exception("invalid cpio header %s" % fields[0])
names = ("c_ino", "c_mode", "c_uid", "c_gid",
"c_nlink", "c_mtime", "c_filesize",
"c_devmajor", "c_devminor", "c_rdevmajor",
"c_rdevminor", "c_namesize", "c_check")
for (n, v) in zip(names, fields[1:]):
setattr(self, n, int(v, 16))
self.name = struct.unpack('%ds' % (self.c_namesize - 1), self.fh.read(self.c_namesize - 1))[0]
self.fh.read(1) # \0
if (self.c_namesize+2) % 4:
self.fh.read(4 - (self.c_namesize+2) % 4)
return self
def __exit__(self, exc_type, exc_value, traceback):
if exc_type:
return None
if self.c_filesize % 4:
self.fh.read(4 - self.c_filesize % 4)
def last(self):
return self.name == b'TRAILER!!!'
def __str__(self):
return "[%s %d]" % (self.name, self.c_filesize)
def read(self):
return self.fh.read(self.c_filesize)
while True:
with CpioFile(self.fh) as f:
if f.last():
break
with open(os.path.join(outdir if outdir else '.', os.path.basename(f.name.decode())), 'wb') as ofh:
ofh.write(f.read())
class CpioWriter:
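    """Minimal writer for cpio archives in the "newc" (magic 070701) format."""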
def __init__(self, fn):
self.cpio = open(fn, 'wb')
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, traceback):
if exc_type:
return None
self.add('TRAILER!!!', b'')
return self
def add(self, name, content, perm=0o644):
if isinstance(name, str):
name = name.encode()
if isinstance(content, str):
content = content.encode()
name += b'\0'
mode = perm | 0x8000 # regular file
size = len(content)
header = b'070701%08x%08x%08x%08x%08x%08x%08x%08x%08x%08x%08x%08x%08x%s' % (
0, mode, 0, 0, 1, 0, size, 0, 0, 0, 0, len(name), 0, name)
self.cpio.write(header)
        if len(header) % 4:
self.cpio.write(b'\0' * (4 - len(header) % 4))
self.cpio.write(content)
if size % 4:
self.cpio.write(b'\0' * (4 - size % 4))
def addstream(self, name, fh):
if isinstance(name, str):
name = name.encode()
name += b'\0'
info = os.stat(fh.fileno())
size = info[stat.ST_SIZE]
header = b'070701%08x%08x%08x%08x%08x%08x%08x%08x%08x%08x%08x%08x%08x%s' % (
0, # inode
info[stat.ST_MODE],
info[stat.ST_UID],
info[stat.ST_GID],
1, # nlink
info[stat.ST_MTIME],
size,
0, # major
0, # minor
0, # rmajor
0, # rminor
len(name),
0, # checksum
name
)
self.cpio.write(header)
if len(header) % 4:
self.cpio.write(b'\0' * (4 - len(header) % 4))
self.cpio.write(fh.read())
if size % 4:
self.cpio.write(b'\0' * (4 - size % 4))
def addfile(self, name):
with open(name, 'rb') as fh:
self.addstream(name, fh)
def is_supported_fetch_url(from_entry):
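    """Return the parsed URL of a fetchable location (https tarball or
    git+* repository), or False if the entry cannot be downloaded."""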
if from_entry[0] == '@':
from_entry = from_entry[1:]
end_name_pos = from_entry.find('@')
schema_pos = from_entry.find('//')
if schema_pos > end_name_pos:
from_entry = from_entry[end_name_pos+1:]
o = urllib.parse.urlparse(from_entry)
if o.scheme in ("git+http", "git+https", "git+ssh", "https"):
return o
return False
def add_git_dependency(o, module, install_path):
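    """Record a git-hosted dependency in MODULE_MAP for later archiving."""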
_, scheme = o.scheme.split("+")
branch = "master"
# XXX: not sure that is correct
if o.fragment:
branch = o.fragment
p = os.path.basename(o.path)
if p.endswith(".git"):
p = p[:-4]
if OBS_SCM_COMPRESSION:
fn = "{}-{}.tar.{}".format(p, branch, OBS_SCM_COMPRESSION)
else:
fn = "{}-{}.tgz".format(p, branch)
MODULE_MAP[fn] = {
"scm": "git",
"branch": branch,
"basename": p,
"url": urllib.parse.urlunparse(
(scheme, o.netloc, o.path, o.params, o.query, None)
),
}
MODULE_MAP[fn].setdefault("path", set()).add(install_path)
return True
def add_standard_dependency(url, integrity, module, install_path):
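    """Record a registry tarball in MODULE_MAP, keyed by its file name."""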
    # integrity looks like "<algo>-<base64>"; split only on the first dash
    algo, chksum = integrity.split("-", 1)
chksum = hexlify(b64decode(chksum)).decode("ascii")
fn = os.path.basename(url)
    # it looks like some modules come from a scoped namespace and
    # may use the same file name, so prefix the file name with
    # that namespace
if "/" in module:
fn = module.split("/")[0] + "-" + fn
if fn in MODULE_MAP:
if (
MODULE_MAP[fn]["url"] != url
or MODULE_MAP[fn]["algo"] != algo
or MODULE_MAP[fn]["chksum"] != chksum
):
logging.error(
"%s: mismatch %s <> %s, %s:%s <> %s:%s",
module,
MODULE_MAP[fn]["url"],
url,
MODULE_MAP[fn]["algo"],
MODULE_MAP[fn]["chksum"],
algo,
chksum,
)
else:
MODULE_MAP[fn] = {"url": url, "algo": algo, "chksum": chksum}
MODULE_MAP[fn].setdefault("path", set()).add(install_path)
def fetch_non_resolved_dependency_location(entry, module, install_path):
    # the format of the "from" field is described by the `npm-package-arg` NPM package
    labels = ["from", "version"]
    o = False
    for label in labels:
        if label not in entry:
            continue
        o = is_supported_fetch_url(entry[label])
        if o:
            break
    if not o:
        # unsupported location or nothing to download?
if "from" in entry:
logging.warning(
"entry %s is from unsupported location %s",
module,
entry["from"],
)
return False
logging.warning("entry %s has no download", module)
return False
if o.scheme == "https":
integrity = entry["integrity"]
return add_standard_dependency(urllib.parse.urlunparse(o), integrity, module, install_path)
elif o.scheme.startswith("git+"):
return add_git_dependency(o, module, install_path)
return False
def collect_deps_recursive(d, deps):
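    """Recursively register every dependency entry in MODULE_MAP together
    with its install path below node_modules."""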
for module in sorted(deps):
path = "/".join(("node_modules", module))
if d:
path = "/".join((d, path))
entry = deps[module]
if "resolved" not in entry:
fetch_non_resolved_dependency_location(entry, module, path)
else:
url = entry["resolved"]
if "integrity" not in entry:
logging.warning("No integrity field for %s. Try to regenerate package-lock.json.", url)
integrity = 'NONE-'
else:
integrity = entry["integrity"]
add_standard_dependency(url, integrity, module, path)
if "dependencies" in entry:
collect_deps_recursive(path, entry["dependencies"])
def write_rpm_sources(fh, args):
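    """Write one RPM Source line per collected module, numbered from
    args.source_offset when an offset is given."""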
i = args.source_offset if args.source_offset is not None else ''
for fn in sorted(MODULE_MAP):
fh.write("Source{}: {}#/{}\n".format(i, MODULE_MAP[fn]["url"], fn))
if args.source_offset is not None:
i += 1
def process_packagelock_file(js):
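    """Validate the lock file version and collect all its dependencies."""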
if not "lockfileVersion" in js or js["lockfileVersion"] != 2:
raise Exception("Only package-lock.json with lockfileVersion=2 are supported")
if "dependencies" in js:
collect_deps_recursive("", js["dependencies"])
def main(args):
# special settings when run as obs service
if args.outdir:
if not args.spec and not args.output:
specfiles = glob.glob('*.spec')
if specfiles:
if len(specfiles) > 1:
raise Exception("more than one spec file found. Choose one")
args.spec = specfiles[0]
else:
raise Exception("This service needs a spec file to operate with")
if not args.checksums:
args.checksums = 'node_modules.sums'
args.download = True
def _out(fn):
return os.path.join(args.outdir, fn) if args.outdir else fn
def update_checksum(fn):
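        """Recompute the checksum of a file on disk (defaults to sha256)."""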
with open(_out(fn), 'rb') as fh:
h = hashlib.new(MODULE_MAP[fn].setdefault("algo", 'sha256'), fh.read())
MODULE_MAP[fn]["chksum"] = h.hexdigest()
    pattern = f"*{args.input}"
    input_file = next(reversed(sorted(Path.cwd().glob(pattern))), None)
    if input_file is None:
        raise Exception("no input file matching %s found" % pattern)
    with open(input_file) as fh:
js = json.load(fh)
if "name" in js:
process_packagelock_file(js)
else:
for i in js.keys():
process_packagelock_file(js[i])
if args.output:
with open(_out(args.output), "w") as fh:
write_rpm_sources(fh, args)
if args.spec:
ok = False
newfn = _out(args.spec)
if not args.outdir:
newfn += '.new'
with open(newfn, "w") as ofh:
with open(args.spec, "r") as ifh:
for line in ifh:
if line.startswith('# NODE_MODULES BEGIN'):
ofh.write(line)
for line in ifh:
if line.startswith('# NODE_MODULES END'):
write_rpm_sources(ofh, args)
ok = True
break
ofh.write(line)
if not ok:
raise Exception("# NODE_MODULES [BEGIN|END] not found")
if not args.outdir:
os.rename(args.spec+".new", args.spec)
if args.download:
if args.cpio and os.path.exists(args.cpio) and not args.download_always:
CpioReader(args.cpio).extract(args.outdir)
for fn in sorted(MODULE_MAP):
if args.file and fn not in args.file:
continue
url = MODULE_MAP[fn]["url"]
if "scm" in MODULE_MAP[fn]:
if os.path.exists(_out(fn)) and MODULE_MAP[fn]["branch"] != "master" and not args.download_always:
logging.info("skipping update of existing %s", _out(fn))
continue
d = MODULE_MAP[fn]["basename"]
# TODO: use same cache as tar_scm
if os.path.exists(d):
r = subprocess.run(["git", "remote", "update"], cwd=d)
if r.returncode:
logging.error("failed to clone %s", url)
continue
else:
r = subprocess.run(["git", "clone", "--bare", url, d])
if r.returncode:
logging.error("failed to clone %s", url)
continue
r = subprocess.run(
[
"git",
"archive",
"--format=tar." + (OBS_SCM_COMPRESSION if OBS_SCM_COMPRESSION else 'gz'),
"-o",
_out(fn),
"--prefix",
"package/",
MODULE_MAP[fn]["branch"],
],
cwd=d,
)
                if r.returncode:
                    logging.error("failed to create tar %s", url)
                    continue
                if not args.outdir:
                    os.rename(os.path.join(d, fn), fn)
else:
req = urllib.request.Request(url)
if os.path.exists(_out(fn)):
if not args.download_always:
logging.info("skipping download of existing %s", fn)
continue
stamp = time.strftime(
"%a, %d %b %Y %H:%M:%S GMT", time.gmtime(os.path.getmtime(_out(fn)))
)
logging.debug("adding If-Modified-Since %s: %s", fn, stamp)
req.add_header("If-Modified-Since", stamp)
logging.info("fetching %s as %s", url, fn)
algo = MODULE_MAP[fn]["algo"]
chksum = MODULE_MAP[fn]["chksum"]
h = hashlib.new(algo)
response = urllib.request.urlopen(req)
try:
data = response.read()
h.update(data)
if h.hexdigest() != chksum:
logging.error(
"checksum failure for %s %s %s %s",
fn,
algo,
                            h.hexdigest(),
chksum,
)
else:
                        try:
                            with open(_out(fn) + ".new", "wb") as fh:
                                fh.write(data)
                        except OSError as e:
                            logging.error(e)
                        else:
                            os.rename(_out(fn) + ".new", _out(fn))
except urllib.error.HTTPError as e:
logging.error(e)
if args.checksums:
with open(_out(args.checksums), "w") as fh:
for fn in sorted(MODULE_MAP):
if 'algo' not in MODULE_MAP[fn]:
update_checksum(fn)
fh.write(
"{} ({}) = {}\n".format(
MODULE_MAP[fn]["algo"].upper(), fn, MODULE_MAP[fn]["chksum"]
)
)
if args.cpio:
with CpioWriter(_out(args.cpio) + ".new") as c:
for fn in sorted(MODULE_MAP):
with open(_out(fn), 'rb') as fh:
c.addstream(os.path.basename(fn), fh)
os.unlink(_out(fn))
os.rename(_out(args.cpio) + ".new", _out(args.cpio))
if args.obs_service:
parser = ET.XMLParser(remove_blank_text=True)
tree = ET.parse(args.obs_service, parser)
root = tree.getroot()
# to make sure pretty printing works
for element in root.iter():
element.tail = None
if not args.obs_service_scm_only:
# FIXME: remove only entries we added?
for node in root.findall("service[@name='download_url']"):
root.remove(node)
tar_scm_toremove = set()
for fn in sorted(MODULE_MAP):
if "scm" in MODULE_MAP[fn]:
tar_scm_toremove.add(MODULE_MAP[fn]['url'])
for u in tar_scm_toremove:
for node in root.findall("service[@name='obs_scm']"):
if node.find("param[@name='url']").text == u:
root.remove(node)
for fn in sorted(MODULE_MAP):
if args.file and fn not in args.file:
continue
url = MODULE_MAP[fn]["url"]
if "scm" in MODULE_MAP[fn]:
s = ET.SubElement(root, 'service', {'name': 'obs_scm'})
ET.SubElement(s, 'param', {'name': 'scm'}).text = "git"
ET.SubElement(s, 'param', {'name': 'url'}).text = MODULE_MAP[fn]["url"]
ET.SubElement(s, 'param', {'name': 'revision'}).text = MODULE_MAP[fn]["branch"]
ET.SubElement(s, 'param', {'name': 'version'}).text = MODULE_MAP[fn]["branch"]
elif not args.obs_service_scm_only:
s = ET.SubElement(root, 'service', {'name': 'download_url'})
ET.SubElement(s, 'param', {'name': 'url'}).text = MODULE_MAP[fn]["url"]
ET.SubElement(s, 'param', {'name': 'prefer-old'}).text = 'enable'
tree.write(args.obs_service, pretty_print=True)
return 0
if __name__ == "__main__":
parser = argparse.ArgumentParser(
description="Maintain spec file for node modules"
)
parser.add_argument("--dry", action="store_true", help="dry run")
parser.add_argument("--debug", action="store_true", help="debug output")
parser.add_argument("--verbose", action="store_true", help="verbose")
parser.add_argument(
"-i",
"--input",
metavar="FILE",
default="package-lock.json",
help="input package lock file",
)
parser.add_argument(
"-f", "--file", nargs="+", metavar="FILE", help="limit to file"
)
parser.add_argument(
"-o", "--output", metavar="FILE", help="spec files source lines into that file"
)
parser.add_argument(
"--spec", metavar="FILE", help="spec file to process"
)
parser.add_argument(
"--source-offset", metavar="N", type=int, help="Spec file source offset"
)
parser.add_argument(
"--checksums", metavar="FILE", help="Write BSD style checksum file"
)
parser.add_argument(
"--obs-service", metavar="FILE", help="OBS service file for download_url"
)
parser.add_argument(
"--outdir", metavar="DIR", help="where to put files"
)
parser.add_argument(
"--cpio", metavar="ARCHIVE", help="cpio archive to use instead of individual files"
)
parser.add_argument(
"--compression", metavar="EXT", help="use EXT compression"
)
parser.add_argument(
"--obs-service-scm-only",
action="store_true",
help="only generate tar_scm entries in service file",
)
parser.add_argument("--download", action="store_true", help="download files")
parser.add_argument(
"--download-always",
action="store_true",
help="download existing files again",
)
args = parser.parse_args()
if args.debug:
level = logging.DEBUG
elif args.verbose:
level = logging.INFO
else:
level = logging.WARNING
logging.basicConfig(format='%(levelname)s:%(message)s', level=level)
    if args.outdir and not args.outdir.startswith('/'):
raise Exception("outdir must be absolute")
if args.compression:
OBS_SCM_COMPRESSION = args.compression
elif args.obs_service:
OBS_SCM_COMPRESSION = 'xz'
sys.exit(main(args))
# vim: sw=4 et
=== obs-service-node_modules/node_modules.service ===

<service name="node_modules">
<summary>download node modules</summary>
<description>download node modules</description>
<parameter name="cpio">
<description>cpio file name to store all tarballs in</description>
</parameter>
<parameter name="output">
<description>write rpm source lines to that file</description>
</parameter>
<parameter name="source-offset">
<description>rpm source number to start with</description>
</parameter>
</service>