From: Aaron Conole <aconole@redhat.com>
To: Adam Hassick <ahassick@iol.unh.edu>
Cc: ci@dpdk.org, alialnu@nvidia.com, Owen Hilyard <ohilyard@iol.unh.edu>
Subject: Re: [PATCH v8 3/6] containers/builder: Dockerfile creation script
Date: Fri, 04 Aug 2023 09:59:46 -0400
Message-ID: <f7tttte6h7x.fsf@redhat.com>
In-Reply-To: <20230717210815.29737-4-ahassick@iol.unh.edu> (Adam Hassick's message of "Mon, 17 Jul 2023 17:08:12 -0400")
Adam Hassick <ahassick@iol.unh.edu> writes:
> From: Owen Hilyard <ohilyard@iol.unh.edu>
>
> This script will template out all of the Dockerfiles based on the
> definitions provided in the inventory using the jinja2 templating
> library.
>
> Signed-off-by: Owen Hilyard <ohilyard@iol.unh.edu>
> Signed-off-by: Adam Hassick <ahassick@iol.unh.edu>
> ---
Please run this through black. It has some formatting errors.
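Something like the following (assuming black is installed, e.g. via
"pip install black") should do it:

  $ python3 -m black containers/template_engine/make_dockerfile.py

and running "python3 -m black --check containers/" in CI would catch
any future drift.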
> containers/template_engine/make_dockerfile.py | 358 ++++++++++++++++++
> 1 file changed, 358 insertions(+)
> create mode 100755 containers/template_engine/make_dockerfile.py
>
> diff --git a/containers/template_engine/make_dockerfile.py b/containers/template_engine/make_dockerfile.py
> new file mode 100755
> index 0000000..60269a0
> --- /dev/null
> +++ b/containers/template_engine/make_dockerfile.py
> @@ -0,0 +1,358 @@
> +#!/usr/bin/env python3
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright (c) 2022 University of New Hampshire
> +import argparse
> +import json
> +import logging
> +import os
> +import re
> +from dataclasses import dataclass
> +from datetime import datetime
> +import platform
> +from typing import Any, Dict, List, Optional
> +
> +import jsonschema
> +import yaml
> +from jinja2 import Environment, FileSystemLoader, select_autoescape
> +
> +
> +@dataclass(frozen=True)
> +class Options:
> +    on_rhel: bool
> +    fail_on_unbuildable: bool
> +    has_coverity: bool
> +    build_libabigail: bool
> +    build_abi: bool
> +    output_dir: str
> +    registry_hostname: str
> +    host_arch_only: bool
> +    omit_latest: bool
> +    is_builder: bool
> +    date_override: Optional[str]
> +    ninja_workers: Optional[int]
> +
> +
> +def _get_arg_parser() -> argparse.ArgumentParser:
> +    parser = argparse.ArgumentParser(description="Makes the dockerfile")
> +    parser.add_argument("--output-dir", required=True)
> +    parser.add_argument(
> +        "--rhel",
> +        action="store_true",
> +        help="Override the check for running on RHEL",
> +        default=False,
> +    )
> +    parser.add_argument(
> +        "--fail-on-unbuildable",
> +        action="store_true",
> +        help="If any container would not be possible to build, fail and exit with a non-zero exit code.",
> +        default=False,
> +    )
> +    parser.add_argument(
> +        "--build-abi",
> +        action="store_true",
> +        help="Whether to build the ABI references into the image. Disabled by \
> +            default due to producing 10+ GB images. \
> +            Implies '--build-libabigail'.",
> +    )
> +    parser.add_argument(
> +        "--build-libabigail",
> +        action="store_true",
> +        help="Whether to build libabigail from source for distros that do not \
> +            package it. Implied by '--build-abi'",
> +    )
> +    parser.add_argument(
> +        "--host-arch-only",
> +        action="store_true",
> +        help="Only build containers for the architecture of the host system",
> +    )
> +    parser.add_argument(
> +        "--omit-latest",
> +        action="store_true",
> +        help="Omit the \"latest\" tag from the generated makefile."
> +    )
> +    parser.add_argument(
> +        "--builder-mode",
> +        action="store_true",
> +        help="Specifies that the makefile is being templated for a builder. \
> +            This implicitly sets \"--host-arch-only\" to true and disables making the manifests.",
> +        default=False
> +    )
> +    parser.add_argument(
> +        "--date",
> +        type=str,
> +        help="Overrides generation of the timestamp and uses the provided string instead."
> +    )
> +    parser.add_argument(
> +        "--ninja-workers",
> +        type=int,
> +        help="Specifies a number of ninja workers to limit builds to. Uses the ninja default when not given."
> +    )
> +    parser.add_argument(
> +        "--coverity",
> +        action="store_true",
> +        help="Whether the Coverity Scan binaries are available for building the Coverity containers.",
> +        default=False
> +    )
> +    return parser
> +
> +
> +def parse_args() -> Options:
> +    parser = _get_arg_parser()
> +    args = parser.parse_args()
> +
> +    registry_hostname = (
> +        os.environ.get("DPDK_CI_CONTAINERS_REGISTRY_HOSTNAME") or "localhost"
> +    )
> +
> +    # In order to build the ABIs, libabigail must be built from source on
> +    # some platforms
> +    build_libabigail: bool = args.build_libabigail or args.build_abi
> +
> +    opts = Options(
> +        on_rhel=args.rhel,
> +        fail_on_unbuildable=args.fail_on_unbuildable,
> +        build_libabigail=build_libabigail,
> +        build_abi=args.build_abi,
> +        output_dir=args.output_dir,
> +        registry_hostname=registry_hostname,
> +        host_arch_only=args.host_arch_only or args.builder_mode,
> +        omit_latest=args.omit_latest,
> +        is_builder=args.builder_mode,
> +        date_override=args.date,
> +        ninja_workers=args.ninja_workers,
> +        has_coverity=args.coverity
> +    )
> +
> +    logging.info(f"make_dockerfile.py options: {opts}")
> +    return opts
> +
> +
> +def running_on_RHEL(options: Options) -> bool:
> + """
> + RHEL containers can only be built on RHEL, so disable them and emit a
> + warning if not on RHEL.
> + """
> + redhat_release_path = "/etc/redhat-release"
> +
> + if os.path.exists(redhat_release_path):
> + with open(redhat_release_path) as f:
> + first_line = f.readline()
> + on_rhel = "Red Hat Enterprise Linux" in first_line
> + if on_rhel:
> + logging.info("Running on RHEL, allowing RHEL containers")
> + return True
> +
> + logging.warning("Not on RHEL, disabling RHEL containers")
> + assert options is not None, "Internal state error, OPTIONS should not be None"
> +
> + if options.on_rhel:
> + logging.info("Override enabled, enabling RHEL containers")
> +
> + return options.on_rhel
> +
> +
> +def get_path_to_parent_directory() -> str:
> +    return os.path.dirname(__file__)
> +
> +
> +def get_raw_inventory():
> +    parent_dir = get_path_to_parent_directory()
> +
> +    schema_path = os.path.join(parent_dir, "inventory_schema.json")
> +    inventory_path = os.path.join(parent_dir, "inventory.yaml")
> +
> +    inventory: Dict[str, Any]
> +    with open(inventory_path, "r") as f:
> +        inventory = yaml.safe_load(f)
> +
> +    schema: Dict[str, Any]
> +    with open(schema_path, "r") as f:
> +        schema = json.load(f)
> +
> +    jsonschema.validate(instance=inventory, schema=schema)
> +    return inventory
> +
> +
> +def apply_group_config_to_target(
> +    target: Dict[str, Any],
> +    raw_inventory: Dict[str, Any],
> +    on_rhel: bool,
> +    fail_on_unbuildable: bool,
> +) -> Optional[Dict[str, Any]]:
> +    groups_for_target: List[Dict[str, Any]] = []
> +    groups: Dict[str, Dict[str, Any]] = raw_inventory["dockerfiles"]["groups"]
> +    group = groups[target["group"]]
> +
> +    target_primary_group = target["group"]
> +
> +    assert isinstance(target_primary_group, str), "Target group name was not a string"
> +
> +    requires_rhel = "rhel" in target_primary_group.lower()
> +
> +    if requires_rhel and not on_rhel:
> +        logging.warning(
> +            f"Disabling target {target['name']}, because it must be built on RHEL."
> +        )
> +        if fail_on_unbuildable:
> +            raise AssertionError(
> +                f"Not on RHEL and target {target['name']} must be built on RHEL"
> +            )
> +
> +        return None
> +
> +    while group["parent"] != "NONE":
> +        groups_for_target.append(group)
> +        group = groups[group["parent"]]
> +
> +    groups_for_target.append(group)  # add the "all" group
> +    groups_for_target.reverse()  # reverse it so overrides work
> +
> +    target_packages: List[str] = target.get("packages") or []
> +
> +    for group in groups_for_target:
> +        target_packages = [*target_packages, *(group.get("packages") or [])]
> +        target = dict(target, **group)
> +
> +    target["packages"] = target_packages
> +
> +    return target
> +
> +def apply_defaults_to_target(target: Dict[str, Any]) -> Dict[str, Any]:
> +    def default_if_unset(target: Dict[str, Any], key: str, value: Any) -> Dict[str, Any]:
> +        if key not in target:
> +            target[key] = value
> +
> +        return target
> +
> +    target = default_if_unset(target, "requires_coverity", False)
> +    target = default_if_unset(target, "force_disable_abi", False)
> +    target = default_if_unset(target, "minimum_dpdk_version", dict(major=0, minor=0, revision=0))
> +    target = default_if_unset(target, "extra_information", {})
> +
> +    return target
> +
> +def get_host_arch() -> str:
> +    machine: str = platform.machine()
> +    match machine:
> +        case "aarch64" | "armv8b" | "armv8l":
> +            return "linux/arm64"
> +        case "ppc64le":
> +            return "linux/ppc64le"
> +        case "x86_64" | "x64" | "amd64":
> +            return "linux/amd64"
> +        case arch:
> +            raise ValueError(f"Unknown arch {arch}")
> +
> +def process_target(
> +    target: Dict[str, Any],
> +    raw_inventory: Dict[str, Any],
> +    has_coverity: bool,
> +    on_rhel: bool,
> +    fail_on_unbuildable: bool,
> +    host_arch_only: bool,
> +    build_timestamp: str
> +) -> Optional[Dict[str, Any]]:
> +    target = apply_defaults_to_target(target)
> +    # Copy the platforms, for building the manifest list.
> +
> +    # Write the build timestamp.
> +    target["extra_information"].update({
> +        "build_timestamp": build_timestamp
> +    })
> +
> +    if (not has_coverity) and target["requires_coverity"]:
> +        print(f"Disabling {target['name']}. Target requires Coverity, and it is not enabled.")
> +        return None
> +
> +    if host_arch_only:
> +        host_arch = get_host_arch()
> +        if host_arch in target["platforms"]:
> +            target["platforms"] = [host_arch]
> +        else:
> +            return None
> +
> +    return apply_group_config_to_target(
> +        target, raw_inventory, on_rhel, fail_on_unbuildable
> +    )
> +
> +def get_processed_inventory(options: Options, build_timestamp: str) -> Dict[str, Any]:
> +    raw_inventory: Dict[str, Any] = get_raw_inventory()
> +    on_rhel = running_on_RHEL(options)
> +    targets = raw_inventory["dockerfiles"]["targets"]
> +    targets = [
> +        process_target(
> +            target, raw_inventory, options.has_coverity, on_rhel, options.fail_on_unbuildable, options.host_arch_only, build_timestamp
> +        )
> +        for target in targets
> +    ]
> +    # remove disabled targets
> +    targets = [target for target in targets if target is not None]
> +    raw_inventory["dockerfiles"]["targets"] = targets
> +
> +    return raw_inventory
> +
> +
> +def main():
> +    options: Options = parse_args()
> +
> +    env = Environment(
> +        loader=FileSystemLoader("templates"),
> +    )
> +
> +    build_timestamp = datetime.now().strftime("%Y-%m-%d-%H-%M-%S")
> +
> +    inventory = get_processed_inventory(options, build_timestamp)
> +
> +    if options.date_override:
> +        timestamp = options.date_override
> +    else:
> +        timestamp = datetime.now().strftime("%Y-%m-%d")
> +
> +    for target in inventory["dockerfiles"]["targets"]:
> +        template = env.get_template(f"containers/{target['group']}.dockerfile.j2")
> +        dockerfile_location = os.path.join(
> +            options.output_dir, target["name"] + ".dockerfile"
> +        )
> +
> +        tags: list[str] = target.get("extra_tags") or []
> +
> +        tags.insert(0, "$R/$N:$T")
> +        if not options.omit_latest:
> +            tags.insert(0, "$R/$N:latest")
> +        else:
> +            tags = list(filter(lambda x: re.match('^.*:latest$', x) is None, tags))
> +
> +        target["tags"] = tags
> +
> +        rendered_dockerfile = template.render(
> +            timestamp=timestamp,
> +            target=target,
> +            build_libabigail=options.build_libabigail,
> +            build_abi=options.build_abi,
> +            build_timestamp=build_timestamp,
> +            registry_hostname=options.registry_hostname,
> +            ninja_workers=options.ninja_workers,
> +            **inventory,
> +        )
> +        with open(dockerfile_location, "w") as output_file:
> +            output_file.write(rendered_dockerfile)
> +
> +    makefile_template = env.get_template(f"containers.makefile.j2")
> +    rendered_makefile = makefile_template.render(
> +        timestamp=timestamp,
> +        build_libabigail=options.build_libabigail,
> +        build_abi=options.build_abi,
> +        host_arch_only=options.host_arch_only,
> +        registry_hostname=options.registry_hostname,
> +        is_builder=options.is_builder,
> +        **inventory,
> +    )
> +    makefile_output_path = os.path.join(options.output_dir, "Makefile")
> +    with open(makefile_output_path, "w") as f:
> +        f.write(rendered_makefile)
> +
> +
> +if __name__ == "__main__":
> +    logging.basicConfig()
> +    logging.root.setLevel(0)  # log everything
> +    main()
Thread overview: 15+ messages
2023-07-17 21:08 [PATCH v8 0/6] Community Lab Containers and Builder Engine Adam Hassick
2023-07-17 21:08 ` [PATCH v8 1/6] containers/docs: Add container builder start Adam Hassick
2023-07-17 21:08 ` [PATCH v8 2/6] containers/inventory: Add inventory for container builder Adam Hassick
2023-08-04 13:58 ` Aaron Conole
2023-08-04 19:30 ` Adam Hassick
2023-07-17 21:08 ` [PATCH v8 3/6] containers/builder: Dockerfile creation script Adam Hassick
2023-08-04 13:59 ` Aaron Conole [this message]
2023-07-17 21:08 ` [PATCH v8 4/6] containers/templates: Templates for Dockerfiles Adam Hassick
2023-08-04 14:02 ` Aaron Conole
2023-08-04 19:34 ` Adam Hassick
2023-07-17 21:08 ` [PATCH v8 5/6] containers/container_builder: Container for python scripts Adam Hassick
2023-08-04 14:03 ` Aaron Conole
2023-07-17 21:08 ` [PATCH v8 6/6] containers/Makefile: Makefile to automate builds Adam Hassick
2023-08-04 14:06 ` Aaron Conole
2023-07-28 20:38 ` [PATCH v8 0/6] Community Lab Containers and Builder Engine Aaron Conole