diff --git a/tektonc/README.md b/tektonc/README.md new file mode 100644 index 00000000..ec639c7b --- /dev/null +++ b/tektonc/README.md @@ -0,0 +1,189 @@ +# tektonc — A Minimal Template Expander for Tekton Pipelines + +`tektonc` is a lightweight command-line tool that helps authors write **reusable Tekton pipeline templates** using +a small extension to standard Tekton YAML. + +It is designed for the [`llm-d-benchmark`](https://llm-d.ai) repository, where multiple model, workload, inference-scheduler, and other platform configuration variants need to be expressed cleanly without duplicating boilerplate. + +--- + +## ✨ Purpose + +Tekton already provides a powerful foundation for modular and reproducible orchestration: +- **Modularity** — reusable `Task` definitions and `Step`-level composition. +- **Precedence & dependencies** — control flow through `runAfter` relationships. +- **Parallelism** — automatic execution of independent tasks. +- **Failure tolerance** — built-in retries and error handling. +- **Cleanup & teardown** — handled elegantly using `finally` blocks. + +However, in complex `llm-d` benchmarking workflows, you often have a base pipeline structure that needs to repeat the same sequence of tasks for several **models**, **workload variants**, or **inference configurations**. + +Manually authoring these combinations quickly leads to large, repetitive, and error-prone YAML. + +`tektonc` solves this problem by introducing a **single, minimal construct** for compile-time expansion, enabling high-level loops and parameter sweeps while keeping everything 100% Tekton-compatible. + +```yaml +loopName: +foreach: + domain: + var1: [a, b, c] + var2: [x, y] +tasks: + - name: ... + runAfter: ... +``` + +Everything else remains **pure Tekton** — `tektonc` only handles structured expansion. + +--- + +## 🧩 Overview + +### Input +1. A Jinja-based Tekton pipeline template (`pipeline.yaml.j2`) +2. 
A simple YAML file of template values (`values.yaml`) + +### Output +A **flat, valid Tekton pipeline YAML** ready for `kubectl apply` or `tkn pipeline start`. + +### Example + +**Template (`pipeline.yaml.j2`):** + +```yaml +apiVersion: tekton.dev/v1 +kind: Pipeline +metadata: + name: {{ pipeline_name }} +spec: + params: + - name: message + type: string + tasks: + - name: print-start + taskRef: { name: echo } + params: + - name: text + value: "Starting pipeline {{ pipeline_name }}" + + - loopName: per-model + foreach: + domain: + modelRef: {{ models|tojson }} + tasks: + - name: "process-{{ modelRef|dns }}" + taskRef: { name: process-model } + runAfter: [ print-start ] + params: + - { name: model, value: "{{ modelRef }}" } +``` + +**Values (`values.yaml`):** + +```yaml +pipeline_name: demo-pipeline +models: ["llama-7b", "qwen-2.5-7b"] +``` + +Run: + +```bash +tektonc -t pipeline.yaml.j2 -f values.yaml -o build/pipeline.yaml +``` + +Result (`build/pipeline.yaml`). Note that only the task *name* is DNS-sanitized; the param *value* keeps the original string: + +```yaml +apiVersion: tekton.dev/v1 +kind: Pipeline +metadata: + name: demo-pipeline +spec: + params: + - name: message + type: string + tasks: + - name: print-start + taskRef: + name: echo + params: + - name: text + value: Starting pipeline demo-pipeline + - name: process-llama-7b + taskRef: + name: process-model + runAfter: + - print-start + params: + - name: model + value: llama-7b + - name: process-qwen-2-5-7b + taskRef: + name: process-model + runAfter: + - print-start + params: + - name: model + value: qwen-2.5-7b +``` + +--- + +## 🚀 Capabilities + +- **Single construct** — only `loopName + foreach + tasks` +- **Nested loops** — define inner/outer iterations naturally +- **Native Tekton** — all fields (`retries`, `when`, `workspaces`, etc.) 
pass through unchanged +- **Finally blocks** — support the same loop semantics for teardown/cleanup +- **Deterministic expansion** — Cartesian product enumeration of domains +- **Safe** — Jinja variables (`{{ }}`) resolved at compile-time; Tekton params (`$(params.xxx)`) left untouched + +--- + +## 🧠 When to Use It + +Use `tektonc` when you need to: +- generate a Tekton pipeline for benchmarking `llm-d` configurations, +- run configuration sweeps or inference experiments, +- keep YAML human-readable while supporting complex graph expansions. + +--- + +## 🛠️ Installation + +```bash +pip install -r requirements.txt +``` + +Then test it: + +```bash +python3 tektonc.py -t tektoncsample/quickstart/pipeline.yaml.j2 -f tektoncsample/quickstart/values.yaml --explain +``` + +--- + +## 📘 Command Reference + +``` +tektonc -t TEMPLATE -f VALUES [-o OUTPUT] [--explain] +``` + +| Flag | Description | +|------|--------------| +| `-t, --template` | Path to Jinja template file (`pipeline.yaml.j2`) | +| `-f, --values` | Path to YAML/JSON file containing template variables | +| `-o, --out` | Output file (default: stdout) | +| `--explain` | Print an easy-to-read table of task names and dependencies | + +--- + +## 🤝 Contributing + +- Keep new features minimal and Tekton-native. +- Avoid adding new syntax unless absolutely necessary. +- Open PRs against the `llm-d-benchmark` repo with clear examples under `tektoncsample/`. + +--- + +**In short:** +`tektonc` makes Tekton authoring for `llm-d-benchmark` scalable — without inventing a new DSL. +It keeps templates clean, YAML valid, and expansion predictable. 
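--- + +## 🔬 Appendix: Expansion Semantics (sketch) + +The core of the expander is small enough to sketch. The snippet below is illustrative only: the function names (`bindings`, `expand`, `render`) are not the real `tektonc.py` API, and the full Jinja pass is replaced by a toy `{{ var }}` substitution. It shows the two rules that matter: domain keys are enumerated as a sorted Cartesian product, and loop nodes flatten recursively into plain task lists.

```python
import itertools
import re

def bindings(domain):
    # Deterministic cartesian product: sort variable names, preserve value order.
    keys = sorted(domain)
    for combo in itertools.product(*(domain[k] for k in keys)):
        yield dict(zip(keys, combo))

def render(s, scope):
    # Toy stand-in for the Jinja pass: substitute simple {{ var }} references only.
    return re.sub(r"\{\{\s*(\w+)\s*\}\}",
                  lambda m: str(scope.get(m.group(1), m.group(0))), s)

def expand(nodes, scope=None):
    scope, flat = dict(scope or {}), []
    for node in nodes:
        if {"loopName", "foreach", "tasks"} <= node.keys():   # loop node
            for b in bindings(node["foreach"]["domain"]):
                flat.extend(expand(node["tasks"], {**scope, **b}))
        else:                                                 # plain task: render string values
            flat.append({k: render(v, scope) if isinstance(v, str) else v
                         for k, v in node.items()})
    return flat

tasks = [
    {"name": "prep"},
    {"loopName": "per-model",
     "foreach": {"domain": {"m": ["llama-7b", "qwen"]}},
     "tasks": [{"name": "process-{{ m }}", "runAfter": ["prep"]}]},
]
print([t["name"] for t in expand(tasks)])
# → ['prep', 'process-llama-7b', 'process-qwen']
```

The real implementation additionally deep-copies nodes, renders every nested scalar with Jinja (`StrictUndefined`), and applies the same expansion to `finally`.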
diff --git a/tektonc/requirements.txt b/tektonc/requirements.txt new file mode 100644 index 00000000..6f9f03f3 --- /dev/null +++ b/tektonc/requirements.txt @@ -0,0 +1,5 @@ +# tektonc — minimal Tekton pipeline template compiler +# (compatible with Python 3.9+) + +jinja2>=3.1 +PyYAML>=6.0 diff --git a/tektonc/tasks.md b/tektonc/tasks.md new file mode 100644 index 00000000..db6d1dab --- /dev/null +++ b/tektonc/tasks.md @@ -0,0 +1,230 @@ +Based on experiments with Tekton, some basic composable tasks might include the following. + +## Tooling + +Tasks to install tooling and to configure the environment. This might include installing and configuring the cluster; for example, a gateway provider (istio, kgateway, gke), LWS, Tekton, etc. +It might also include installing runtime tooling such as llmdbench, helm, yq, git, kubectl, oc, etc. + +## Stack Creation + +Tasks to create elements of the model stack -- gateway, GAIE, and model servers. + +To deploy each stack, a unique DNS-compatible identifier (`model_label`) is required. It serves two purposes: + +(a) For each model service, a GAIE deployment is created. The `InferencePool` identifies the pods of the model service using a set of match labels. Typically, the `llm-d.ai/model` label is used for this. Its value must be unique across all model services in the namespace. The `model_label` can be used for this. + +(b) At the level of the Gateway, there must be a means to distinguish requests for one model service from another. For most workload generators, the simplest mechanism is to insert a model-specific prefix into the request path. This prefix must be unique to the instance of the deployed model. Again, the `model_label` can be used for this (in an `HTTPRoute`). + +### Task: `deploy_gateway` + +**Description:** + +Installs a gateway pod into a namespace. + +Notes: A gateway pod can be used for multiple namespaces. This requires additional configuration and is ignored for now. 
It is assumed that if a model is deployed + +**Inputs**: + +- *namespace* +- *release_name* - Helm release name +- *helm_chart_values* - (default: ?) +- *helm_chart* - (default: `llm-d-infra/llm-d-infra`) +- *helm_chart_version* - (default: `none` (latest)) +- *helm_chart_repository_url* - (default: `https://llm-d-incubation.github.io/llm-d-infra/`) + +**Outputs**: + +- _name_ - name of the gateway created +- _serviceUrl_ - endpoint (incl. port) to be used by requests + +### Task: `deploy_gaie` + +**Description**: + +Installs Kubernetes Gateway API Inference Extension objects: an endpoint picker and an inference pool. + +**Inputs**: + +- *namespace* +- *model_label* - used to configure `InferencePool` match labels. +- *release_name* - Helm release name; unique to the stack +- *helm_chart_values* - [samples](https://github.com/llm-d/llm-d/tree/main/guides/prereq/gateway-provider/common-configurations) +- *helm_chart* - (default: `oci://registry.k8s.io/gateway-api-inference-extension/charts/inferencepool`) +- *helm_chart_version* (default: `v1.0.1`) +- *helm_chart_repository_url* (default: `none`) +- *helm_overrides* - list of fields to set? values file to apply? *model_label* used here? + +**Outputs**: + +### Task: `deploy_model` + +**Description**: + +Installs vLLM engines. + +**Inputs**: + +- *namespace* +- *model_label* - used to configure labels. +- *release_name* - Helm release name; unique to the stack +- *helm_chart_values* - [samples](https://github.com/llm-d/llm-d/tree/main/guides/prereq/gateway-provider/common-configurations) +- *helm_chart* - (default: `llm-d-modelservice/llm-d-modelservice`) +- *helm_chart_version* (default: `none` (latest)) +- *helm_chart_repository_url* (default: `https://llm-d-incubation.github.io/llm-d-modelservice/`) +- *helm_overrides* - list of fields to set? values file to apply? 
+ +**Outputs**: + +### Task: `create_httproute` + +**Description:** + +Create an `HTTPRoute` object to match requests to a Gateway to the GAIE `InferencePool` (and hence to the model service Pods). One `HTTPRoute` can be created per stack. Alternatively, a single `HTTPRoute` can configure multiple mappings (currently required for Istio). + +**Inputs**: + +- *namespace* +- *manifest* - Requires the Gateway *name*, the InferencePool *name*, and a *model_label* + +**Outputs**: + +### Task: `download_model` + +**Description:** + +Downloads a model from HF to a locally mounted disk. + +**Inputs**: + +- *model* +- *HF_TOKEN* +- *path* - location to which the model should be downloaded + +**Outputs**: + +- *endpoint* - url for sending requests from within the cluster + +## Run Workloads + +### Task: `create_workload_profile` + +**Description**: + +Modify a workload profile template for a particular execution. The profile format is specific to the workload generator (harness) type. Should this be part of **run_workload**? + +**Inputs**: + +- **harness_type** +- **workload_profile_template** - workload profile (yaml) or location of the profile template +- **changes** - name/path/value information to modify the template. In addition to the workload parameters, this includes: + + - **stack_endpoint** - endpoint to be used to send requests + - **model** - HF model name + +**Outputs**: + +- **workload_profile** - yaml string or url to location + +### Task: `run_workload_inference-perf` + +**Description**: + +Generate workload using _inference perf_. On completion, results are saved to a locally mounted filesystem. + +**Inputs**: + +- **workload_profile** - workload profile (yaml) +- **path** - path to where results should be saved + +**Outputs**: + +### Task: `transform_results_inference-perf` + +**Description**: + +Convert results from execution of _inference perf_ to a universal format. 
+ +**Inputs**: + +- **source_path** - path to where results are saved +- **target_path** - location where converted results should be saved + +**Outputs**: + +### Task: `run_workload_vllm_benchmark` + +**Description**: + +Generate workload using _vllm benchmark_. On completion, results are saved to a locally mounted filesystem. + +Details are as above for `run_workload_inference-perf`, with the addition of the input `HF_TOKEN`. + +### Task: `transform_results_vllm_benchmark` + +**Description**: + +Convert results from execution of _vllm benchmark_ to a universal format. Details are as above for `transform_results_inference-perf`. + +### Task: `run_workload_guidellm` + +**Description**: + +Generate workload using _guidellm_. On completion, results are saved to a locally mounted filesystem. + +Details are as above for `run_workload_inference-perf`. + +### Task: `transform_results_guidellm` + +**Description**: + +Convert results from execution of _guidellm_ to a universal format. Details are as above for `transform_results_inference-perf`. + +### Task: `run_workload_fmperf` + +**Description**: + +Generate workload using _fmperf_. On completion, results are saved to a locally mounted filesystem. + +Details are as above for `run_workload_inference-perf`. + +### Task: `transform_results_fmperf` + +**Description**: + +Convert results from execution of _fmperf_ to a universal format. Details are as above for `transform_results_inference-perf`. + +## Document + +### Task: `record` + +**Description**: + +Record the configuration of one stack and one or more workload executions. + +**Inputs**: + +- All inputs from `deploy_gaie`, `deploy_model`, `create_httproute`, and the `run_workload_*` tasks + +**Outputs**: + +- list of paths? + +### Task: `upload` + +**Description**: + +Copy results from a locally mounted filesystem to a remote location. Should there be one task per target type? 
+ +**Inputs**: + +- list of paths to upload +- target_details + + - this is specific to the target type; for example, for an S3-compatible bucket: + - *AWS_ACCESS_KEY_ID* + - *AWS_SECRET_ACCESS_KEY* + - *s3_endpoint* + - *s3_bucket* + - *target_object_name* + +**Outputs**: \ No newline at end of file diff --git a/tektonc/tektonc.py b/tektonc/tektonc.py new file mode 100644 index 00000000..b658381e --- /dev/null +++ b/tektonc/tektonc.py @@ -0,0 +1,325 @@ +#!/usr/bin/env python3 +""" +tektonc — minimal render+expand for Tekton templates with loop nodes. + +Authoring grammar (one construct only): + Loop node := { loopName: str, foreach: { domain: { var: [..], ... } }, tasks: [ <task-or-loop>, ... ] } + Task node := any Tekton task map (name, taskRef/taskSpec, params, runAfter, workspaces, retries, when, timeout, ...) + +Semantics: + - Expansion is cartesian over foreach.domain (keys sorted for determinism). + - Loops can nest; variables from outer loops are in scope for inner loops. + - Dependencies/parallelism are expressed purely via native Tekton 'runAfter'. + - 'finally' supports the same loop nodes as 'tasks'. + - No validation yet (name uniqueness, runAfter targets, DAG acyclicity); add later. 
+ +CLI: + tektonc -t pipeline.yaml.j2 -f values.yaml [-o build/pipeline.yaml] [--explain] +""" + +from __future__ import annotations + +import argparse +import copy +import itertools +import os +import sys +from typing import Any, Dict, Iterable, List, Mapping, MutableMapping + +import json, yaml +from jinja2 import Environment, StrictUndefined, TemplateError, Undefined +from jinja2.runtime import Undefined as RTUndefined + + + +# ────────────────────────────────────────────────────────────────────────────── +# Jinja helpers +# Two-pass render: +# - Outer env: preserves unknown loop vars (e.g., {{ modelRef|dns }} stays literal) +# - Inner env: strict; resolves loop vars during loop expansion +# ────────────────────────────────────────────────────────────────────────────── + + +def _dns_inner(s: str) -> str: + """DNS-1123-ish: lowercase, alnum and dash, trim to 63 chars with hash fallback.""" + import re, hashlib + s2 = re.sub(r'[^a-z0-9-]+', '-', str(s).lower()).strip('-') + if len(s2) <= 63: + return s2 + h = hashlib.sha1(s2.encode()).hexdigest()[:8] + return (s2[:63-1-8] + '-' + h).strip('-') + +def _slug_inner(s: str) -> str: + """Looser slug for params: keep letters/numbers/._-; replace others with '-'.""" + import re + return re.sub(r'[^A-Za-z0-9_.-]+', '-', str(s)) + +# Outer filters: if value is undefined, round-trip original expression +def _dns_outer(val: object) -> str: + if isinstance(val, RTUndefined): + name = getattr(val, "_undefined_name", None) or "" + return "{{ " + name + "|dns }}" + return _dns_inner(val) # type: ignore[arg-type] + +def _slug_outer(val: object) -> str: + if isinstance(val, RTUndefined): + name = getattr(val, "_undefined_name", None) or "" + return "{{ " + name + "|slug }}" + return _slug_inner(val) # type: ignore[arg-type] + +class PassthroughUndefined(Undefined): + """ + OUTER render: keep unknown variables as their original Jinja expression, + including dotted attributes and item access, so the INNER pass can resolve them. 
+ - {{ model }} -> "{{ model }}" + - {{ model.name }} -> "{{ model.name }}" + - {{ model['port'] }} -> "{{ model['port'] }}" + - {{ model.name|dns }} -> dns_outer will see an Undefined and reconstruct "{{ model.name|dns }}" + """ + __slots__ = () + + # Compose a new Undefined that remembers the full Jinja expression text. + def _compose(self, suffix: str) -> "PassthroughUndefined": + base = getattr(self, "_undefined_name", None) or "?" + expr = f"{base}{suffix}" + # Undefined signature: (hint=None, obj=None, name=None, exc=None) + return PassthroughUndefined(name=expr) + + # Attribute access: {{ x.y }} + def __getattr__(self, name: str) -> "PassthroughUndefined": # type: ignore[override] + return self._compose(f".{name}") + + # Item access: {{ x['k'] }} / {{ x[0] }} + def __getitem__(self, key) -> "PassthroughUndefined": # type: ignore[override] + # Use repr to round-trip quotes correctly + return self._compose(f"[{repr(key)}]") + + # Function call: {{ f(x) }} -> best-effort string form + def __call__(self, *args, **kwargs) -> "PassthroughUndefined": # type: ignore[override] + return self._compose("(...)") + + # Stringification -> the literal Jinja expression + def __str__(self) -> str: # type: ignore[override] + name = getattr(self, "_undefined_name", None) + return "{{ " + name + " }}" if name else "{{ ?? }}" + + def __iter__(self): + return iter(()) + + def __bool__(self): + return False + +def _enum(seq): + """Return [{i, item}, ...] 
for easy serial chains in Jinja.""" + return [{"i": i, "item": v} for i, v in enumerate(seq)] + +def build_env_outer() -> Environment: + env = Environment(undefined=PassthroughUndefined, autoescape=False, trim_blocks=True, lstrip_blocks=False) + env.filters.update({"dns": _dns_outer, "slug": _slug_outer, "tojson": json.dumps}) + env.globals.update({"enumerate_list": _enum}) + return env + +def build_env_inner() -> Environment: + env = Environment(undefined=StrictUndefined, autoescape=False, trim_blocks=True, lstrip_blocks=False) + env.filters.update({"dns": _dns_inner, "slug": _slug_inner, "tojson": json.dumps}) + env.globals.update({"enumerate_list": _enum}) + return env + + +# ────────────────────────────────────────────────────────────────────────────── +# Expander (no validation yet) +# ────────────────────────────────────────────────────────────────────────────── + +def expand_document(doc: MutableMapping[str, Any], + globals: Mapping[str, Any] | None = None, + jinja_env: Environment | None = None) -> Dict[str, Any]: + """ + Expand loops in a Pipeline document: + - Recursively expands spec.tasks (required) and spec.finally (optional) + - Returns a NEW dict; input is not mutated + """ + env = jinja_env or build_env_inner() + scope: Dict[str, Any] = dict(globals or {}) + + out: Dict[str, Any] = copy.deepcopy(doc) # type: ignore[assignment] + spec = out.get("spec") or {} + + spec["tasks"] = expand_list(spec.get("tasks", []), scope, env) + if "finally" in spec: + spec["finally"] = expand_list(spec.get("finally", []), scope, env) + + out["spec"] = spec + return out + +def expand_list(nodes: Iterable[Any], + scope: Mapping[str, Any], + env: Environment) -> List[Dict[str, Any]]: + """ + Core recursive expander. 
+ + If a node is a loop node (loopName + foreach.domain + tasks list): + * Enumerate cartesian product over the domain (keys sorted for determinism) + * For each binding, extend scope and recursively expand the child 'tasks' + * Concatenate all expansions + + Else (plain Tekton task): + * Deep-copy the map; render ALL scalar strings with current scope (via Jinja) + * Append as a single task in the flat list + """ + flat: List[Dict[str, Any]] = [] + for node in nodes or []: + if _is_loop_node(node): + domain = node["foreach"]["domain"] + child_nodes = node.get("tasks", []) + for binding in _cartesian_bindings(domain): + child_scope = dict(scope) + child_scope.update(binding) + flat.extend(expand_list(child_nodes, child_scope, env)) + else: + rendered = _render_scalars(copy.deepcopy(node), scope, env) + # After scalar render, node should be a mapping for Tekton; we pass it through + flat.append(rendered) # type: ignore[arg-type] + return flat + +# ────────────────────────────────────────────────────────────────────────────── +# Internals +# ────────────────────────────────────────────────────────────────────────────── + +def _is_loop_node(node: Any) -> bool: + """A loop node must be a mapping with loopName, foreach.domain, and tasks (list).""" + from collections.abc import Mapping as _Mapping + if not isinstance(node, _Mapping): + return False + if "loopName" not in node or "foreach" not in node or "tasks" not in node: + return False + f = node["foreach"] + if not isinstance(f, dict) or "domain" not in f: + return False + if not isinstance(node["tasks"], list): + return False + return True + +def _cartesian_bindings(domain: Mapping[str, Iterable[Any]]) -> Iterable[Dict[str, Any]]: + """ + Deterministic cartesian enumeration of a domain dict: {var: [v1, v2], ...} + - Sort domain keys to ensure stable order + - Preserve the order of each value list + - Yield dicts like {'var1': v1, 'var2': v2, ...} + """ + if not isinstance(domain, Mapping): + raise 
TypeError("foreach.domain must be a mapping of {var: list}") + + keys = sorted(domain.keys()) + lists: List[List[Any]] = [] + for k in keys: + vals = domain[k] + if isinstance(vals, (str, bytes)): + raise TypeError(f"foreach.domain['{k}'] must be an iterable of values (not string)") + lists.append(list(vals)) + + for combo in itertools.product(*lists): + yield dict(zip(keys, combo)) + +def _render_scalars(obj: Any, scope: Mapping[str, Any], env: Environment) -> Any: + """ + Recursively render scalar strings using Jinja with the given scope. + - Dict: render values + - List/Tuple: render each element + - String: env.from_string(s).render(scope) + - Other scalars: return as-is + + Note: We do NOT render dict keys — only values. + """ + from collections.abc import Mapping as _Mapping + if isinstance(obj, _Mapping): + return {k: _render_scalars(v, scope, env) for k, v in obj.items()} + if isinstance(obj, list): + return [_render_scalars(v, scope, env) for v in obj] + if isinstance(obj, tuple): + return tuple(_render_scalars(v, scope, env) for v in obj) + if isinstance(obj, str): + try: + return env.from_string(obj).render(**scope) + except TemplateError as e: + raise RuntimeError(f"Template render failed for: {obj!r} (scope keys={list(scope.keys())})") from e + return obj + +# ────────────────────────────────────────────────────────────────────────────── +# CLI +# ────────────────────────────────────────────────────────────────────────────── + +def parse_args(argv=None): + ap = argparse.ArgumentParser(description="Render + expand Tekton templates with loop nodes") + ap.add_argument("-t", "--template", required=True, help="Jinja template file (use - for stdin)") + ap.add_argument("-f", "--values", required=True, help="YAML/JSON values file (use - for stdin)") + ap.add_argument("-o", "--out", help="Output YAML file (default: stdout)") + ap.add_argument("--explain", action="store_true", help="Print name/runAfter table to stderr after expansion") + return 
ap.parse_args(argv) + +def _read_text(path: str) -> str: + return sys.stdin.read() if path == "-" else open(path, "r").read() + +def _load_values(path: str) -> Dict[str, Any]: + data = _read_text(path) + return yaml.safe_load(data) or {} + +def _explain(expanded: Mapping[str, Any]) -> None: + def print_section(title: str, items: List[Mapping[str, Any]]): + print(f"# {title}", file=sys.stderr) + print(f"{'TASK NAME':<60} RUNAFTER", file=sys.stderr) + print("-" * 90, file=sys.stderr) + for t in items: + name = t.get("name", "") # type: ignore[assignment] + ra = t.get("runAfter", []) + ra_str = ", ".join(ra) if isinstance(ra, list) else str(ra) + print(f"{name:<60} {ra_str}", file=sys.stderr) + print("", file=sys.stderr) + + spec = expanded.get("spec") or {} + tasks = spec.get("tasks", []) + print_section("spec.tasks", tasks) + if "finally" in spec: + print_section("spec.finally", spec.get("finally", [])) + +def main(argv=None) -> int: + args = parse_args(argv) + + try: + values = _load_values(args.values) + + # 1) OUTER render with globals; loop vars are preserved verbatim + env_outer = build_env_outer() + template_src = _read_text(args.template) + rendered = env_outer.from_string(template_src).render(**values) + + # 2) YAML parse + doc = yaml.safe_load(rendered) + if not isinstance(doc, dict): + print("Rendered template is not a YAML mapping (expected a Pipeline).", file=sys.stderr) + return 1 + + # 3) Loop expansion with INNER strict env (resolves loop vars) + env_inner = build_env_inner() + expanded: Dict[str, Any] = expand_document(doc, globals=values, jinja_env=env_inner) + + # 4) Optional explain + if args.explain: + _explain(expanded) + + # 5) Output + out_text = yaml.safe_dump(expanded, sort_keys=False) + if args.out: + with open(args.out, "w") as f: + f.write(out_text) + else: + sys.stdout.write(out_text) + return 0 + + except TemplateError as e: + print(f"Template render error: {e}", file=sys.stderr) + except Exception as e: + print(f"Error: {e}", 
file=sys.stderr) + return 1 + +if __name__ == "__main__": + raise SystemExit(main()) diff --git a/tektoncsample/nested-loops/pipeline.yaml.j2 b/tektoncsample/nested-loops/pipeline.yaml.j2 new file mode 100644 index 00000000..89d0dd8d --- /dev/null +++ b/tektoncsample/nested-loops/pipeline.yaml.j2 @@ -0,0 +1,60 @@ +apiVersion: tekton.dev/v1 +kind: Pipeline +metadata: + # Quote + default so YAML stays valid even if the value is missing + name: "{{ pipeline_name|default('nested-loops-demo') }}" +spec: + tasks: + - name: prep + taskRef: { name: prep-env } + + # OUTER loop: per model + - loopName: per-model + foreach: + domain: + # Safe default list if 'models' missing in values.yaml + modelRef: {{ models|default(['llama-7b','qwen-2.5-7b']) }} + tasks: + - name: "dl-{{ modelRef|dns }}" + taskRef: { name: download-model } + runAfter: [ prep ] + params: + - { name: modelRef, value: "{{ modelRef }}" } + + # INNER loop: per prefix for this model + - loopName: per-prefix + foreach: + domain: + # Safe default list if 'prefixes' missing in values.yaml + prefix: {{ prefixes|default(['A','B']) }} + tasks: + - name: "svc-{{ modelRef|dns }}-{{ prefix|slug }}" + taskRef: { name: start-service } + runAfter: [ "dl-{{ modelRef|dns }}" ] + params: + - { name: modelRef, value: "{{ modelRef }}" } + - { name: prefix, value: "{{ prefix }}" } + + - name: "job-{{ modelRef|dns }}-{{ prefix|slug }}" + taskRef: { name: run-job } + runAfter: [ "svc-{{ modelRef|dns }}-{{ prefix|slug }}" ] + params: + - { name: modelRef, value: "{{ modelRef }}" } + - { name: prefix, value: "{{ prefix }}" } + + # Per-model fan-in: wait for all prefixes to finish their job step + - name: "agg-{{ modelRef|dns }}" + taskRef: { name: aggregate-results } + runAfter: + {% for p in prefixes|default(['A','B']) %} + - "job-{{ modelRef|dns }}-{{ p|slug }}" + {% endfor %} + + finally: + # Global summary; Tekton runs 'finally' tasks only after all spec.tasks complete, and does not allow runAfter here + - name: global-summary + taskRef: { name: global-summarize } diff --git a/tektoncsample/nested-loops/values.yaml b/tektoncsample/nested-loops/values.yaml new file mode 100644 index 00000000..61c7bbe2 --- /dev/null +++ b/tektoncsample/nested-loops/values.yaml @@ -0,0 +1,3 @@ +pipeline_name: nested-loops-demo +models: ["llama-7b", "qwen-2.5-7b"] +prefixes: ["A", "B"] diff --git a/tektoncsample/object-loops/pipeline.yaml.j2 b/tektoncsample/object-loops/pipeline.yaml.j2 new file mode 100644 index 00000000..c9dc0dd3 --- /dev/null +++ b/tektoncsample/object-loops/pipeline.yaml.j2 @@ -0,0 +1,53 @@ +# pipeline.yaml.j2 — minimal example: iterate over objects +apiVersion: tekton.dev/v1 +kind: Pipeline +metadata: + name: "{{ pipeline_name }}" +spec: + params: + - name: message + type: string + + tasks: + # A setup task that runs first + - name: print-start + taskRef: { name: echo } + params: + - name: text + value: "Starting pipeline {{ pipeline_name }}" + + # Loop over a list of model objects + - loopName: per-model + foreach: + domain: + model: {{ models }} + tasks: + # Each 'model' is a dict with fields: name, port, quant + - name: "serve-{{ model.name|dns }}" + taskRef: { name: start-service } + runAfter: [ print-start ] + params: + - { name: modelRef, value: "{{ model.name }}" } + - { name: port, value: "{{ model.port }}" } + - { name: quant, value: "{{ model.quant }}" } + + - name: "test-{{ model.name|dns }}" + taskRef: { name: run-test } + runAfter: [ "serve-{{ model.name|dns }}" ] + params: + - { name: modelRef, value: "{{ model.name }}" } + - { name: port, value: "{{ model.port }}" } + - { name: message, value: "$(params.message)" } + + finally: + # Cleanup per model; 'finally' tasks run after all spec.tasks, so no runAfter is needed (or allowed) + - loopName: cleanup + foreach: + domain: + model: {{ models }} + tasks: + - name: "cleanup-{{ model.name|dns }}" + taskRef: { name: stop-service } + params: + - { name: modelRef, value: "{{ model.name }}" } diff --git 
a/tektoncsample/object-loops/values.yaml b/tektoncsample/object-loops/values.yaml new file mode 100644 index 00000000..0a96f25f --- /dev/null +++ b/tektoncsample/object-loops/values.yaml @@ -0,0 +1,9 @@ +pipeline_name: object-loop-demo + +models: + - name: llama-7b + port: "8080" + quant: "fp16" + - name: qwen-2.5-7b + port: "9090" + quant: "int4" diff --git a/tektoncsample/quickstart/pipeline.yaml.j2 b/tektoncsample/quickstart/pipeline.yaml.j2 new file mode 100644 index 00000000..c91b8855 --- /dev/null +++ b/tektoncsample/quickstart/pipeline.yaml.j2 @@ -0,0 +1,55 @@ +# ============================================================================= +# pipeline.yaml.j2 — Minimal example for tektonc (no tojson) +# ============================================================================= +apiVersion: tekton.dev/v1 +kind: Pipeline +metadata: + name: {{ pipeline_name }} +spec: + params: + - name: message + type: string + + tasks: + # 1) Plain Tekton task — unchanged by the expander + - name: print-start + taskRef: { name: echo } + params: + - name: text + value: "Starting pipeline {{ pipeline_name }}" + + # 2) Loop: one task per modelRef (compile-time fan-out) + - loopName: per-model + foreach: + domain: + # Render the list directly; YAML accepts it (e.g., ['llama-7b','qwen-2.5-7b']) + modelRef: {{ models }} + tasks: + - name: "process-{{ modelRef|dns }}" + taskRef: { name: process-model } + runAfter: [ print-start ] + params: + - { name: model, value: "{{ modelRef }}" } + # Tekton param — resolved at runtime by Tekton, not by Jinja + - { name: message, value: "$(params.message)" } + + # 3) Aggregate after all per-model tasks finish (inline list to avoid indent issues) + - name: aggregate-results + taskRef: { name: aggregate } + runAfter: [ {% for m in models %}process-{{ m|dns }}{% if not loop.last %}, {% endif %}{% endfor %} ] + params: + - name: note + value: "All models processed." 
+ + finally: + # 4) Finally loop: one cleanup per model; 'finally' tasks run after all spec.tasks complete (runAfter is not allowed here) + - loopName: cleanup + foreach: + domain: + modelRef: {{ models }} + tasks: + - name: "cleanup-{{ modelRef|dns }}" + taskRef: { name: cleanup-model } + params: + - { name: model, value: "{{ modelRef }}" } diff --git a/tektoncsample/quickstart/values.yaml b/tektoncsample/quickstart/values.yaml new file mode 100644 index 00000000..ae207fa8 --- /dev/null +++ b/tektoncsample/quickstart/values.yaml @@ -0,0 +1,4 @@ +pipeline_name: demo-pipeline +models: + - llama-7b + - qwen-2.5-7b