18 changes: 18 additions & 0 deletions .devcontainer/devcontainer.json
@@ -0,0 +1,18 @@
{
  "name": "FFRD Specs devcontainer",
  "image": "mcr.microsoft.com/devcontainers/base:bookworm",
  "features": {
    "ghcr.io/devcontainers/features/docker-outside-of-docker:1": {
      "enableNonRootDocker": "true"
    },
    "ghcr.io/devcontainers/features/python:1": {}
  },
  "postCreateCommand": ".devcontainer/setup-argo.sh",
  "customizations": {
    "vscode": {
      "extensions": [
        "ms-kubernetes-tools.vscode-kubernetes-tools"
      ]
    }
  }
}
45 changes: 45 additions & 0 deletions .devcontainer/setup-argo.sh
@@ -0,0 +1,45 @@
#!/bin/bash
set -e

echo "🚀 Setting up Argo Workflows development environment..."

echo "📦 Installing kubectl..."
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/

echo "📦 Installing Argo CLI..."
curl -sLO https://github.com/argoproj/argo-workflows/releases/download/v3.7.0/argo-linux-amd64.gz
gunzip argo-linux-amd64.gz
chmod +x argo-linux-amd64
sudo mv argo-linux-amd64 /usr/local/bin/argo

echo "🧹 Cleaning up any existing k3s-server container..."
docker rm -f k3s-server 2>/dev/null || true

echo "🔧 Starting k3s Kubernetes cluster..."
docker run -d --name k3s-server --privileged -p 6443:6443 rancher/k3s:latest server --disable=traefik

echo "⏳ Waiting for k3s to start..."
sleep 10

echo "🔧 Configuring kubectl..."
mkdir -p ~/.kube
CONTAINER_IP=$(docker inspect k3s-server | grep '"IPAddress"' | head -1 | cut -d'"' -f4)
docker exec k3s-server cat /etc/rancher/k3s/k3s.yaml | sed "s/127.0.0.1/$CONTAINER_IP/g" > ~/.kube/config
sed -i '/certificate-authority-data:/d' ~/.kube/config
sed -i '/server:/a\    insecure-skip-tls-verify: true' ~/.kube/config

echo "📦 Installing Argo Workflows..."
kubectl create namespace argo || true
kubectl apply -n argo -f https://github.com/argoproj/argo-workflows/releases/download/v3.7.0/install.yaml

echo "⏳ Waiting for Argo Workflows to be ready..."
kubectl wait --for=condition=available --timeout=300s deployment/argo-server -n argo || echo "⚠️ Argo server may still be starting..."
kubectl wait --for=condition=available --timeout=300s deployment/workflow-controller -n argo || echo "⚠️ Workflow controller may still be starting..."

echo "🔐 Setting up RBAC for workflows..."
kubectl create rolebinding default-admin --clusterrole=admin --serviceaccount=default:default

echo "✅ Setup complete!"
9 changes: 9 additions & 0 deletions docs/proposals/orchestration/orchestration.md
@@ -0,0 +1,9 @@
{% include "proposals/orchestration/standard.md" %}

______________________________________________________________________

{% include "proposals/orchestration/technical-capabilities.md" %}

______________________________________________________________________

{% include "proposals/orchestration/reference.md" %}
248 changes: 248 additions & 0 deletions docs/proposals/orchestration/reference.md
@@ -0,0 +1,248 @@
## 📚 Reference

### Argo Workflows Implementation

This reference implementation demonstrates how Argo Workflows could support FFRD orchestration needs. Argo Workflows is presented as one example of an orchestration system with the relevant capabilities; other orchestration systems could satisfy the same workflow requirements.

#### Implementation Overview

The reference implementation uses Argo Workflows running on Kubernetes to illustrate:

- DAG-based workflow execution patterns with explicit task dependencies
- Container execution approaches with shared volume access
- Parallel task execution techniques with parameterization
- Shared volume management strategies for data exchange between tasks
- Logging and monitoring capabilities for workflow observability

#### Example Workflow Structure

The following example demonstrates a basic FFRD workflow pattern with parallel processing and data collection:

```yaml
# This is a simplified example showing the orchestration pattern
# Full FFRD workflows would use FFRD-compliant containers and configurations

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: dag-example-
spec:
  entrypoint: main
  volumeClaimTemplates: # Create a shared volume for the workflow
    - metadata:
        name: workdir
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
  templates:
    - name: main
      dag:
        tasks:
          - name: generate-number
            template: generate-number
          - name: process-numbers
            dependencies: [generate-number]
            template: process-numbers
          - name: sum-results
            dependencies: [process-numbers]
            template: sum-results

    - name: generate-number
      container:
        image: alpine:3.18
        command: [sh, -c]
        args: ["echo 5 > /work/number.txt"]
        volumeMounts:
          - name: workdir
            mountPath: /work

    - name: process-numbers
      parallelism: 2 # Run two steps at a time
      steps:
        - - name: process-number
            template: process-number
            withItems: # Iterate over this list of numbers
              - 1
              - 2
              - 3
              - 4
            arguments:
              parameters:
                - name: item
                  value: "{{ '{{item}}' }}" # Pass the item from the list to the process-number template

    - name: process-number
      inputs:
        parameters:
          - name: item
      container:
        image: alpine:3.18
        command: [sh, -c]
        args:
          - |
            num=$(cat /work/number.txt)
            result=$((num + {{ '{{inputs.parameters.item}}' }}))
            echo $result > /work/result-{{ '{{inputs.parameters.item}}' }}.txt
        volumeMounts:
          - name: workdir
            mountPath: /work

    - name: sum-results
      container:
        image: alpine:3.18
        command: [sh, -c]
        args:
          - |
            sum=0
            for file in /work/result-*.txt; do
              sum=$((sum + $(cat $file)))
            done
            echo "Total sum: $sum"
        volumeMounts:
          - name: workdir
            mountPath: /work
```

#### Key Implementation Features

##### DAG Structure

- Uses Argo's DAG template to define explicit task dependencies (`dependencies: [generate-number]`)
- Demonstrates parallel execution through steps with `withItems` parameterization
- Shows sequential workflow phases (generate → process → collect)
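The dependency mechanism generalizes to arbitrary fan-out/fan-in shapes. A minimal illustrative fragment, with hypothetical task and template names that are not part of the reference workflow:

```yaml
# Hypothetical diamond-shaped DAG: B and C run in parallel after A,
# and D waits for both to finish.
- name: diamond
  dag:
    tasks:
      - name: A
        template: step
      - name: B
        dependencies: [A]
        template: step
      - name: C
        dependencies: [A]
        template: step
      - name: D
        dependencies: [B, C] # fan-in: runs only after both B and C complete
        template: step
```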

##### Container Execution

- Executes standard containers (Alpine Linux) as a pattern for FFRD containers
- Demonstrates passing command line arguments to containers
- Shows volume mounting for data access across all tasks
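Task containers can also carry standard Kubernetes container settings. A sketch showing resource requests and limits on a task container, which would matter for compute-heavy FFRD model runs (the values here are illustrative; the reference workflow above does not set any):

```yaml
- name: process-number
  container:
    image: alpine:3.18
    command: [sh, -c]
    args: ["echo processing"]
    resources:
      requests:          # scheduler reserves at least this much
        cpu: 500m
        memory: 256Mi
      limits:            # container is throttled/killed beyond this
        cpu: "1"
        memory: 512Mi
```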

##### Data Sharing

- Uses persistent volume claims (`volumeClaimTemplates`) for shared storage
- Consistent volume mounting (`/work`) across all workflow tasks
- Demonstrates file-based data exchange between workflow steps
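Shared volumes are one common data-exchange mechanism in Argo; the other is output/input artifacts, which copy files through an artifact repository (e.g. S3 or MinIO) and therefore require one to be configured. A sketch of the artifact style, using hypothetical template names:

```yaml
- name: generate-number
  container:
    image: alpine:3.18
    command: [sh, -c]
    args: ["echo 5 > /tmp/number.txt"]
  outputs:
    artifacts:
      - name: number
        path: /tmp/number.txt # captured after the container exits

- name: consume-number
  inputs:
    artifacts:
      - name: number
        path: /tmp/number.txt # placed before the container starts
  container:
    image: alpine:3.18
    command: [sh, -c]
    args: ["cat /tmp/number.txt"]
```

Artifacts avoid the single-node constraint of a `ReadWriteOnce` volume, at the cost of copying data through the repository on each hand-off.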

##### Parameterization

- Shows parameter passing with `withItems` for parallel task execution
- Demonstrates template parameter usage with `inputs.parameters.item`
- Illustrates how to iterate over lists to create multiple parallel tasks
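`withItems` requires the list to be known when the workflow is authored. When the fan-out is only known at runtime, Argo's `withParam` can iterate over a JSON list emitted by an upstream task. A sketch with hypothetical task names:

```yaml
- name: main
  dag:
    tasks:
      - name: generate-list
        template: generate-list # must print a JSON list, e.g. ["1","2","3"], to stdout
      - name: process-number
        dependencies: [generate-list]
        template: process-number
        withParam: "{{ '{{tasks.generate-list.outputs.result}}' }}"
        arguments:
          parameters:
            - name: item
              value: "{{ '{{item}}' }}"
```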

#### Deployment Requirements

##### Infrastructure

- Kubernetes cluster
- Argo Workflows
- Container runtime (Docker, containerd, or CRI-O)
- Persistent storage provisioner

##### Configuration

- Argo Workflows controller installation
- RBAC configuration for workflow execution
- Storage class configuration for volume provisioning
- Container registry access credentials
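For the RBAC item, the dev container setup script uses `kubectl create rolebinding`; the declarative equivalent is a RoleBinding manifest like the following (the namespace and binding name are examples):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: default-admin
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin # broad grant, suitable for local development only
subjects:
  - kind: ServiceAccount
    name: default # service account the workflow pods run as
    namespace: default
```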

#### Usage Examples

##### Validate Workflow

```bash
# Validate the workflow definition
argo lint reference.yaml
```

##### Submit Workflow

```bash
# Submit the workflow to Argo
argo submit reference.yaml
```

##### Monitor Execution

```bash
# List all workflows
argo list

# Watch workflow execution (use actual workflow name from list)
argo watch dag-example-abc123

# View workflow logs
argo logs dag-example-abc123
```

##### Access Results

```bash
# View workflow status and results
argo get dag-example-abc123
```

### Dev Container Setup (Optional)

This section describes how to set up a development environment for running Argo Workflows locally using Visual Studio Code Dev Containers. The setup provides a lightweight Kubernetes cluster (k3s) with Argo Workflows installed, so you can run and test the reference implementation on your own machine.

1. Open [this](https://github.com/fema-ffrd/specs) repository in VS Code
1. When prompted, click "Reopen in Container" or use the Command Palette (Ctrl+Shift+P) and select "Dev Containers: Reopen in Container"
1. The container will automatically set up the environment and install dependencies

#### What Gets Installed

The setup includes:

- **Base**: Debian 12 (bookworm) container
- **Docker**: Docker-outside-of-Docker for running k3s
- **kubectl**: Kubernetes CLI
- **argo**: Argo Workflows CLI v3.7.0
- **k3s**: Lightweight Kubernetes cluster
- **Argo Workflows**: v3.7.0 installed in the cluster

```
┌──────────────────────────────────────┐
│             DevContainer             │
│  ┌─────────────┐  ┌─────────────┐    │
│  │    argo     │  │   kubectl   │    │
│  │     CLI     │  │     CLI     │    │
│  └─────────────┘  └─────────────┘    │
│                                      │
│  ┌────────────────────────────────┐  │
│  │           Docker Host          │  │
│  │  ┌──────────────────────────┐  │  │
│  │  │      k3s Container       │  │  │
│  │  │  ┌────────────────────┐  │  │  │
│  │  │  │   Argo Workflows   │  │  │  │
│  │  │  └────────────────────┘  │  │  │
│  │  └──────────────────────────┘  │  │
│  └────────────────────────────────┘  │
└──────────────────────────────────────┘
```

#### Useful Commands

Once setup is complete, you can use these commands:

```bash
# Validate workflow files
argo lint reference/orchestration/argo/reference.yaml

# Submit workflow files
argo submit reference/orchestration/argo/reference.yaml

# Watch the workflow execution
argo submit --watch reference/orchestration/argo/reference.yaml

# List all workflows
argo list

# View logs for a specific workflow
argo logs <workflow-name>
```

#### Useful Links

- Read the [Argo Workflows documentation](https://argo-workflows.readthedocs.io/)