InterLink Kubernetes Plugin

InterLink plugin to extend the capabilities of local Kubernetes clusters by offloading workloads to remote clusters.

(Figure: InterLink offloading overview)


Prerequisites

  • Python 3.12 (for local development)
  • Docker (for containerized runtime)
  • Poetry (for src/infr/scripts/docker_build_and_publish.sh)
  • Access to a remote Kubernetes cluster (kubeconfig)
  • Helm CLI (only if using the deprecated TCP tunnel feature)

How to Run

Configure

src/private/config.sample.ini defines the plugin configuration. Create config.ini from it and provide your environment values:

cp src/private/config.sample.ini src/private/config.ini

Key properties:

  • k8s.kubeconfig_path: path to the kubeconfig for the remote cluster (default: private/k8s/kubeconfig.yaml); see the Troubleshooting section for common errors
  • k8s.kubeconfig: optional inline kubeconfig as JSON
  • k8s.client_configuration: optional JSON passed to the Kubernetes Python client configuration; see the configuration object
  • app.socket_address: plugin listen address; supports TCP hosts (http://0.0.0.0) and Unix sockets (default: unix:///var/run/.plugin.sock)
  • app.socket_port: plugin listen port for TCP mode (default: 0, ignored in unix socket mode)
  • offloading.namespace_prefix: prefix for offloaded namespaces (default: offloading)
  • offloading.namespace_prefix_exclusions: namespaces excluded from prefixing
  • offloading.node_selector: optional selector JSON applied to offloaded pods
  • offloading.node_tolerations: optional tolerations JSON applied to offloaded pods

By default, config is read from src/private/config.ini. You can override this with CONFIG_FILE_PATH.
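Putting the properties above together, a minimal config.ini might look like the following. Section names are inferred from the dotted property names and the values are illustrative only:

```ini
[k8s]
kubeconfig_path=private/k8s/kubeconfig.yaml

[app]
socket_address=unix:///var/run/.plugin.sock
socket_port=0

[offloading]
namespace_prefix=offloading
node_selector={"kubernetes.io/os": "linux"}
```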

Docker Run

Images are currently published to hub.docker.com/r/mginfn/interlink-kubernetes-plugin.

If your config files are under ./src/private, run the plugin in socket mode:

docker run --rm \
  -v ./src/private:/interlink-kubernetes-plugin/private \
  -v ./.devcontainer/sockets:/root/sockets \
  -e APP_SOCKET_ADDRESS=unix:///root/sockets/.plugin.sock \
  docker.io/mginfn/interlink-kubernetes-plugin:latest

For TCP mode:

docker run --rm \
  -v ./src/private:/interlink-kubernetes-plugin/private \
  -e APP_SOCKET_ADDRESS=http://0.0.0.0 \
  -e APP_SOCKET_PORT=4000 \
  -p 30400:4000 \
  docker.io/mginfn/interlink-kubernetes-plugin:latest

Image Build and Publish

Use src/infr/scripts/docker_build_and_publish.sh to build and publish the plugin image.

Default behavior:

  • reads image version from pyproject.toml
  • builds mginfn/interlink-kubernetes-plugin:<version>
  • pushes <version> and latest tags

Run from repository root:

src/infr/scripts/docker_build_and_publish.sh

Build only (skip push):

src/infr/scripts/docker_build_and_publish.sh --no-push

Set an explicit image tag:

src/infr/scripts/docker_build_and_publish.sh --tag 1.2.0

If --tag does not match the version in pyproject.toml, the script prompts whether to update pyproject.toml to the provided tag before proceeding.

Create deprecated chart archives (opt-in):

src/infr/scripts/docker_build_and_publish.sh --create-chart-archives

Show all options:

src/infr/scripts/docker_build_and_publish.sh --help

Development Run

Install dependencies:

pip install -r src/infr/containers/dev/requirements.txt

Start the API server:

cd src
python -m app.server

Socket Mode

Configure the plugin in src/private/config.ini:

[app]
socket_address=unix:///var/run/.plugin.sock
socket_port=0

Then run:

cd src
python -m app.server

The InterLink API server must be configured with the same socket path:

SidecarURL: "unix:///var/run/.plugin.sock"
SidecarPort: "0"

In socket mode, the plugin exposes the same API endpoints (/status, /getLogs, /create, /delete) and forwards them to KubernetesPluginService exactly as in TCP mode.
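As a sanity check, you can call the plugin over the Unix socket. A minimal stdlib-only Python sketch (the socket path is whatever you configured in app.socket_address; `UnixHTTPConnection` and `get_status` are illustrative helpers, not part of the plugin):

```python
import http.client
import socket


class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that speaks HTTP over a Unix domain socket."""

    def __init__(self, socket_path: str):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock


def get_status(socket_path: str) -> tuple[int, bytes]:
    """GET /status from a plugin listening on the given socket path."""
    conn = UnixHTTPConnection(socket_path)
    try:
        conn.request("GET", "/status")
        resp = conn.getresponse()
        return resp.status, resp.read()
    finally:
        conn.close()
```

Usage: `get_status("/var/run/.plugin.sock")` returns the HTTP status code and response body, mirroring what `curl --unix-socket` would show.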

Install via Ansible role

See Ansible Role InterLink > In-cluster to install InterLink components together with this Kubernetes Plugin in a running cluster.

API Endpoints

The v1 controller exposes:

  • GET /status
  • GET /getLogs
  • POST /create
  • POST /delete

Interactive docs are available at /docs (configurable via app.api_docs_path).

Features

InterLink Mesh Networking

The Kubernetes plugin supports the InterLink Mesh Networking feature, which enables offloaded pods running on the remote cluster to seamlessly communicate with services and pods in the local cluster, and vice versa.

The plugin implements the mesh networking setup through a mesh-setup Sidecar Container that is injected into offloaded pods. Accordingly, the InterLink Virtual Kubelet in the local cluster must be configured to provide a custom mesh setup script:

virtualNode:
  network:
    # Enable full mesh networking
    fullMesh: true
    ... other mesh config ...
    # Mesh script for Kubernetes plugin
    meshScriptTemplatePath: "/etc/interlink/custom-mesh.sh"
    meshScriptConfigMapName: "mesh-script-template-k8s"

where the custom script is defined by the following ConfigMap: src/infr/manifests/mesh-script-template-k8s.sh.

The mesh networking setup creates a secure overlay network between the local and remote clusters, allowing pods to communicate as if they were in the same cluster. Key ideas:

  • "pod identity" lives in the local cluster
  • "pod execution" lives in the remote cluster (the offloaded pod)
  • a WireGuard-over-WebSocket mesh plus local DNAT/MASQUERADE makes this feel seamless

Architectural diagram:

┌─────────────────────────────────────────────────────────────────────────────────────────────┐
│                                         LOCAL CLUSTER                                       │
│                                                                                             │
│   ┌───────────────┐       ┌───────────────┐        ┌───────────────────────────────────┐    │
│   │    Ingress    │ ----> │    Service    │ -----> │   "Shadow" Pod (appears local)    │    │
│   └───────────────┘       └───────────────┘        └───────────────────────────────────┘    │
│                                                          |                                  │
│                                                          | inbound traffic to pod IP        │
│                                                          v                                  │
│        ┌─────────────────────────────────────────────────────────────────────────────┐      │
│        │                         Shadow Pod: 3 containers                            │      │
│        │                                                                             │      │
│        │  (A) wstunnel-server                                                        │      │
│        │   - listens: ws://0.0.0.0:28080                                             │      │
│        │   - restrict upgrade path prefix: <secret>                                  │      │
│        │   - restrict-to: 127.0.0.1:51820                                            │      │
│        │                                                                             │      │
│        │  (B) WireGuard server (wg-quick up wg0)                                     │      │
│        │   - wg0 IP: 10.7.0.1/32                                                     │      │
│        │   - listens UDP: 51820 (localhost-only enforced by iptables)                │      │
│        │   - NAT: MASQUERADE for overlay                                             │      │
│        │                                                                             │      │
│        │  (C) iptables forwarder (ALL traffic -> remote peer)                        │      │
│        │   - PREROUTING DNAT: almost all TCP/UDP arriving on main iface              │      │
│        │       EXCEPT ports (28080, 51820, mesh/health ports...)                     │      │
│        │       --> 10.7.0.2                                                          │      │
│        │   - POSTROUTING MASQUERADE: replies come back through shadow pod            │      │
│        │   - FORWARD ACCEPT: main iface <-> wg0                                      │      │
│        └─────────────────────────────────────────────────────────────────────────────┘      │
│                                               |                                             │
│                                               | DNAT’d traffic routed via wg0               │
│                                               v                                             │
└─────────────────────────────────────────────────────────────────────────────────────────────┘
                                                |
                                                | WireGuard tunnel (UDP)
                                                | carried inside WebSocket (wstunnel)
                                                v
┌─────────────────────────────────────────────────────────────────────────────────────────────┐
│                                         REMOTE CLUSTER                                      │
│                                                                                             │
│   ┌───────────────────────────────────────────────────────────────────────────────────┐     │
│   │             Offloaded Pod (real execution happens here): 2 containers             │     │
│   │                                                                                   │     │
│   │  (A) mesh-setup sidecar (runs mesh.sh)                                            │     │
│   │    wstunnel-client                                                                │     │
│   │    - local UDP listen: 127.0.0.1:51821                                            │     │
│   │    - forwards UDP to: 127.0.0.1:51820 (on local cluster shadow pod) via WS        │     │
│   │    - connects to: ws://<local-ingress-host>:80/<secret-prefix>                    │     │
│   │    wireguard-go client                                                            │     │
│   │    - wg IFACE IP: 10.7.0.2/32                                                     │     │
│   │    - Endpoint: 127.0.0.1:51821                                                    │     │
│   │    - AllowedIPs include local cluster CIDRs (pods/services)                       │     │
│   │    - Routes added for those CIDRs via wg IFACE                                    │     │
│   │                                                                                   │     │
│   │  (B) Workload container (your app)                                                │     │
│   │    - receives forwarded traffic as if it were hitting local pod                   │     │
│   │    - sends responses back through wg tunnel                                       │     │
│   └───────────────────────────────────────────────────────────────────────────────────┘     │
└─────────────────────────────────────────────────────────────────────────────────────────────┘
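The forwarding rules in container (C) of the shadow pod can be illustrated with a small rule generator. This is a hypothetical sketch of the shape of the DNAT/MASQUERADE setup, not the actual rules in the mesh script:

```python
def mesh_forward_rules(remote_peer_ip: str, excluded_ports, iface: str = "eth0"):
    """Build iptables-style rule strings: DNAT nearly all inbound TCP/UDP
    on the main interface to the remote WireGuard peer, except the mesh's
    own control ports, and MASQUERADE so replies return through the
    shadow pod."""
    excl = ",".join(str(p) for p in sorted(excluded_ports))
    rules = [
        f"iptables -t nat -A PREROUTING -i {iface} -p {proto} "
        f"-m multiport ! --dports {excl} -j DNAT --to-destination {remote_peer_ip}"
        for proto in ("tcp", "udp")
    ]
    # Replies from the remote peer come back through the shadow pod
    rules.append("iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE")
    # Allow forwarding between the main interface and the WireGuard interface
    rules.append(f"iptables -A FORWARD -i {iface} -o wg0 -j ACCEPT")
    rules.append(f"iptables -A FORWARD -i wg0 -o {iface} -j ACCEPT")
    return rules
```

For example, `mesh_forward_rules("10.7.0.2", {28080, 51820})` produces DNAT rules that exclude the wstunnel and WireGuard ports, matching the exclusions shown in the diagram.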

Pod Volumes

Note: this feature is experimental and may be subject to breaking changes.

The plugin supports offloading pods that reference spec.volumes and mount them with spec.containers[*].volumeMounts for:

  • configMap
  • secret
  • emptyDir
  • persistentVolumeClaim

Behavior summary:

  • referenced ConfigMap and Secret objects are offloaded and scoped to the pod UID (i.e., their names are suffixed with the pod's UID)
  • when the offloaded pod is deleted, the scoped ConfigMap and Secret objects are deleted as well
  • PVC offloading is enabled per pod via the metadata annotation interlink.io/remote-pvc (a comma-separated list of PVC names to offload)
  • the PVC cleanup policy is controlled by the PVC annotation interlink.io/pvc-retention-policy (delete or retain)

See example manifest: test-pod-pvc.yaml.
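For instance, the annotations described above can be attached to manifests like the following (pod, container, and PVC names are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod-pvc
  annotations:
    # comma-separated list of PVC names to offload
    interlink.io/remote-pvc: "data-pvc"
spec:
  containers:
    - name: app
      image: busybox
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
  annotations:
    # cleanup policy for the offloaded PVC: delete or retain
    interlink.io/pvc-retention-policy: "retain"
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
```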

Notes:

  • since the pod is submitted to the local cluster, the PVC must exist in the local cluster as well; otherwise Kubernetes won't schedule the pod on the virtual node (and the pod won't be offloaded)
  • current PVC support is experimental and not yet supported by the InterLink API Server; see interlink-hq/interLink#396

Microservices Offloading (Deprecated)

Note: this feature is deprecated and may be removed in future releases. It is recommended to use InterLink Mesh Networking instead.

The plugin supports offloading HTTP microservices through TCP tunnel Helm charts: src/infr/charts/tcp-tunnel/README.md.

To enable this feature, configure in config.ini:

  • tcp_tunnel.gateway_host: gateway host IP/DNS
  • tcp_tunnel.gateway_port: gateway SSH port
  • tcp_tunnel.gateway_ssh_private_key: SSH private key

For offloaded microservices, explicitly declare container TCP ports in pod specs. See test-microservice.yaml.

The plugin installs a Bastion release in the remote cluster for each offloaded pod. You must install and expose one Gateway instance in the local cluster.

Troubleshooting

401 Unauthorized

If the plugin raises 401 Unauthorized, check the remote kubeconfig.

The cluster section must include the URL of the Kubernetes API Server and the inline base64-encoded CA certificate:

clusters:
- cluster:
    certificate-authority-data: <base64-encoded-CA-certificate>
    server: https://api-kubernetes.example.com
  name: my-cluster

Alternatively, provide a CA certificate path and ensure it is readable by the plugin:

clusters:
- name: cluster-name
  cluster:
    certificate-authority: /path/to/ca.crt
    server: https://api-kubernetes.example.com

You can also disable server certificate verification (the plugin will then log an InsecureRequestWarning):

clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://api-kubernetes.example.com
  name: my-cluster

In the users section, include client certificate/key or token authentication:

users:
- name: admin
  user:
    client-certificate-data: <base64-encoded-certificate>
    client-key-data: <base64-encoded-key>

Or file paths:

users:
- name: admin
  user:
    client-certificate: /path/to/client.crt
    client-key: /path/to/client.key

Or token-based authentication:

users:
- name: admin
  user:
    token: <auth-token>
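A quick way to catch the misconfigurations above before starting the plugin is to check the parsed kubeconfig for the fields this section describes. A hedged sketch (`validate_kubeconfig` is not part of the plugin; it assumes the kubeconfig has already been parsed into a dict, e.g. with a YAML loader):

```python
def validate_kubeconfig(cfg: dict) -> list[str]:
    """Return a list of problems with a parsed kubeconfig dict:
    each cluster needs an https server URL plus a CA (or explicit
    skip-verify), each user needs a client certificate pair or a token."""
    problems = []
    for c in cfg.get("clusters", []):
        cluster = c.get("cluster", {})
        if not cluster.get("server", "").startswith("https://"):
            problems.append(f"cluster {c.get('name')}: missing https server URL")
        if not (cluster.get("certificate-authority-data")
                or cluster.get("certificate-authority")
                or cluster.get("insecure-skip-tls-verify")):
            problems.append(f"cluster {c.get('name')}: no CA configured")
    for u in cfg.get("users", []):
        user = u.get("user", {})
        has_cert = ((user.get("client-certificate-data") or user.get("client-certificate"))
                    and (user.get("client-key-data") or user.get("client-key")))
        if not (has_cert or user.get("token")):
            problems.append(f"user {u.get('name')}: no client certificate or token")
    return problems
```

An empty return value means the kubeconfig at least has the shape the plugin expects; it does not prove the credentials are valid.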

TLS verify failed

If the plugin raises "certificate verify failed: unable to get local issuer certificate" while reaching the remote cluster, your cluster may use self-signed certificates.

You can disable certificate verification:

  • k8s.client_configuration={"verify_ssl": false}

Or explicitly provide CA/client certificates:

  • k8s.client_configuration={"verify_ssl": true, "ssl_ca_cert": "private/k8s-microk8s/ca.crt", "cert_file": "private/k8s-microk8s/client.crt", "key_file": "private/k8s-microk8s/client.key"}
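Since the property value must be valid JSON (note the lowercase true/false booleans, unlike Python's), it is worth checking it with json.loads before pasting it into config.ini; the keys map onto attributes of the Kubernetes Python client's Configuration object:

```python
import json

# The value from the k8s.client_configuration property above
value = ('{"verify_ssl": true, "ssl_ca_cert": "private/k8s-microk8s/ca.crt", '
         '"cert_file": "private/k8s-microk8s/client.crt", '
         '"key_file": "private/k8s-microk8s/client.key"}')

cfg = json.loads(value)  # raises json.JSONDecodeError on a typo
assert cfg["verify_ssl"] is True
```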

Credits

Originally created by Mauro Gattari @ INFN in 2024.
