Internal Developer Platform

Kubernetes Terraform ArgoCD Backstage TypeScript

A learning-focused Internal Developer Platform built to explore how Backstage + GitOps can reduce service onboarding time and enforce platform standards without custom per-team setup.

What problem does it solve?

The IDP hides infrastructure complexity from software engineers by providing a self-service layer for building, deploying, and managing applications. This lets engineers focus on creating business value by reducing cognitive load and offering consistent infrastructure and deployments.

High-Level Flow

For the detailed design and local access patterns (hosts/port-forwarding), see docs/001-architecture-overview.md.

Repository Structure

platform-engineering/
├── apps/                    # Platform-managed applications
│   └── backstage/          # IDP portal with custom plugins
├── docs/                   # Architecture decisions & setup guides
├── gitops/                 # ArgoCD applications and configurations
│   ├── argo/              # Root app, projects, ApplicationSets
│   ├── apps/              # Platform-controlled apps
│   ├── clusters/          # Cluster-specific configurations
│   └── platform/          # Platform-controlled resources without a runtime, e.g. admission policies, namespaces
├── infra/                  # Terraform infrastructure code
│   ├── modules/           # Reusable modules (k8s)
│   └── envs/              # Environment configurations
└── templates/              # Service templates for scaffolding
    └── node/              # Node.js service template

Tech Stack

| Layer             | Technology                    | Purpose                                        |
|-------------------|-------------------------------|------------------------------------------------|
| Portal            | Backstage                     | Self-service UI, software catalog, scaffolding |
| GitOps            | ArgoCD                        | Declarative deployments, drift detection       |
| Infrastructure    | Terraform + Kind              | Local K8s cluster provisioning                 |
| Ingress           | Traefik                       | Traffic routing, TLS termination               |
| Secrets           | Sealed Secrets                | GitOps-compatible secret management            |
| Observability     | Prometheus Operator, Grafana  | Metrics collection, visualization              |
| Config Management | Kustomize                     | Environment-specific overlays                  |

What I Built

Custom Backstage Scaffolder Action

Extended Backstage with a custom scaffolder action that prepares the environment (env vars, secrets, etc.) for the scaffolded service repository.
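The core of such an action can be sketched as plain functions; in the real plugin these would be wrapped with `createTemplateAction` from `@backstage/plugin-scaffolder-node` and registered with the backend. All names below are illustrative, not the plugin's actual API:

```typescript
// Hypothetical core of the custom scaffolder action: turn template input
// into a .env file plus placeholders for secrets provisioned out-of-band.
// Function and field names are illustrative, not the plugin's real ones.

export interface ServiceEnvInput {
  /** Plain environment variables committed to the repo's .env.example */
  vars: Record<string, string>;
  /** Names of secrets to be provisioned elsewhere (values never templated) */
  secretNames: string[];
}

/** Render .env.example content for the scaffolded service. */
export function renderDotenv(vars: Record<string, string>): string {
  return Object.entries(vars)
    .map(([key, value]) => `${key}=${value}`)
    .join("\n");
}

/** Placeholder lines so the file documents which secrets must exist. */
export function renderSecretPlaceholders(secretNames: string[]): string {
  return secretNames.map((name) => `${name}=<set-in-secret-store>`).join("\n");
}
```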

Authentication for Backstage

Backstage UI sign-in uses GitHub OAuth (OAuth2) so users and groups map cleanly to GitHub identities, which then drives ownership in the catalog and template permissions.

Expected secrets/vars (high level): GITHUB_CLIENT_ID, GITHUB_CLIENT_SECRET for login; GITHUB_TOKEN for Backstage GitHub integration (scaffolder publishing + the catalog-info.yaml commit action).

Instead of a PAT-style GITHUB_TOKEN, a GitHub App is a better production choice due to scoped permissions and stronger auditability. If GITHUB_TOKEN is a PAT, it should be rotated regularly. This project uses GITHUB_TOKEN for simplicity.
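In Backstage's app-config.yaml this typically maps to sections like the following sketch (the repo's actual config may differ; only the env var names come from the text above):

```yaml
# Sketch of the relevant app-config.yaml sections.
auth:
  environment: development
  providers:
    github:
      development:
        clientId: ${GITHUB_CLIENT_ID}
        clientSecret: ${GITHUB_CLIENT_SECRET}

integrations:
  github:
    - host: github.com
      token: ${GITHUB_TOKEN}  # scaffolder publishing + catalog-info.yaml commits
```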

Key file: apps/backstage/plugins/scaffolder-backend-module-custom-actions/

GitOps with App-of-Apps Pattern

Implemented a hierarchical ArgoCD structure with separate projects for bootstrap, platform, and workloads. This was chosen over a single-project setup to make trust boundaries explicit, even at small scale - it's easier to relax constraints later than to introduce them.
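An app-of-apps root Application in this pattern typically looks like the sketch below; project name, repo URL, and sync policy are illustrative, and the actual manifest lives in gitops/argo/root-application.yaml:

```yaml
# Hedged sketch of a root Application; values are illustrative.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root
  namespace: argocd
spec:
  project: bootstrap            # separate projects make trust boundaries explicit
  source:
    repoURL: https://github.com/kurv1ts/platform-engineering
    targetRevision: main
    path: gitops/argo           # child Applications / ApplicationSets live here
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```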

Key file: gitops/argo/root-application.yaml

Auto-Discovery of New Services

Currently

The repository URL is currently hardcoded in the responsible ApplicationSet: auto-discovery requires the GitHub SCM Provider generator, which in turn requires a GitHub organization and relocating this project into one.

Ideally

ApplicationSet with SCM Provider generator automatically discovers repositories tagged with idp-managed and deploys them without manual ArgoCD configuration. The tradeoff: this couples deploy decisions to GitHub topics, which wouldn't scale to a multi-org setup. Good enough for now.
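The "ideal" setup would look roughly like this sketch (organization, secret reference, and template paths are placeholders, not the repo's actual values):

```yaml
# Hedged sketch of an SCM Provider generator discovering idp-managed repos.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: discovered-apps
  namespace: argocd
spec:
  generators:
    - scmProvider:
        github:
          organization: example-org       # placeholder
          tokenRef:
            secretName: github-token
            key: token
        filters:
          - labelMatch: idp-managed       # GitHub topics surface as labels
  template:
    metadata:
      name: "{{ repository }}"
    spec:
      project: workloads
      source:
        repoURL: "{{ url }}"
        targetRevision: "{{ branch }}"
        path: deploy                      # placeholder path
      destination:
        server: https://kubernetes.default.svc
        namespace: "{{ repository }}"
```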

Key file: gitops/argo/applicationSets/workloads/discovered-apps.yaml

Observability-First Service Templates

Node.js service template with OpenTelemetry auto-instrumentation, Prometheus metrics, and structured JSON logging baked in. The alternative was letting teams add observability themselves, but that leads to inconsistent instrumentation and gaps when debugging cross-service issues.
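The structured-logging piece of this can be sketched in plain TypeScript (the real template also wires OpenTelemetry auto-instrumentation and Prometheus metrics via their SDKs; the field names here are illustrative, not the template's actual schema):

```typescript
// Minimal sketch of structured JSON logging: one JSON object per line,
// so log collectors can parse entries without multi-line state.
// Field names are illustrative.

type Level = "debug" | "info" | "warn" | "error";

export function logLine(
  level: Level,
  message: string,
  fields: Record<string, unknown> = {},
  // Clock injectable for deterministic tests.
  now: () => string = () => new Date().toISOString(),
): string {
  return JSON.stringify({ timestamp: now(), level, message, ...fields });
}
```

Baking this into the template, rather than leaving it to teams, is what makes cross-service log correlation consistent.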

Key file: templates/node/

Chaos Engineering Capabilities

Demo services include configurable error rates and latency injection for testing resilience and observability pipelines. These aren't production patterns - they exist to generate interesting telemetry data for testing the observability stack.
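The injection logic amounts to something like the sketch below (parameter names are illustrative; the demo service's real implementation is in the key file underneath):

```typescript
// Sketch of configurable failure injection for generating telemetry.
// Names and shapes are illustrative, not the demo service's actual code.

export interface ChaosConfig {
  errorRate: number;    // 0..1 probability of returning a synthetic 500
  maxLatencyMs: number; // upper bound for injected delay
}

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

/** Resolve an HTTP status code, optionally delayed/failed per config. */
export async function withChaos(
  config: ChaosConfig,
  // Randomness injectable so tests can force either branch.
  random: () => number = Math.random,
): Promise<number> {
  await sleep(random() * config.maxLatencyMs);
  return random() < config.errorRate ? 500 : 200;
}
```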

Key file: apps/rental/src/index.ts

Getting Started

Prerequisites: Docker, Terraform, kubectl, ArgoCD CLI

1. Provision Infrastructure

cd infra/envs/dev
terraform init && terraform apply

To use kubectl against the Kind cluster, run:

export KUBECONFIG=~/.kube/company-x-cluster-dev-kubeconfig

2. Bootstrap ArgoCD

See docs/002-argocd-bootstrap.md for detailed steps.

3. Deploy Backstage

Prerequisite: a GitHub environment with the DOCKER_USERNAME and DOCKER_TOKEN variables set. Backstage has a CI workflow that builds, pushes to Docker Hub, and opens a PR to update the GitOps manifests automatically:

# Trigger via GitHub Actions UI or CLI
gh workflow run backstage-ci.yaml -f environment=dev

ArgoCD detects the updated image tag in gitops/apps/platform/backstage/overlays/$ENV/kustomization.yaml and syncs the deployment.

Documentation

More detailed documentation lives in the docs/ directory.
