
This project has been created as a self-directed portfolio project by itanvuia

πŸ”¨ ForgeStack

Infrastructure as Code

"Don't repeat yourself β€” automate yourself."

β€” Every engineer who configured the same server twice

42 School Β· Ansible Β· Debian Β· Arch Linux Β· Hyper-V Β· Shell


πŸ“– About this Project

ForgeStack takes everything learned in Born2beRoot and encodes it as reproducible, version-controlled automation. Instead of configuring a server by hand (running commands, editing files, hoping you remember every step) you write declarative Ansible playbooks that describe the desired state. One command provisions a fully hardened Linux server from a minimal install.

The project targets both Debian and Arch Linux from a single codebase, using Ansible's conditional task execution and OS-specific group variables to handle the differences between distributions.
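The pattern looks roughly like this (a minimal sketch; the task names and modules shown are illustrative, not copied from the repo):

```yaml
# One task file serves both distributions: each task guards itself
# with a `when:` condition on the gathered OS facts.
- name: Update package cache (Debian)
  ansible.builtin.apt:
    update_cache: true
  when: ansible_facts['os_family'] == "Debian"

- name: Update package cache (Arch)
  community.general.pacman:
    update_cache: true
  when: ansible_facts['os_family'] == "Archlinux"
```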

This is Phase 1 of a two-phase project. Phase 2 will extend this hardened base to deploy a containerised AI inference server (Ollama + reverse proxy + authentication) on top of the same infrastructure.

  β€’ Configuration management: Ansible
  β€’ Target OS: Debian 13 (Trixie) + Arch Linux
  β€’ Virtualisation: Hyper-V (dual-adapter networking)
  β€’ Control node: WSL2 Ubuntu via VS Code



πŸ€” Why Ansible

The project needed a configuration management tool. Ansible won for three reasons:

Agentless. Ansible connects over SSH: no daemon, no client software on the target. The VM needs nothing installed beyond what a minimal OS ships with (plus Python on Arch). This is the same SSH access model I already learned in Born2beRoot.

Declarative YAML. Playbooks describe what the server should look like, not how to get there. You can read an Ansible role without a manual. More importantly, a recruiter or colleague can read it too.

Idempotent. Running the same playbook twice produces zero changes on the second run. This is the single most important property of good infrastructure automation: it means your playbooks are safe to run at any time, against any state, and the result is always the same.
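The difference is easy to see in task form (an illustrative sketch, not the project's actual tasks):

```yaml
# Idempotent: the module inspects current state and only acts when
# needed β€” the second run reports "ok" instead of "changed".
- name: Ensure the sshusers group exists
  ansible.builtin.group:
    name: sshusers
    state: present

# Not idempotent: a raw command runs blindly every time β€” and this
# one would actually fail on the second run, because the group
# already exists.
- name: Create the group the fragile way
  ansible.builtin.command: groupadd sshusers
```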

βš–οΈ Ansible vs Alternatives

|                | Ansible | Terraform | Puppet / Chef |
|----------------|---------|-----------|---------------|
| Purpose        | Configuration management (what's on the server) | Infrastructure provisioning (the server itself) | Configuration management |
| Agent required | No β€” SSH only | No β€” API-based | Yes β€” agent on every node |
| Language       | YAML | HCL | Ruby DSL |
| Learning curve | Low | Medium | High |
| Idempotent     | Yes (with proper modules) | Yes (declarative by design) | Yes |
| Best for       | Server configuration, application deployment | Cloud resources, VMs, networking | Large-scale enterprise fleet management |

Ansible handles configuration; Terraform handles provisioning. They complement each other. See the Roadmap for plans to add a Terraform layer.


πŸ—οΈ Architecture

                              β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                              β”‚        HYPER-V HOST          β”‚
                              β”‚        (Windows 11)          β”‚
                              β”‚                              β”‚
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”   SSH   β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”  β”‚
β”‚   CONTROL NODE    │─────────┼─▢│   forgestack-deb       β”‚  β”‚
β”‚                   β”‚         β”‚  β”‚   Debian 13 Minimal    β”‚  β”‚
β”‚   WSL2 Ubuntu     β”‚         β”‚  β”‚   192.168.50.10        β”‚  β”‚
β”‚   Ansible         β”‚         β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜  β”‚
β”‚   VS Code + WSL   β”‚         β”‚                              β”‚
β”‚                   β”‚   SSH   β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”  β”‚
β”‚   ~/forgestack/   │─────────┼─▢│   forgestack-arch      β”‚  β”‚
β”‚     playbooks/    β”‚         β”‚  β”‚   Arch Linux           β”‚  β”‚
β”‚     roles/        β”‚         β”‚  β”‚   192.168.50.20        β”‚  β”‚
β”‚     inventory/    β”‚         β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜  β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜         β”‚                              β”‚
                              β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Network Layout:
  Internal Switch (ForgeStack) 192.168.50.0/24 ─── Ansible traffic
  Default Switch (DHCP) ─────────────────────── Internet access

The control node (WSL2) sends Ansible commands over SSH to both targets. Each VM has two network adapters: an Internal Switch with a static IP for Ansible (predictable, never changes) and a Default Switch for internet access (DHCP, used for package downloads). The playbook targets all hosts and uses OS-specific conditionals and group variables to handle differences between distributions.


πŸ“‚ Project Structure

forgestack/
β”œβ”€β”€ ansible.cfg                   # Ansible configuration (roles path, inventory)
β”œβ”€β”€ inventory/
β”‚   β”œβ”€β”€ hosts.yml                 # Multi-OS target inventory
β”‚   └── group_vars/
β”‚       β”œβ”€β”€ all.yml               # Shared variables (users, sudo, password policy)
β”‚       β”œβ”€β”€ debian.yml            # Debian-specific packages
β”‚       └── arch.yml              # Arch-specific packages
β”œβ”€β”€ playbooks/
β”‚   └── site.yml                  # Master playbook β€” targets all hosts
β”œβ”€β”€ roles/
β”‚   β”œβ”€β”€ base/                     # OS updates, essential packages, hostname, timezone
β”‚   β”‚   └── tasks/
β”‚   β”‚       └── main.yml
β”‚   β”œβ”€β”€ users/                    # User creation, sudo, groups, password policy
β”‚   β”‚   β”œβ”€β”€ tasks/
β”‚   β”‚   β”‚   └── main.yml
β”‚   β”‚   └── templates/
β”‚   β”‚       β”œβ”€β”€ sudoers.j2        # Sudo rules template
β”‚   β”‚       └── pwquality.conf.j2 # Password complexity template (Arch)
β”‚   β”œβ”€β”€ ssh/                      # SSH hardening (planned)
β”‚   β”œβ”€β”€ firewall/                 # UFW / iptables rules (planned)
β”‚   β”œβ”€β”€ security/                 # AppArmor enforcement (planned)
β”‚   └── monitoring/               # Cron-based monitoring (planned)
└── README.md

Each role is self-contained with its own tasks, handlers, and templates. This structure is designed for extensibility. Phase 2 roles (docker/, ollama/, reverse_proxy/) will slot in alongside existing roles without modifying them.
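Under that structure, the master playbook stays a short ordered list of roles (a hypothetical sketch of playbooks/site.yml; the real file may differ):

```yaml
- name: Provision ForgeStack servers
  hosts: all
  become: true
  roles:
    - base
    - users
    # Planned Phase 1 roles, and Phase 2 roles later, slot in
    # here without touching the entries above:
    # - ssh
    # - firewall
    # - security
    # - monitoring
```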


βš™οΈ Roles Overview

| Role       | What It Does | Multi-OS | Born2beRoot Equivalent |
|------------|--------------|----------|------------------------|
| base       | System updates, essential packages, hostname, timezone | βœ… apt / pacman conditionals | First steps after install |
| users      | User accounts, groups, sudo rules, SSH keys, password policy | βœ… pam.d / pwquality.conf | User & group management, sudoers config |
| ssh        | Custom port, key-only auth, root login disabled | πŸ”œ Planned | SSH hardening on port 4242 |
| firewall   | Deny-all default, allow only SSH port | πŸ”œ Planned | UFW configuration |
| security   | AppArmor enabled and enforcing at boot | πŸ”œ Planned | AppArmor setup |
| monitoring | Deploy monitoring script, configure cron schedule | πŸ”œ Planned | monitoring.sh + cron |

πŸ‘₯ User Architecture

The playbook provisions three accounts with scoped access, reflecting real-world access control patterns:

| User       | Groups | Purpose |
|------------|--------|---------|
| alex       | sudo, devops, sshusers | Primary admin β€” full sudo, SSH access |
| deploy     | devops, sshusers | Service account for automated deployments (CI/CD) |
| monitoring | logging | Restricted local account for monitoring stack β€” no SSH |
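A loop over a structured user list keeps the role data-driven. A hypothetical excerpt (the real variable names may differ, and on Arch the admin group is conventionally wheel rather than sudo):

```yaml
# roles/users/tasks/main.yml (sketch) β€” the user list itself would
# live in group_vars/all.yml so both distributions share it.
- name: Create scoped user accounts
  ansible.builtin.user:
    name: "{{ item.name }}"
    groups: "{{ item.groups }}"
    append: true
    shell: /bin/bash
  loop:
    - { name: alex,       groups: "sudo,devops,sshusers" }
    - { name: deploy,     groups: "devops,sshusers" }
    - { name: monitoring, groups: "logging" }
```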

πŸ“¦ Package Management

Packages are organised by function in OS-specific group variables, with categories for system administration, networking, editors, development tools, and maintenance utilities. The same Ansible tasks install the correct packages per distribution:

| Category         | Debian | Arch |
|------------------|--------|------|
| Package manager  | apt | pacman |
| SSH server       | openssh-server | openssh |
| Build tools      | build-essential | base-devel |
| Password quality | libpam-pwquality | pam (built-in) |
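In practice, both group_vars files define the same variable names with distribution-specific values, and one shared task consumes them. An illustrative sketch (base_packages is an assumed variable name, not necessarily the repo's):

```yaml
# group_vars/debian.yml (sketch):
#   base_packages: [vim, git, tmux, htop, build-essential, openssh-server]
# group_vars/arch.yml (sketch):
#   base_packages: [vim, git, tmux, htop, base-devel, openssh]

# One task works on both distributions: ansible.builtin.package
# delegates to apt or pacman based on the target OS.
- name: Install base packages
  ansible.builtin.package:
    name: "{{ base_packages }}"
    state: present
```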

πŸ”§ Prerequisites

  • Ansible installed on your control node (pip install ansible or apt install ansible)
  • Target machine(s) running Debian 13 and/or Arch Linux minimal with SSH access
  • Python on the target (Debian includes it; Arch requires pacman -S python)
  • An SSH key pair deployed to the target (ssh-copy-id)
  • Git for version control

πŸš€ Getting Started

Clone the repo

git clone git@github.com:Axel92803/forgestack.git
cd forgestack
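The repo ships an ansible.cfg (see the project tree) so the commands below can be run from the repo root without extra flags. A minimal version might look like this (a sketch, not necessarily the committed file):

```ini
[defaults]
inventory  = inventory/hosts.yml
roles_path = roles
```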

Configure inventory

Update inventory/hosts.yml with your target IPs:

all:
  children:
    debian:
      hosts:
        forgestack-deb:
          ansible_host: 192.168.50.10
          ansible_user: alex
          ansible_ssh_private_key_file: ~/.ssh/id_ed25519
    arch:
      hosts:
        forgestack-arch:
          ansible_host: 192.168.50.20
          ansible_user: alex
          ansible_ssh_private_key_file: ~/.ssh/id_ed25519

Test connectivity

# All hosts
ansible all -m ping

# Specific OS group
ansible debian -m ping
ansible arch -m ping

Run the playbook

# Provision all hosts
ansible-playbook playbooks/site.yml

# Target a specific group
ansible-playbook playbooks/site.yml --limit debian
ansible-playbook playbooks/site.yml --limit arch

# Dry run (preview changes without applying)
ansible-playbook playbooks/site.yml --check --diff

πŸ” Verification

After a full playbook run, verify the server state:

# SSH into target
ssh alex@192.168.50.10    # Debian
ssh alex@192.168.50.20    # Arch

# Verify users and groups
id alex
id deploy
id monitoring
getent group devops
getent group sshusers
getent group logging

# Verify sudo configuration
sudo cat /etc/sudoers.d/forgestack

# Verify password policy
sudo chage -l alex
grep PASS_ /etc/login.defs

# Verify packages
which vim git tmux htop

# The real test β€” run the playbook again
ansible-playbook playbooks/site.yml
# Expected: zero changes

πŸ—ΊοΈ Roadmap

Phase 1 β€” Infrastructure as Code (current)

  • Project scaffold and Ansible connectivity
  • Multi-OS inventory (Debian + Arch Linux)
  • Base system role (updates, packages, hostname, timezone)
  • User and permission management role (users, groups, sudo, password policy)
  • SSH hardening role
  • Firewall configuration role
  • AppArmor enforcement role
  • Monitoring deployment role
  • Full end-to-end validation on fresh VMs

Phase 1 β€” Extras

  • Ansible Vault for secrets management
  • Molecule testing framework
  • CI/CD pipeline via GitHub Actions
  • Terraform provisioning layer
  • Documentation site (MkDocs / GitHub Pages)

Phase 2 β€” Containerised AI Inference Server (planned)

  • Docker role
  • Ollama deployment role
  • Reverse proxy role (Nginx/Caddy + TLS)
  • Authentication layer
  • Health checks and logging



🧠 Design Decisions

Why Hyper-V over VirtualBox? WSL2 requires Hyper-V's hypervisor platform, which degrades VirtualBox to a compatibility mode. Since Hyper-V is already running, using it natively avoids the performance penalty.

Why dual-adapter networking? Internal Switch provides a stable, predictable IP for Ansible (192.168.50.10 / .20). Default Switch provides internet access for package downloads. Clean separation of concerns.

Why Debian and Arch? Debian is the direct continuation from Born2beRoot, familiar territory for learning Ansible. Arch is the opposite extreme: rolling release, no installer, everything manual. Supporting both from one codebase demonstrates real-world multi-platform automation and uses Ansible's when: conditionals and group variables extensively.

Why this role structure? Each role maps to a Born2beRoot requirement, making the progression from manual to automated explicit. Roles are self-contained so Phase 2 roles compose alongside them without refactoring.

Why three users? alex (admin), deploy (CI/CD service account), and monitoring (restricted local account) reflect how real teams structure access. Each has scoped permissions following the principle of least privilege, rather than giving every account full sudo.

Why separate group_vars per OS? Package names, file paths, and service names differ between distributions. Keeping OS-specific values in debian.yml and arch.yml while sharing everything else in all.yml means tasks reference the same variable names regardless of target, the right values get injected based on inventory group membership.


Author: Alex Tanvuia (Ionut Tanvuia)

42 Login: itanvuia

School: 42 London

42 Profile

Built on the foundations of Born2beRoot. Part of my journey through 42 School's peer-learning curriculum. Check out my other projects on my GitHub profile!
