This project has been created as a self-directed portfolio project by itanvuia.

# Infrastructure as Code

> "Don't repeat yourself – automate yourself."
>
> – Every engineer who configured the same server twice
ForgeStack takes everything learned in Born2beRoot and encodes it as reproducible, version-controlled automation. Instead of configuring a server by hand (running commands, editing files, hoping you remember every step), you write declarative Ansible playbooks that describe the desired state. One command provisions a fully hardened Linux server from a minimal install.
The project targets both Debian and Arch Linux from a single codebase, using Ansible's conditional task execution and OS-specific group variables to handle the differences between distributions.
This is Phase 1 of a two-phase project. Phase 2 will extend this hardened base to deploy a containerised AI inference server (Ollama + reverse proxy + authentication) on top of the same infrastructure.
- **Configuration management:** Ansible
- **Target OS:** Debian 13 (Trixie) + Arch Linux
- **Virtualisation:** Hyper-V (dual-adapter networking)
- **Control node:** WSL2 Ubuntu via VS Code
- [Why Ansible](#why-ansible)
- [Architecture](#architecture)
- [Project Structure](#project-structure)
- [Roles Overview](#roles-overview)
- [Prerequisites](#prerequisites)
- [Getting Started](#getting-started)
- [Verification](#verification)
- [Roadmap](#roadmap)
- [Resources](#resources)
- [Design Decisions](#design-decisions)
## Why Ansible

The project needed a configuration management tool. Ansible won for three reasons:
**Agentless.** Ansible connects over SSH: no daemon, no client software on the target. The VM needs nothing installed beyond what a minimal OS ships with (plus Python on Arch). This is the same SSH connection I already learned from Born2beRoot.

**Declarative YAML.** Playbooks describe what the server should look like, not how to get there. You can read an Ansible role without a manual. More importantly, a recruiter or colleague can read it too.

**Idempotent.** Running the same playbook twice produces zero changes on the second run. This is the single most important property of good infrastructure automation: it means your playbooks are safe to run at any time, against any state, and the result is always the same.
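As a minimal illustration (a sketch, not a task lifted from this repo), an idempotent task describes state rather than steps:

```yaml
# Sketch only: describes a desired state, not commands to run.
# The first run reports "changed"; every run after that reports "ok".
- name: Ensure tmux is installed
  ansible.builtin.package:
    name: tmux
    state: present
```

Unlike a shell script that reruns the package manager every time, the module first checks the current state and does nothing if it already matches.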
How Ansible compares with the alternatives:

| | Ansible | Terraform | Puppet / Chef |
|---|---|---|---|
| Purpose | Configuration management (what's on the server) | Infrastructure provisioning (the server itself) | Configuration management |
| Agent required | No (SSH only) | No (API-based) | Yes (agent on every node) |
| Language | YAML | HCL | Ruby DSL |
| Learning curve | Low | Medium | High |
| Idempotent | Yes (with proper modules) | Yes (declarative by design) | Yes |
| Best for | Server configuration, application deployment | Cloud resources, VMs, networking | Large-scale enterprise fleet management |
Ansible handles configuration; Terraform handles provisioning. They complement each other. See the Roadmap for plans to add a Terraform layer.
## Architecture

```
┌────────────────────────────────────────────────────────────┐
│                        HYPER-V HOST                        │
│                        (Windows 11)                        │
│                                                            │
│  ┌─────────────────┐   SSH    ┌────────────────────────┐   │
│  │  CONTROL NODE   │─────────▶│  forgestack-deb        │   │
│  │                 │          │  Debian 13 Minimal     │   │
│  │  WSL2 Ubuntu    │          │  192.168.50.10         │   │
│  │  Ansible        │          └────────────────────────┘   │
│  │  VS Code + WSL  │                                       │
│  │                 │   SSH    ┌────────────────────────┐   │
│  │  ~/forgestack/  │─────────▶│  forgestack-arch       │   │
│  │    playbooks/   │          │  Arch Linux            │   │
│  │    roles/       │          │  192.168.50.20         │   │
│  │    inventory/   │          └────────────────────────┘   │
│  └─────────────────┘                                       │
└────────────────────────────────────────────────────────────┘
```

Network layout:

```
Internal Switch (ForgeStack)  192.168.50.0/24  ──▶  Ansible traffic
Default Switch  (DHCP)                         ──▶  Internet access
```
The control node (WSL2) sends Ansible commands over SSH to both targets. Each VM has two network adapters: an Internal Switch with a static IP for Ansible (predictable, never changes) and a Default Switch for internet access (DHCP, used for package downloads). The playbook targets all hosts and uses OS-specific conditionals and group variables to handle differences between distributions.
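Both techniques show up throughout the roles. A minimal sketch of the conditional style (the module choices here are illustrative; the fact values are standard Ansible):

```yaml
# Sketch: the same play handles both distros with conditional tasks.
- name: Update package cache (Debian)
  ansible.builtin.apt:
    update_cache: true
  when: ansible_facts['os_family'] == 'Debian'

- name: Update package cache (Arch)
  community.general.pacman:
    update_cache: true
  when: ansible_facts['os_family'] == 'Archlinux'
```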
## Project Structure

```
forgestack/
├── ansible.cfg                    # Ansible configuration (roles path, inventory)
├── inventory/
│   ├── hosts.yml                  # Multi-OS target inventory
│   └── group_vars/
│       ├── all.yml                # Shared variables (users, sudo, password policy)
│       ├── debian.yml             # Debian-specific packages
│       └── arch.yml               # Arch-specific packages
├── playbooks/
│   └── site.yml                   # Master playbook: targets all hosts
├── roles/
│   ├── base/                      # OS updates, essential packages, hostname, timezone
│   │   └── tasks/
│   │       └── main.yml
│   ├── users/                     # User creation, sudo, groups, password policy
│   │   ├── tasks/
│   │   │   └── main.yml
│   │   └── templates/
│   │       ├── sudoers.j2         # Sudo rules template
│   │       └── pwquality.conf.j2  # Password complexity template (Arch)
│   ├── ssh/                       # SSH hardening (planned)
│   ├── firewall/                  # UFW / iptables rules (planned)
│   ├── security/                  # AppArmor enforcement (planned)
│   └── monitoring/                # Cron-based monitoring (planned)
└── README.md
```
Each role is self-contained with its own tasks, handlers, and templates. This structure is designed for extensibility. Phase 2 roles (docker/, ollama/, reverse_proxy/) will slot in alongside existing roles without modifying them.
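Given that layout, the master playbook stays a thin list of roles. A sketch of what `playbooks/site.yml` could look like (exact contents assumed; role names taken from the tree above):

```yaml
# playbooks/site.yml (sketch): one play, all hosts, roles applied in order.
- name: Provision hardened base servers
  hosts: all
  become: true
  roles:
    - base
    - users
    # Planned Phase 1 roles slot in here without touching the ones above:
    # - ssh
    # - firewall
    # - security
    # - monitoring
```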
## Roles Overview

| Role | What It Does | Multi-OS | Born2beRoot Equivalent |
|---|---|---|---|
| base | System updates, essential packages, hostname, timezone | Yes (`apt` / `pacman` conditionals) | First steps after install |
| users | User accounts, groups, sudo rules, SSH keys, password policy | Yes (`pam.d` / `pwquality.conf`) | User & group management, sudoers config |
| ssh | Custom port, key-only auth, root login disabled | Planned | SSH hardening on port 4242 |
| firewall | Deny-all default, allow only SSH port | Planned | UFW configuration |
| security | AppArmor enabled and enforcing at boot | Planned | AppArmor setup |
| monitoring | Deploy monitoring script, configure cron schedule | Planned | monitoring.sh + cron |
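To make "planned" concrete, here is a hedged sketch of what the `ssh` role's main task could look like. Only the port and auth rules come from the table above; the handler name and option list are assumptions:

```yaml
# roles/ssh/tasks/main.yml (sketch; this role is not implemented yet)
- name: Harden sshd_config
  ansible.builtin.lineinfile:
    path: /etc/ssh/sshd_config
    regexp: '^#?{{ item.key }}\s'
    line: "{{ item.key }} {{ item.value }}"
    validate: /usr/sbin/sshd -t -f %s
  loop:
    - { key: Port, value: "4242" }
    - { key: PermitRootLogin, value: "no" }
    - { key: PasswordAuthentication, value: "no" }
  notify: Restart sshd   # handler assumed: restarts the sshd service
```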
The playbook provisions three accounts with scoped access, reflecting real-world access control patterns:
| User | Groups | Purpose |
|---|---|---|
| `alex` | `sudo`, `devops`, `sshusers` | Primary admin: full sudo, SSH access |
| `deploy` | `devops`, `sshusers` | Service account for automated deployments (CI/CD) |
| `monitoring` | `logging` | Restricted local account for the monitoring stack; no SSH |
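A sketch of how the `users` role can drive this table from a single loop. The inline list here mirrors the table; the real role presumably reads it from `group_vars/all.yml`:

```yaml
# roles/users/tasks/main.yml (sketch; assumes the groups were already
# created earlier in the role with ansible.builtin.group)
- name: Create scoped user accounts
  ansible.builtin.user:
    name: "{{ item.name }}"
    groups: "{{ item.groups }}"
    append: true
    shell: /bin/bash
    state: present
  loop:
    - { name: alex,       groups: "sudo,devops,sshusers" }
    - { name: deploy,     groups: "devops,sshusers" }
    - { name: monitoring, groups: "logging" }
```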
Packages are organised by function in OS-specific group variables, with categories for system administration, networking, editors, development tools, and maintenance utilities. The same Ansible tasks install the correct packages per distribution:
| Category | Debian | Arch |
|---|---|---|
| Package manager | `apt` | `pacman` |
| SSH server | `openssh-server` | `openssh` |
| Build tools | `build-essential` | `base-devel` |
| Password quality | `libpam-pwquality` | `pam` (built-in) |
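The mechanism behind the table: each group_vars file defines the same variable name with distro-specific contents, so the tasks never branch. A sketch, with `system_packages` as an assumed variable name:

```yaml
# inventory/group_vars/debian.yml (sketch)
system_packages:
  - openssh-server
  - build-essential
  - libpam-pwquality
```

```yaml
# inventory/group_vars/arch.yml (sketch)
system_packages:
  - openssh
  - base-devel
```

```yaml
# roles/base/tasks/main.yml (sketch): one task, correct packages per distro
- name: Install system packages
  ansible.builtin.package:
    name: "{{ system_packages }}"
    state: present
```

Because each host belongs to the `debian` or `arch` inventory group, Ansible resolves `system_packages` to the right list automatically.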
## Prerequisites

- Ansible installed on your control node (`pip install ansible` or `apt install ansible`)
- Target machine(s) running Debian 13 and/or Arch Linux minimal with SSH access
- Python on the target (Debian includes it; Arch requires `pacman -S python`)
- An SSH key pair deployed to the target (`ssh-copy-id`)
- Git for version control
## Getting Started

Clone the repository:

```bash
git clone git@github.com:Axel92803/forgestack.git
cd forgestack
```

Update `inventory/hosts.yml` with your target IPs:

```yaml
all:
  children:
    debian:
      hosts:
        forgestack-deb:
          ansible_host: 192.168.50.10
          ansible_user: alex
          ansible_ssh_private_key_file: ~/.ssh/id_ed25519
    arch:
      hosts:
        forgestack-arch:
          ansible_host: 192.168.50.20
          ansible_user: alex
          ansible_ssh_private_key_file: ~/.ssh/id_ed25519
```

Test connectivity:

```bash
# All hosts
ansible all -m ping

# Specific OS group
ansible debian -m ping
ansible arch -m ping
```

Run the playbook:

```bash
# Provision all hosts
ansible-playbook playbooks/site.yml

# Target a specific group
ansible-playbook playbooks/site.yml --limit debian
ansible-playbook playbooks/site.yml --limit arch

# Dry run (preview changes without applying)
ansible-playbook playbooks/site.yml --check --diff
```

## Verification

After a full playbook run, verify the server state:
```bash
# SSH into target
ssh alex@192.168.50.10   # Debian
ssh alex@192.168.50.20   # Arch

# Verify users and groups
id alex
id deploy
id monitoring
getent group devops
getent group sshusers
getent group logging

# Verify sudo configuration
sudo cat /etc/sudoers.d/forgestack

# Verify password policy
sudo chage -l alex
grep PASS_ /etc/login.defs

# Verify packages
which vim git tmux htop

# The real test: run the playbook again
ansible-playbook playbooks/site.yml
# Expected: zero changes
```

## Roadmap

Phase 1 (hardened base):

- [x] Project scaffold and Ansible connectivity
- [x] Multi-OS inventory (Debian + Arch Linux)
- [x] Base system role (updates, packages, hostname, timezone)
- [x] User and permission management role (users, groups, sudo, password policy)
- [ ] SSH hardening role
- [ ] Firewall configuration role
- [ ] AppArmor enforcement role
- [ ] Monitoring deployment role
- [ ] Full end-to-end validation on fresh VMs

Tooling and future enhancements:

- [ ] Ansible Vault for secrets management
- [ ] Molecule testing framework
- [ ] CI/CD pipeline via GitHub Actions
- [ ] Terraform provisioning layer
- [ ] Documentation site (MkDocs / GitHub Pages)

Phase 2 (containerised AI inference server):

- [ ] Docker role
- [ ] Ollama deployment role
- [ ] Reverse proxy role (Nginx/Caddy + TLS)
- [ ] Authentication layer
- [ ] Health checks and logging
## Resources

- Ansible Documentation – the canonical reference
- Ansible for DevOps (Jeff Geerling) – the best practical Ansible book
- Jeff Geerling's YouTube channel – video tutorials from beginner to advanced
- The Debian Administrator's Handbook – comprehensive Debian guide
- Arch Wiki – the best Linux documentation on the internet, useful regardless of distro
- Ansible Galaxy – community-contributed roles (study well-written ones for patterns)
- Jinja2 Documentation – the template engine used by Ansible
## Design Decisions

**Why Hyper-V over VirtualBox?** WSL2 requires Hyper-V's hypervisor platform, which degrades VirtualBox to a compatibility mode. Since Hyper-V is already running, using it natively avoids the performance penalty.

**Why dual-adapter networking?** The Internal Switch provides a stable, predictable IP for Ansible (192.168.50.10 / .20). The Default Switch provides internet access for package downloads. Clean separation of concerns.

**Why Debian and Arch?** Debian is the direct continuation from Born2beRoot, familiar territory for learning Ansible. Arch is the opposite extreme: rolling release, no guided installer, everything manual. Supporting both from one codebase demonstrates real-world multi-platform automation and exercises Ansible's `when:` conditionals and group variables extensively.

**Why this role structure?** Each role maps to a Born2beRoot requirement, making the progression from manual to automated explicit. Roles are self-contained, so Phase 2 roles compose alongside them without refactoring.

**Why three users?** `alex` (admin), `deploy` (CI/CD service account), and `monitoring` (restricted local account) reflect how real teams structure access. Each has scoped permissions following the principle of least privilege, rather than giving every account full sudo.

**Why separate group_vars per OS?** Package names, file paths, and service names differ between distributions. Keeping OS-specific values in `debian.yml` and `arch.yml` while sharing everything else in `all.yml` means tasks reference the same variable names regardless of target; the right values are injected based on inventory group membership.
**Author:** Alex Tanvuia (Ionut Tanvuia)  
**42 Login:** itanvuia  
**School:** 42 London

Built on the foundations of Born2beRoot. Part of my journey through 42 School's peer-learning curriculum. Check out my other projects on my GitHub profile!