Merged
11 changes: 9 additions & 2 deletions app/components/AppHeader.vue
@@ -21,8 +21,15 @@ const { header } = useAppConfig()
v-if="header?.logo?.dark || header?.logo?.light || header?.title"
#title
>
<div v-if="header?.logo?.dark || header?.logo?.light" class="flex items-center gap-2 shrink-0">
<img src="/icon.svg" alt="fsbackup icon" class="h-7 w-auto" />
<div
v-if="header?.logo?.dark || header?.logo?.light"
class="flex items-center gap-2 shrink-0"
>
<img
src="/icon.svg"
alt="fsbackup icon"
class="h-7 w-auto"
>
<span class="font-semibold text-base tracking-tight">fsbackup</span>
</div>

1 change: 1 addition & 0 deletions components/OgImage/Docs.vue
@@ -1,3 +1,4 @@
<!-- eslint-disable vue/multi-word-component-names -->
<script lang="ts" setup>
const props = withDefaults(defineProps<{ title?: string, description?: string, headline?: string }>(), {
title: 'title',
55 changes: 30 additions & 25 deletions content/1.getting-started/1.index.md
@@ -4,43 +4,48 @@ description: An overview of fsbackup — what it does, how it works, and what it
---


fsbackup is an rsync-based snapshot backup system for home labs. It runs as a Docker container and pulls backups from remote hosts over SSH, storing them as hard-linked directory snapshots on a local drive.
fsbackup is a ZFS-native rsync backup system for home labs. It runs directly on the backup server as a systemd service, pulls backups from remote hosts over SSH, and stores them as ZFS snapshots on a local drive.

## How it works

fsbackup uses rsync's `--link-dest` option to create space-efficient snapshots. Each snapshot is a full directory tree, but unchanged files are hard-linked to the previous snapshot — so only new or changed data takes up additional space.
fsbackup rsyncs each target into a dedicated ZFS dataset. After a successful sync, it creates a ZFS snapshot named for the type and date — for example, `@daily-2026-03-23` or `@weekly-2026-W12`. ZFS shares unchanged blocks between snapshots through its copy-on-write design, making storage efficient without the fragility of hard-link trees.
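
The naming scheme can be sketched in shell. The dataset path below is illustrative, and the date formats are inferred from the `@daily-2026-03-23` / `@weekly-2026-W12` examples:

```bash
# Build snapshot names in the documented type-plus-date style
# (dataset path is an example; date formats inferred from the patterns above)
dataset="backup/snapshots/class1/myhost"
daily_snap="daily-$(date +%F)"        # e.g. daily-2026-03-23
weekly_snap="weekly-$(date +%G-W%V)"  # ISO week, e.g. weekly-2026-W12
echo "$dataset@$daily_snap"
# A real run would then execute: zfs snapshot "$dataset@$daily_snap"
```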

Snapshots are organized into tiers:
Snapshots are organized by type:

| Tier | Frequency | Retention |
|------|-----------|-----------|
| Daily | Every day | 14 days |
| Weekly | Promoted on Monday | 8 weeks |
| Monthly | Promoted on the 1st | 12 months |
| Annual | Promoted from December | Indefinite |
| Type | Frequency | Default retention |
|------|-----------|-------------------|
| Daily | Every day | 14 snapshots |
| Weekly | Weekly runner | 8 snapshots |
| Monthly | Monthly runner | 12 snapshots |
| Annual | *(configure as needed)* | Unlimited |

Retention is enforced daily by `fs-retention.sh`, which uses `zfs destroy` to remove the oldest snapshots beyond the configured keep count.
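
The keep-newest-N idea can be illustrated with a small stub. This is a sketch of the logic only, not the actual `fs-retention.sh`; the snapshot list and `KEEP` value are invented:

```bash
# Keep the newest $KEEP daily snapshots of one dataset, destroy the rest.
# The fixed list stands in for: zfs list -H -o name -t snapshot | ...
KEEP=2
snaps="daily-2026-03-10
daily-2026-03-11
daily-2026-03-12"
doomed="$(printf '%s\n' "$snaps" | sort | head -n -"$KEEP")"
for s in $doomed; do
  # a real run would call: zfs destroy "backup/snapshots/class1/myhost@$s"
  echo "destroy @$s"   # prints: destroy @daily-2026-03-10
done
```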

## Data classes

Backup targets are grouped into classes. Each class has its own schedule and can be configured independently:
Backup targets are grouped into classes. Each class has its own schedule and set of systemd timers:

- **class1** — application data, databases, personal files (default: daily)
- **class1** — application data, databases, personal files (default: daily + weekly + monthly)
- **class2** — infrastructure config: Docker stacks, nginx, DNS zones (default: daily)
- **class3** — large archives: photos, video libraries, media collections (default: monthly, mirroring optional)

## What runs in the container
- **class3** — large archives: photos, video libraries, media collections (default: monthly)

The Docker image contains:
## What runs on the server

- All backup scripts (`fs-runner.sh`, `fs-mirror.sh`, `fs-doctor.sh`, etc.)
- [supercronic](https://github.com/aptible/supercronic) for cron scheduling
- A FastAPI + HTMX web UI for monitoring and management
- rsync, age, zstd, and AWS CLI for S3 export
fsbackup runs directly on the backup server as the `fsbackup` system user (UID 993). There is no Docker container. The system consists of:

## What stays on the host
- Bash scripts in `/opt/fsbackup/bin/` and `/opt/fsbackup/utils/`
- systemd service and timer units (one per class and job type)
- A FastAPI + HTMX web UI (`fsbackup-web.service`)

- Config files (`/etc/fsbackup/`)
- SSH keys and AWS credentials (`/var/lib/fsbackup/`)
- Snapshot storage (`/backup/snapshots`, `/backup2/snapshots`)
- Prometheus metrics files (`/var/lib/node_exporter/textfile_collector/`)
## What lives where

All of these are bind-mounted into the container — the container itself is stateless.
| Path | Contents |
|------|----------|
| `/etc/fsbackup/fsbackup.conf` | Main configuration |
| `/etc/fsbackup/targets.yml` | Backup targets |
| `/etc/fsbackup/age.pub` | age public key for S3 encryption |
| `/var/lib/fsbackup/.ssh/` | SSH keys |
| `/var/lib/fsbackup/.aws/` | AWS credentials |
| `/var/lib/fsbackup/log/` | Job logs |
| `/backup/snapshots/<class>/<target>/` | ZFS datasets |
| `/var/lib/node_exporter/textfile_collector/` | Prometheus metrics |
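
As a rough orientation, `targets.yml` groups targets under their class. The field names below are hypothetical (this page does not document the schema), so treat this purely as a shape sketch:

```yaml
# Hypothetical shape only; field names are NOT confirmed by this page
class1:
  - name: myhost-appdata
    host: myhost.lan
    path: /var/lib/app
class3:
  - name: nas-photos
    host: nas.lan
    path: /srv/photos
```
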
39 changes: 21 additions & 18 deletions content/1.getting-started/2.requirements.md
@@ -8,8 +8,8 @@ description: What you need to run fsbackup — software prerequisites, disk reco

| Requirement | Notes |
|-------------|-------|
| Linux (Ubuntu/Debian recommended) | The backup server host OS |
| Docker Engine + Docker Compose v2 | Required for the Docker deployment |
| Linux (Ubuntu 22.04+ / Debian 12+) | The backup server host OS |
| ZFS (OpenZFS) | For snapshot storage — `zfsutils-linux` package |
| rsync on remote hosts | Version 3.x recommended |
| SSH access to remote hosts | fsbackup pulls via `backup` user over SSH |
| node_exporter (optional) | For Prometheus metrics |
@@ -20,29 +20,33 @@ fsbackup is designed to run on a dedicated backup server — a machine that exis

### CPU and RAM

fsbackup is not CPU or RAM intensive. A modest machine (2+ cores, 4GB RAM) is more than sufficient. The bottleneck is almost always disk or network I/O.
fsbackup is not CPU or RAM intensive. A modest machine (2+ cores, 4 GB RAM) is more than sufficient. The bottleneck is almost always disk or network I/O. ZFS ARC will use available RAM for read caching — more RAM helps with frequent restores.
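
If the server has other duties and you want to bound ARC growth, OpenZFS exposes the `zfs_arc_max` module parameter. A sketch, where the 2 GiB cap is only an example value:

```
# /etc/modprobe.d/zfs.conf: cap ZFS ARC at 2 GiB (example value)
options zfs zfs_arc_max=2147483648
```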

### Storage — primary drive
### Storage — ZFS pool

The primary backup drive is where snapshots are written. **CMR (Conventional Magnetic Recording) drives are strongly recommended** for the primary drive.
All snapshots are stored on a ZFS pool. The recommended topology for a two-disk setup is a **mirrored vdev**: both disks hold identical data, and a single disk failure is fully survivable.

```bash
# Create a mirrored pool on two disks
sudo zpool create -o ashift=12 backup mirror /dev/sdc /dev/sdf
sudo zfs create backup/snapshots
```

**CMR (Conventional Magnetic Recording) drives are strongly recommended.**

::u-callout{icon="i-lucide-alert-triangle" color="orange"}
**Avoid SMR drives for your primary snapshot drive.** SMR (Shingled Magnetic Recording) drives use a write-cache that causes severe performance degradation under rsync's random-write workload, especially as the drive fills. This can cause backup jobs to run for hours instead of minutes.
**Avoid SMR drives for ZFS.** SMR (Shingled Magnetic Recording) drives use a write-cache that causes severe performance degradation under ZFS's random-write workload, especially during resilver and scrub operations. This can cause jobs that should take minutes to run for hours.
::

Recommended drives for primary backup storage:
Recommended drives:
- NAS-rated CMR drives (Seagate IronWolf, WD Red Plus/Pro)
- Any CMR drive 4TB+ for meaningful retention

### Storage — mirror drive

The mirror drive receives a copy of primary snapshots. The same CMR recommendation applies. A smaller drive is acceptable if you only mirror selected classes or tiers.
- Any CMR drive 4 TB+ for meaningful retention depth

### Storage — NOT recommended

- **SMR desktop drives** (Seagate BarraCuda, WD Blue) — avoid for snapshot storage; acceptable for archival/sequential use only
- **USB drives** — too slow for regular rsync workloads
- **Network shares (NFS/SMB)** — not supported as snapshot storage
- **SMR desktop drives** (Seagate BarraCuda, WD Blue) — avoid for ZFS; acceptable for archival/sequential use only
- **USB drives** — too slow and unreliable for regular rsync + ZFS workloads
- **Network shares (NFS/SMB)** — not supported as ZFS storage

## Network

@@ -56,6 +60,5 @@ The following are not currently supported:

- **Windows hosts as backup targets** — fsbackup uses rsync over SSH; WSL2 can work but is not tested
- **Pushing backups** (agent on source host pushes to fsbackup) — fsbackup is pull-only
- **NFS/CIFS as snapshot storage** — the hard-link snapshot model requires a POSIX filesystem
- **Databases without Docker** — `fs-db-export.sh` uses `docker exec` to dump databases; bare-metal databases require a custom pre-backup script
- **Incremental S3 sync** — S3 export is full snapshot upload per tier/target; no block-level deduplication
- **Non-ZFS snapshot storage** — v2.0 requires ZFS; the old hard-link snapshot model is no longer supported
- **Incremental S3 sync** — S3 export is a full snapshot upload per tier/target; no block-level deduplication
110 changes: 46 additions & 64 deletions content/2.installation/1.quickstart.md
@@ -1,103 +1,85 @@
---
title: Quick start
description: Get fsbackup running in under 15 minutes with Docker.
description: Get fsbackup running in under 20 minutes with the automated installer.
---


This guide gets fsbackup running with Docker in under 15 minutes.
This guide gets fsbackup running on a bare-metal Linux server using the automated installer.

For a full installation reference see [Docker installation](/installation/docker) or [Bare-metal installation](/installation/bare-metal).
For detailed steps see [Installation](/installation/bare-metal).

## 1. Create the fsbackup user
## Prerequisites

```bash
sudo useradd -r -m --uid 993 -d /var/lib/fsbackup -s /bin/bash fsbackup
```
- Ubuntu 22.04+ or Debian 12+ (bare-metal or VM)
- A ZFS pool with a `snapshots` dataset (see [ZFS pool setup](/installation/zfs-pool))
- SSH access to the machines you want to back up

The UID **must be 993** to match the user baked into the Docker image.

## 2. Generate the SSH keypair
## 1. Run the installer

```bash
sudo -u fsbackup ssh-keygen -t ed25519 \
-f /var/lib/fsbackup/.ssh/id_ed25519_backup -N ""
curl -fsSL https://raw.githubusercontent.com/fsbackup/fsbackup/main/bin/fs-install.sh | sudo bash
```

## 3. Create directories
The installer will:

```bash
sudo mkdir -p /etc/fsbackup/db
sudo mkdir -p /backup/snapshots/{daily,weekly,monthly,annual}
sudo mkdir -p /backup2/snapshots/{daily,weekly,monthly,annual}
sudo chown -R fsbackup:fsbackup /backup/snapshots /backup2/snapshots /var/lib/fsbackup
```
1. Install required packages (`rsync`, `jq`, `yq`, `zstd`, `age`, `awscli`, `zfsutils-linux`)
2. Create the `fsbackup` system user (UID 993)
3. Install scripts to `/opt/fsbackup/`
4. Create config skeleton in `/etc/fsbackup/`
5. Set up ZFS delegation (`zfs allow`)
6. Install and enable systemd units
7. Apply schedule from `fsbackup.conf`
8. Optionally set up the web UI
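
One generated unit pair might look roughly like the sketch below. The unit name and schedule are assumptions for illustration; the real names and `OnCalendar` values come from the installer and `fsbackup.conf`:

```ini
# Hypothetical fsbackup-daily-class1.timer (name and schedule are assumptions)
[Unit]
Description=fsbackup daily run for class1

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target
```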

## 4. Create config files
## 2. Create ZFS datasets for your targets

```bash
sudo mkdir -p /docker/stacks/fsbackup
```

Create `/docker/stacks/fsbackup/docker-compose.yml`:

```yaml
services:
fsbackup:
image: ghcr.io/fsbackup/fsbackup:latest
container_name: fsbackup
restart: unless-stopped
user: "993:993"
ports:
- "8080:8080"
volumes:
- /etc/fsbackup:/etc/fsbackup
- /var/lib/fsbackup:/var/lib/fsbackup
- /backup/snapshots:/backup/snapshots
- /backup2/snapshots:/backup2/snapshots
- /var/lib/node_exporter/textfile_collector:/var/lib/node_exporter/textfile_collector
sudo /opt/fsbackup/bin/fs-provision.sh --dry-run # preview
sudo /opt/fsbackup/bin/fs-provision.sh # create
```

Create `/etc/fsbackup/fsbackup.conf`:
## 3. Edit config

```bash
SNAPSHOT_ROOT="/backup/snapshots"
SNAPSHOT_MIRROR_ROOT="/backup2/snapshots"
MIRROR_SKIP_CLASSES=""
sudo nano /etc/fsbackup/fsbackup.conf # set S3_BUCKET, schedules, retention
sudo nano /etc/fsbackup/targets.yml # define your backup targets
```

## 5. Start the container
See [fsbackup.conf reference](/configuration/fsbackup-conf) and [Targets](/configuration/targets) for details.
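
For orientation only, a fragment in the file's shell-style `KEY="value"` format might look like this; apart from `S3_BUCKET`, the key names are assumptions, so check the reference for the real ones:

```bash
# Hypothetical fragment; key names other than S3_BUCKET are NOT confirmed here
S3_BUCKET="s3://my-backup-bucket"
DAILY_RETENTION="14"
```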

```bash
cd /docker/stacks/fsbackup
docker compose up -d
```

## 6. Initialize remote hosts
## 4. Trust remote hosts

On each machine you want to back up, run:
For each host you want to back up:

```bash
sudo ./remote/fsbackup_remote_init.sh \
--pubkey-file /var/lib/fsbackup/.ssh/id_ed25519_backup.pub
sudo /opt/fsbackup/utils/fs-trust-host.sh <hostname>
```

## 7. Trust SSH host keys
## 5. Apply ACLs for local paths

If any targets are local paths (e.g. Docker volumes on the same machine):

```bash
docker exec -it fsbackup /opt/fsbackup/utils/fs-trust-host.sh <hostname>
sudo /opt/fsbackup/bin/fs-fix-permissions.sh
```

## 8. Verify and run
::u-callout{icon="i-lucide-alert-triangle" color="orange"}
`fs-fix-permissions.sh` applies `setfacl -R -m u:fsbackup:rX` to each local source path. Review each path before running — on Docker volume paths this grants the fsbackup user read access to all files inside the volume, including any secrets your containers may store there.
::
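
The effect of that ACL can be previewed on a scratch directory. `nobody` stands in for the `fsbackup` user here so the commands run anywhere:

```bash
# Reproduce the grant from the callout on a throwaway directory
dir="$(mktemp -d)"
touch "$dir/app.secret"
setfacl -R -m u:nobody:rX "$dir"          # the real script grants u:fsbackup:rX
getfacl --absolute-names "$dir/app.secret" | grep '^user:nobody'
rm -rf "$dir"
```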

## 6. Verify and run

```bash
# Check all targets are reachable
docker exec -it fsbackup /opt/fsbackup/bin/fs-doctor.sh --class class1
# Check target health
sudo -u fsbackup /opt/fsbackup/bin/fs-doctor.sh --class class1

# Dry run first
docker exec -it fsbackup /opt/fsbackup/bin/fs-runner.sh daily --class class1 --dry-run
# Dry run
sudo -u fsbackup /opt/fsbackup/bin/fs-runner.sh daily --class class1 --dry-run

# Run for real
docker exec -it fsbackup /opt/fsbackup/bin/fs-runner.sh daily --class class1
# First real backup
sudo -u fsbackup /opt/fsbackup/bin/fs-runner.sh daily --class class1
```

The web UI is now available at `http://<host>:8080`.
## 7. Check the web UI

The web UI runs on port 8080. Open `http://<your-server>:8080` in a browser.