Proof of concept
Observing how brittle stack_deploy.py can be, I was curious whether there's an even faster way than one-click-apps to spin up an entire stack on CapRover. I was able to do this by abusing the "Restore from backup" feature of CapRover (which is still "experimental").
- From an existing demo deployment, I generated a backup .tar:

  ```
  $ tree -L 4 caprover-backup-2025_08_06-14_46_08-1754491568402-ip-127_0_0_1
  caprover-backup-2025_08_06-14_46_08-1754491568402-ip-127_0_0_1
  ├── data
  │   ├── config-captain.json
  │   ├── letsencrypt
  │   │   ├── etc
  │   │   │   ├── accounts
  │   │   │   ├── archive
  │   │   │   ├── live
  │   │   │   ├── renewal
  │   │   │   └── renewal-hooks
  │   │   └── lib
  │   ├── nginx-shared
  │   │   └── dhparam.pem
  │   ├── registry
  │   └── shared-logs
  └── meta
      └── backup.json
  ```
- In `data/config-captain.json` I did a string search/replace of the domain names, to use my new deployment's domain (a minimal sketch of this step follows the list).
  - I did not disable SSL (`hasSsl`, `hasDefaultSubDomainSsl`, `forceSsl`).
- I did not try to change `meta/backup.json` or `nginx-shared/dhparam.pem`... worth investigating further.
- The `data/letsencrypt` path is important, even if you were to disable SSL. I was unable to get CapRover to start if I messed with these files too much. What I ended up doing (also sketched after the list) was:
  - Deleted all files under `letsencrypt/etc/renewal/` and `letsencrypt/etc/accounts/`.
  - Renamed the `archive/` PEMs, and similarly renamed the `live/` cert symlinks to match the new domain names, pointing them at the existing archived PEMs.
- Follow the instructions per https://caprover.com/docs/backup-and-restore.html#restoration-process; it took 3-5 minutes to pull docker images and renew SSL certs, but a quick smoke test shows the apps are running and accessible (except for windmill & superset, because I never created their logical databases).
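A minimal sketch of the domain search/replace step, assuming the backup has been unpacked to `caprover-backup/`; both domain names are placeholders, not values from the actual deployment:

```python
from pathlib import Path

# Placeholders: substitute the domains of the old and new deployments.
OLD_DOMAIN = "old-deploy.example.com"
NEW_DOMAIN = "new-deploy.example.com"

config_path = Path("caprover-backup/data/config-captain.json")
config_path.write_text(config_path.read_text().replace(OLD_DOMAIN, NEW_DOMAIN))
```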
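And a sketch of the letsencrypt fix-ups, under the same assumptions. The symlink handling assumes the usual `live/` -> `archive/` relative-link layout; real backups may need adjustment:

```python
import os
import shutil
from pathlib import Path

OLD_DOMAIN = "old-deploy.example.com"  # placeholder
NEW_DOMAIN = "new-deploy.example.com"  # placeholder

etc = Path("caprover-backup/data/letsencrypt/etc")

# Delete everything under renewal/ and accounts/.
for sub in ("renewal", "accounts"):
    for child in (etc / sub).iterdir():
        if child.is_dir():
            shutil.rmtree(child)
        else:
            child.unlink()

# Rename the domain-named entries under archive/ to the new domain.
for old_dir in (etc / "archive").glob(f"*{OLD_DOMAIN}*"):
    old_dir.rename(old_dir.with_name(old_dir.name.replace(OLD_DOMAIN, NEW_DOMAIN)))

# Rename the live/ directories and re-point their cert symlinks at the renamed
# archive PEMs (targets are relative, e.g. ../../archive/<domain>/cert1.pem).
for live_dir in (etc / "live").glob(f"*{OLD_DOMAIN}*"):
    new_dir = live_dir.with_name(live_dir.name.replace(OLD_DOMAIN, NEW_DOMAIN))
    live_dir.rename(new_dir)
    for link in new_dir.iterdir():
        if link.is_symlink():
            target = os.readlink(link).replace(OLD_DOMAIN, NEW_DOMAIN)
            link.unlink()
            link.symlink_to(target)
```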
It works! It automatically sets up (restores):
- maintenance schedule (cleaning docker images)
- custom docker container registry
- SSL and custom domain names
It does not carry over data (i.e. volumes) from the old deployment, so this is just like a fresh install from zero.
To discuss
Ideally, new users could customize their initial stack and therefore use a custom .tar (e.g. opt out of some services; give services passwords unique from other deployments). Is it really an improvement to maintain code that dynamically generates a new .tar (see the sketch below), over maintaining code that uses the CapRover API?
- I think it would be, because it eliminates (from our script, anyway) the whole class of dynamic failures from the CapRover server. Although, it just punts them to later.
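As a strawman for what "dynamically generates a new .tar" could look like: patch a template backup and repack it. The `template/` directory, the `build_backup_tar` name, and the domains below are illustrative, not existing project code:

```python
import tarfile
from pathlib import Path

def build_backup_tar(template_dir: Path, out_tar: Path,
                     old_domain: str, new_domain: str) -> None:
    """Patch a template backup's config and repack it as a CapRover backup tar."""
    config = template_dir / "data" / "config-captain.json"
    config.write_text(config.read_text().replace(old_domain, new_domain))
    # Opting out of services or injecting per-deployment secrets would need
    # real JSON edits here rather than a blind string replace.
    with tarfile.open(out_tar, "w") as tar:
        for entry in template_dir.iterdir():
            # Keep data/ and meta/ at the tar root, matching the layout above.
            tar.add(entry, arcname=entry.name)

build_backup_tar(Path("template"), Path("custom-backup.tar"),
                 "old-deploy.example.com", "new-deploy.example.com")
```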
Advantages for the user:
- Conceptually rolls together the CapRover install with the GC Stack install. (In practice I don't know what the step(s) would look like.)
Limitations
- From the user's point of view, this is slightly more limiting than `stack_deploy`: you have to do this restore-from-backup as part of the initial CapRover server startup. You cannot use it if you are already running a CapRover.
- Does not allow us to drop one-click-apps: I imagine one-click-apps are still necessary for every new app installation after the initial spin-up.
- Cannot handle required database setup (before installing windmill, superset).