This document will explain how to set up Wikijump on your machine for local development.
The `install` folder has everything you need to run a local Wikijump install, whether in containers, on bare metal, or in a VM.
The recommended way to install Wikijump is via Docker. Docker runs software in Linux containers (on Windows and macOS, it also virtualizes a Linux environment to host them). It lets you easily create and destroy different Wikijump builds, and it also acts as a sandbox that protects the rest of your system from dependency pollution.
On Windows, you will need WSL2, which runs a Linux distribution alongside Windows; this requires Windows 10 or 11. The only alternative to WSL2 would be a Linux virtual machine such as VirtualBox, which is not covered by this document.
It is recommended that you use Ubuntu for the WSL2 distribution. Ubuntu in particular has considerations made for WSL2 use and in general will be the most reliable way forward.
You will need Docker and Docker Compose installed. See Docker's documentation on installation.
Once it's installed, ensure it is running:
| systemd Distros | WSL2 |
|---|---|
| `sudo systemctl enable --now docker.service` | `sudo service docker start` |
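To confirm that the daemon is actually reachable before continuing, a quick check like the following can help (a sketch; `docker info` talks to the daemon and exits non-zero when it is down):

```shell
# Check whether the Docker daemon is reachable. `docker info` contacts the
# daemon and fails if it is not running.
if docker info >/dev/null 2>&1; then
    echo "Docker daemon is running"
else
    echo "Docker daemon is not reachable; try starting the service" >&2
fi
```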
The helper script `install/local/deploy.py` is the primary way to manage a local Wikijump installation.
Under the hood, it is running the following:
```shell
$ [sudo] docker-compose -p wikijump -f docker-compose.yaml [-f docker-compose.dev.yaml] <action>
```

This uses the specifications in `install/local/docker-compose.yaml` (and by default, also `install/local/docker-compose.dev.yaml`) to start up a series of Docker containers containing each of the components for Wikijump.
A few notes:
- On some systems, `sudo` is required to run Docker, but on others it is not.
- The `-f docker-compose.dev.yaml` configuration file provides container bindings for development. For instance, if you modify `deepwell/src/` files locally, then those changes will be reflected in the container.
The "action" corresponds to actions that docker-compose can do. Some common actions include:

- `build` — Build new Docker images for each of Wikijump's containers.
- `up` — Create and start containers for Wikijump. If `build` has not been previously run, it is executed first.
- `up --build` — Like `up`, but always rebuilds first.
- `start` — Start any already-existing containers for Wikijump.
- `stop` — Stop currently-running containers for Wikijump.
- `down` — Stop and delete any containers for Wikijump.
Note that `docker-compose.yaml` contains configuration options for the domains to use. For development purposes, these are set to `wikijump.localhost`. This is the domain you will be connecting to, e.g. https://www.wikijump.localhost. The `.localhost` TLD behaves just like the usual `localhost` domain, resolving to your own machine. HTTPS is used even when running locally; because the certificate is self-signed, you will need to dismiss your browser's certificate warning.
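Most modern resolvers map any `*.localhost` name to the loopback address; if yours does not, you can check and fall back to an `/etc/hosts` entry (a sketch; the domain is the development default mentioned above):

```shell
# Check that the development domain resolves; if it does not, print the
# /etc/hosts line you would need to add.
if getent hosts www.wikijump.localhost >/dev/null 2>&1; then
    echo "www.wikijump.localhost resolves"
else
    echo "add '127.0.0.1 www.wikijump.localhost' to /etc/hosts"
fi
```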
Thus, you can run the following to start a local instance:
```shell
$ install/local/deploy.py up
```
Because this first needs to build the images and sources, this will take some time. Docker's build cache will prevent future rebuilds from taking as long, though there are circumstances where a full rebuild will be necessary (such as updated dependencies).
You may encounter various errors involving file permissions if using Windows-based tools alongside WSL2. It is recommended that you fix these issues either by restoring the correct file permissions on anything Windows may have modified, or by re-cloning the repository entirely within WSL2.
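One way to repair ownership from inside WSL2 is to hand the whole checkout back to your Linux user (a sketch; `~/wikijump` is an assumed clone location, so adjust the path to yours):

```shell
# Reclaim ownership of every file in the checkout for the current user.
# ~/wikijump is an assumed path; replace it with your clone's location.
cd ~/wikijump
sudo chown -R "$(id -un):$(id -gn)" .
```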
Once everything has started, you can connect to https://www.wikijump.localhost/. Changes you make to the codebase should automatically be applied to the containers, as your machine's filesystem has been "bound" to the containers' filesystems (see `docker-compose.dev.yaml`). This is one-way, so a container can't modify your filesystem. Note that adding new dependencies will require a rebuild.
You can kill the terminal (CTRL + C usually) when you want to stop the server.
If you want to entirely reset the containers, as their data is otherwise persistent even across restarts, you can run the following:
```shell
$ install/local/deploy.py down
```

It's useful to keep track of existing Docker images and containers, and to destroy them when you no longer need them, so you don't waste space rebuilding the same image over and over. If you are using Docker Desktop, you can manage containers and images from the GUI. Otherwise, use the command line:
```shell
$ docker container ls   # List containers
$ docker rm [ID]        # Destroy the container with this ID
$ docker images         # List images
$ docker rmi [ID]       # Remove the image with this ID
```
You can also use `docker system prune`, which deletes everything unused, though this should be used sparingly.
If you're developing software, you will need utilities associated with the relevant project(s) you're working on:
- Rust (and Cargo), stable (if developing `deepwell` or `wws`)
- Node.js (and npm), v15 or greater (if developing `framerail`)
- PNPM v6 (if developing `framerail`)
These are the same tools the Docker images use to build and run their respective services; having them locally enables you to run the autoformatter, build on your machine if you have the right dependencies, and so on.
While you can run services in Docker, it is possible to run services directly on your local machine. One requirement for doing so is initializing needed configuration files.
For `deepwell`, these are your `config.toml` and `.env` files. You can glance through `config.example.toml` and `.env.example` to see what is expected.
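One way to initialize them is to copy the shipped examples and then edit the copies (a sketch; the paths assume the example files live in the `deepwell/` directory, which may differ in your checkout):

```shell
# Create real config files from the shipped examples, then edit them to taste.
# The deepwell/ path is an assumption; adjust to where the example files live.
cd deepwell
cp config.example.toml config.toml
cp .env.example .env
```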
If you want to enter a container to make temporary changes, you can do so by entering it with a CLI. From Docker Desktop, after running the containers, find the Wikijump app and within it the container you wish to enter, then click the 'CLI' button. Or from the command line:
```shell
$ docker exec -it [container id] sh
```
...where `[container id]` is the ID of the corresponding container from `docker ps`. (`sh` can be replaced with a different command.)
One reason you may need to enter the container is to manually adjust the Wikijump config. For example, if you use a port other than 80 for your Docker container, you will need to edit `site.custom_domain` to add the port number (e.g. `"www.wikijump.localhost:8080"`). Alternatively, pass the domain directly in curl's Host header (e.g. `-H 'Host: www.wikijump.localhost:8080'`).
Sometimes when starting a container locally, it will report being "unhealthy", leaving your service nonfunctional. There are a few possible causes to check when troubleshooting:
- If the container is not `deepwell`, it may be that `deepwell` is unhealthy and a dependent container is unable to connect to it.
- Since builds are done at runtime, the initial build a container does (before it can do faster incremental builds) can take a long time. In some cases, Docker will decide that the service has taken too long and mark it unhealthy. Try restarting it to allow the build to finish.
- If the build fails (say, due to a compiler error), the service will not become healthy. If the failure happened a while ago, the logs will not be visible at the bottom of your screen; check `./deploy logs [container]` to see what the issue may be.
- The service may be trying to access a file which is unavailable. Check that the file hasn't been removed locally, or, if you are not using local binding (i.e. `--no-dev`), that the files have been manually installed into the container.
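A quick way to see which container is unhealthy, and then pull its logs, might look like the following (a sketch; the name filter assumes the compose project name `wikijump` used by `deploy.py`, and the `logs` action is the one mentioned above):

```shell
# List Wikijump containers with their status; health shows in the Status
# column. The name filter assumes the compose project is called "wikijump".
docker ps --filter "name=wikijump" --format "table {{.Names}}\t{{.Status}}"

# Then inspect the logs of a suspect container, e.g. deepwell:
install/local/deploy.py logs deepwell
```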
Once you have a local instance of Wikijump running, you may wish to make curl requests against it to test various pieces of its functionality. However, there are a few considerations, given the deployment setup:
- Wikijump is host-sensitive. Hitting the same web route with two different domains will result in different content (e.g. `scp-sandbox-3.wikidot.com` and `scp-jp.wikidot.com` are different sites).
- Caddy serves self-signed certificates locally. Naturally, as this is not deployed on the open web, there are no "real" TLS certificates, but Wikijump is designed to be HTTPS-only, so we must use HTTPS even locally.
- Svelte does not assume any accepted content types, so an explicit Accept header is needed. Any framerail calls need this, but wjfiles does not.
Thus, the "standard" curl request for a web page would look something like the following:
```shell
$ curl -i -k -H 'Host: scpwiki.localhost' -H 'Accept: text/html' https://localhost/scp-002
```
The `-k` argument addresses point 2, the `Host` header addresses point 1, and the `Accept` header addresses point 3. This request effectively fetches https://scpwiki.localhost/scp-002 for you. Naturally, you can replace the route after `localhost` with whatever you are requesting (say `/-/file/scp-001/fractal-mka.jpeg`).
If you enable multi-factor authentication on a local container, you may find that the clock drift is too great for TOTP codes to work. From Docker's WSL2 distribution (`wsl -d docker-desktop`), you can run this command to sync the clock:

```shell
$ ntpd -d -q -n -p 0.pool.ntp.org
```