The repository folder structure is as follows:
training-umbrella
├── components "Created by `clone_components.sh`"
│ └── .gitignore "Git ignoring full folder"
│
├── docs
│ ├── architecture "Architecture documentation"
│ ├── development "Development documentation"
│ └── domain "Business Domain documentation"
│
├── tools
│ ├── pgadmin "PostgreSQL management tool Compose"
│ ├── portainer "Docker management tool Compose"
│ ├── python "Python scripts for shell CLI tools"
│ └── *.sh "Shell script tools"
│
├── .gitignore "Git ignored files definition"
├── .gitmodules "Git submodules definition"
├── CHANGELOG "Project's changelog file"
├── Pipfile "Python environment Requirements file"
├── Pipfile.lock "Python environment Version Lock file"
├── README.md "This File"
├── docker-compose-prod.yaml "Production Compose"
├── docker-compose-prod-rep.yaml "Production Compose with Replication"
├── docker-compose.yaml "Development Compose"
├── nginx.conf "Nginx configuration file"
├── setup.cfg "Project's config file"
└── template.env "Predefined '.env' file example"

The components folder and its subfolders are not present in the GitHub repository.
This is intended: they will be created by cloning each component's own repository during the "Cloning project repository and sub component's projects" step later.
The components folder itself is git ignored, so internal component changes will not be seen by Git nor pushed to the umbrella repository.
This way each component keeps the independence of its own repository, and this umbrella repository acts solely as a centralizing hub for the development environment, full deployments and common documentation.
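For reference, a minimal sketch of what such a self-ignoring components/.gitignore could contain (the actual file in the repository is the source of truth):

```gitignore
# Ignore everything inside components/ ...
*
# ...except this .gitignore itself, so the folder stays present and tracked
!.gitignore
```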
This section covers the required steps to get the project's development environment deployed on your local system.
This section covers the installation of several general prerequisites to deploy and run the development environment.
- Brew: (For macOS)
Brew is the macOS package manager from the Homebrew project. It is a common first installation when setting up a Mac for development and it will be used to install several other requisites and tools.
More information about the Homebrew project can be found at their website:
Install with the following command:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
- Git:
Git is the source version control system used for the project's repositories.
It's currently the most common source version control technology in use and the one generally provided by the major cloud-based source control services, in our case GitHub.
Install it with the brew package manager:
brew install git
A Git GUI client is not required but usually recommended, a selection can be found here:
Also some `VSCode` plugins, in case this is the IDE you're using, such as `GitLens`, `GitHub Actions` and `Git Graph`, may come in quite handy. `VSCode` recommended plugins will be covered later.
- Docker Desktop: (For macOS or Windows)
Docker Desktop is an "easy-to-install" Docker management GUI application.
WARNING!
A Docker Desktop paid license is required since 31-1-2022.
The main reason for installing Docker Desktop is that it's the easiest and cleanest way to install Docker Engine (AKA Docker) and Docker Compose on macOS.
As Docker Engine does not run natively on macOS, this tool hides the complexity of installing and managing an embedded virtualized Linux machine inside macOS and the required wiring to allow proper port linking, file and folder sharing, etc.
Follow the instructions from https://docs.docker.com/docker-for-mac/install/
Mind the CPU architecture of your machine (`X64` or `APPLE / ARM`).
NOTE
After installation go to the Docker Desktop application `preferences/general` and uncheck the `Use Docker Compose V2` checkbox.
This does not actually prevent the usage of Docker Compose V2, it just avoids the `docker-compose` command (think of it as Docker Compose V1) being overridden by the V2 `docker compose` command (mind the space instead of `-`).
- Docker Engine:
Docker Engine is an open source containerization technology for building and containerizing applications.
Already installed with Docker Desktop.
- Docker Compose:
Docker Compose is a tool for defining and running multi-container Docker applications.
Already installed with Docker Desktop.
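A quick way to confirm both tools are reachable from your terminal once Docker Desktop is installed (the exact version output will vary):

```bash
docker --version
docker-compose version
```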
The first step to deploy locally is to clone the project's source from the Git repository:
git clone https://github.com/Carbaz/training-umbrella.git
Then make it clone all the project's component repositories:
This step is handled with a convenient shell script under the tools/ folder:
tools/clone_components.sh
This will complete the project folder structure shown in the previous section.
For each component's specific configuration and setup steps refer to its own README.md, found in each component's root folder.
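For reference, a minimal sketch of what a clone script like this could look like; the component repository names below are illustrative, the actual tools/clone_components.sh is the source of truth:

```bash
#!/usr/bin/env bash
# Clone every component repository into the git-ignored components/ folder.
set -euo pipefail

mkdir -p components
cd components

# Hypothetical component list; the real script defines its own.
for repo in training-backend training-frontend; do
  if [ -d "$repo" ]; then
    echo "$repo already cloned, skipping"
  else
    git clone "https://github.com/Carbaz/$repo.git"
  fi
done
```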
Key Point: We do not use the actual Git Submodules feature, but its configuration file provides different GUIs and IDEs with information about the project's structure.
The Git Submodules feature allows Git and 'Git host' services such as GitHub to automate several actions when checking out and working with nested repositories.
The problem is that the way it behaves is not very intuitive, is prone to errors and seems designed for a quite specific use case which does not match our intentions here, so we just use the .gitmodules file specification to let the IDEs know this is a nested repositories project and behave adequately, without actually activating the Git Submodules feature.
In standard usage the .gitmodules file would be autogenerated while initializing the Git Submodules feature.
So, in summary, do not rely on any documentation or knowledge of this Git feature, we are not using it. Just treat each component's subfolder as a separate git project; the components folder has been ".gitignored" accordingly to avoid interference between the nested repositories and the root one.
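Purely as an illustration, a .gitmodules entry for one component could look like the sketch below; the path and URL are assumptions, check the actual file in the repository:

```ini
[submodule "components/training-backend"]
	path = components/training-backend
	url = https://github.com/Carbaz/training-backend.git
```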
All this provides a common point from which to access the different components for a full-product local development, testing or demo deployment, while keeping all repositories independent.
The dockerized deployment is the recommended way to perform testing tasks, both for external component integration and for internal component services testing.
It requires Docker and Docker Compose to be installed on your system.
This deployment will run all the required services, the component services themselves and third-party mocks, needed to test and interact with the product in a local development environment.
To ease building the required infrastructure we use Docker Compose: a docker-compose.yaml file at the root of the project describes the services and configurations to launch.
The first time, run the following command from the project's root folder to "build" a local development image of the services.
docker-compose build
Where possible the development image, defined in the Dockerfile files, does not contain the service code itself but just its dependencies; the actual service code is injected by mounting the local code folders as volumes, so code changes made locally are seen by the service running inside the container.
This way we get an "auto reload" behavior, making it unnecessary to "rebuild and restart" the Docker containers after code changes or updates.
(Exceptions may apply, see each component's Software architecture documentation)
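As a hedged illustration of this volume-based code injection, a development service entry could look roughly like the sketch below; the service name, paths and port mapping are assumptions, the project's actual docker-compose.yaml is the reference:

```yaml
version: "3"
services:
  backend:
    container_name: Training_Backend
    build: ./components/training-backend   # dev image with dependencies only
    volumes:
      # Mount the local component code inside the container so local edits
      # are seen by the running service (auto reload where supported).
      - ./components/training-backend:/app
    ports:
      - "15345:15345"
```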
Once they are built we can start them up by running the following command from the project's root folder:
docker-compose up -d
The -d option detaches from the services and returns to the command line prompt; if omitted, the command stays attached to the services showing their logs, and pressing ctrl + c stops the whole infrastructure.
As a convenience we can always use the following command:
docker-compose up -d --build
This way, before starting or restarting, the services are built if required, with a minimal boot-up time impact otherwise.
The docker-compose command line tool allows controlling the running status of the infrastructure; full documentation can be found on its own website: https://docs.docker.com/compose/reference/. Mind that we currently use V1, so don't follow directions meant for V2.
For a fast reminder, you can just run the following command:
docker-compose -h
If for some reason you want to pause the infrastructure without losing its memory state you can do so with the following commands:
docker-compose pause
docker-compose unpause
The following commands will stop and restart the services, thus losing their current memory state, but they will keep their local persistence: any files created, modified, etc.
Warning: They will not update the loaded Docker images, so even if you have already built or downloaded new images they will not be executed.
docker-compose stop
docker-compose start
docker-compose restart
Any time you want to fully stop the infrastructure to be able to run new image builds, the following command must be issued:
docker-compose down
Using down instead of stop ensures the images are "ejected", so when you run up again the latest available (downloaded/built) ones are used.
In case we want to remove the full volume persistence of the services, use:
docker-compose down -v
This will erase the full Postgres database and any service's stored data, use with care.
In summary:

| Command | Keeps Volumes | Keeps Loaded Images | Keeps Memory Status |
|---|---|---|---|
| docker-compose pause | X | X | X |
| docker-compose stop | X | X | |
| docker-compose down | X | | |
| docker-compose down -v | | | |
Apart from these commands, the recommended way to manage the infrastructure one service at a time is using a Docker GUI.
Portainer is provided for this purpose. Also pgAdmin, a web-based Postgres client, is provided. Both services are found under the /tools folder and their usage will be covered later in this documentation.
It's recommended to keep the extra services updated, be it for bug fixes, security fixes or even new functionalities. This is achieved just by making Docker download the latest available images from Docker Hub, the official Docker images repository.
Docker does not do this automatically by itself, it doesn't even check for updates, although some services, like pgAdmin or Portainer, may show you a notification when accessed through the browser to advise you about available updates.
The way we require Docker to download those images is as simple as running the following command:
docker-compose pull
This will check for new images for each service and download them if available.
NOTE:
You may get a message like:
WARNING: Some service image(s) must be built from source by running: docker compose build backend frontend
This is perfectly normal, it's just reminding you that those services are not downloaded from the hub and have to be built locally to update them.
Instead, if that WARNING comes followed by an ERROR that breaks the download, it is because you missed the "uncheck the Use Docker Compose V2" step from the Requisites section above.
Once the downloads are finished, command Docker to down the containers to ensure the old images are ejected and the new ones are loaded when you run up again.
docker-compose down
docker-compose up -d
NOTE:
Old unused Docker images, both downloaded ones and obsolete project component builds, are not automatically removed from your system and may lead to a shortage of disk space; images may range from a few hundred megabytes to gigabytes.
Cleanup of obsolete images will be covered in the Portainer section.
Service management and usage may require some commands to be executed inside the services' Docker containers.
This is because those commands require some of the project's dependencies to already be installed in the command's execution environment.
NOTE:
The scripts are meant to always be run from the project's root folder; running them from any other path, like inside the /tools folder directly, will probably lead to some required resources not being found.
Running (executing) a command inside a container is straightforward, given that you know the actual container's name inside Docker.
To ease this operation the project's container names are set in the docker-compose.yaml file, so to run a command inside a given container just run the following command, replacing the Container_Name and command_to_run placeholders with the required values.
docker exec -it Container_Name command_to_run
To gain access to a container's command line shell for an interactive session just run the following command:
docker exec -it Container_Name bash
Some services may lack an installation of the bash shell; in those cases try using sh instead.
For the following sections' commands you may find the Container_Name of the service you want to interact with in the following table:
| Service | Container_Name |
|---|---|
| Backend Init | Training_Backend_Init |
| Backend | Training_Backend |
| Frontend | Training_Frontend |
| Postgres | Training_Postgres |
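For example, to open an interactive shell inside the Backend container using its name from the table above:

```bash
docker exec -it Training_Backend bash
```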
Accessing a running container's logs is as simple as running the following command:
docker logs -f Container_Name
This will keep following the logs as they occur; you can disconnect by pressing ctrl + c.
You can even connect to an already finished container to see its logs; in this case, even with the -f option, you'll be returned to your prompt directly.
Alternatively you may want to attach to the container; the main difference is that you'll be able to send signals to the container.
For example, by pressing ctrl + c the container will get the SIGTERM signal, stopping its running service and closing, while leaving all the remaining infrastructure services running and ready to be used by a service run from the command line.
docker attach Container_Name
In addition to the standard docker-compose.yaml used to deploy the dockerized environment, an extra one, based on the production Docker images, is provided at docker-compose-prod.yaml.
This deployment will not run the current code in the components folders; instead it will load the production images, building them if --build is used, so we can test a scenario much closer to production.
As this is based on the built images it will not track actual code changes so the auto reload feature is missing here, as intended.
In this special case the .env file is used to configure the running services for flexibility, instead of injecting the environment values directly in the deployment by writing them in the compose file, as would be done on a real production deployment.
(See the Services environment variables management section below for more details).
To launch this alternative deployment just run:
docker-compose -f docker-compose-prod.yaml up -d --build
Note: The -f file_name option must come before the up command, all the remaining options must go after it.
To all effects the infrastructure will behave the same, with the mentioned exception of not reloading upon local code changes.
In addition to docker-compose-prod.yaml, an extra compose file, with replicated Backend services and an Nginx service acting as a load balancer, is provided at docker-compose-prod-rep.yaml.
This compose is based on the production images, so everything said about the production images based compose above also applies here.
To launch this alternative deployment just run:
docker-compose -f docker-compose-prod-rep.yaml up -d --build
Note: The -f file_name option must come before the up command, all the remaining options must go after it.
To all effects the infrastructure will behave the same, despite being internally concurrent and parallel, but some small differences apply:
- To shut down this deployment properly, the --remove-orphans option must be used:
docker-compose down --remove-orphans
Otherwise the Nginx service may get stuck unresponsive and prevent the network from being properly disposed of, blocking a proper redeployment later.
If the option is missed, the down command can be rerun with it later without problems (it may even be suggested by docker to do so).
- As the `Backend` service is replicated there is not a single container name to access it; instead the containers are enumerated as follows (given the project's directory is named `training-backend`):
  - training-backend_backend_1
  - training-backend_backend_2
  - ...
Docker Compose's design prevents enumerating the spawned names with a desired prefix such as `Training_Backend_1`, `Training_Backend_2`, ... and automatically uses the current folder's name.
- In the same way, and for the same reason, the standard access port `15345`, which is assigned to the `Nginx` service, will be redirected to a random backend container.
The replicas are also given random specific ports that can be discovered by running the `docker ps` command or, much easier, by using a GUI like Portainer (see the example right after this list).
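As a quick illustration, assuming the enumerated names above, the replicas and their mapped ports could be listed with something like:

```bash
docker ps --filter "name=backend" --format "table {{.Names}}\t{{.Ports}}"
```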
This section describes the usage and management of each deployed service. It spans two main subsections: one for the project's own services and the other for the extra services which mock the external services it will use in production.
- Backend Init as `Training_Backend_Init`
The `Backend Init` service is an ephemeral initialization container which runs before the `Backend` service is booted up; it stops after executing the initialization script and thus exposes no service at any port.
- Backend as `Training_Backend` at http://localhost:15345
The `Backend` service is the functional and data backend of the project. It's not expected to be accessed directly by the final user, but in a development environment it offers several features to help test the service.
It also provides a means to test functionalities avoiding the `Frontend` intermediation, to ease the tracking and detection of failures.
- Frontend as `Training_Frontend` at http://localhost:3000
The `Frontend` service is the expected entry point for a user to the web-based features of the project.
Their minimal configuration values are defined in the project's docker-compose.yaml as environment variables.
Do not modify this docker-compose.yaml configuration; for specific local configuration values use a .env file.
See the later section about environment variable management.
For any extra insight into their usage and management refer to each one's own documentation in their respective repository under the /components folder.
Several extra services, mocking actual infrastructure that would be available when deployed on the cloud, are also launched locally along with the project's infrastructure:
- Postgres as `Training_Postgres` at http://localhost:31457
PostgreSQL is an open source relational database and the one chosen for the project to store all the project's `Backend` data. The project's home can be found at https://www.postgresql.org/
It's internally exposed at http://postgres:5432 on the compose network.
NOTE: Docker Compose creates a virtual network for all the launched services so they can reference each other using their service names and internally exposed ports.
This internal address is the one other services will use to reach the database.
Its data storage relies on a Docker volume, `postgres`, so all the data stored in the database is kept and persists through restarts.
Its configuration is defined in the project's `docker-compose.yaml` as environment variables.
Services will read any variable defined in the .env file in the service's root folder, provided it has not already been found as a system environment variable.
This way the environment variables passed to the services, and thus their configuration, can be controlled there, in the .env file, avoiding the need to edit the Compose files or use any other means.
The .env file is git ignored so you may keep your personal configuration without risk of overriding those of other members of the team.
A template.env file is provided as an example; to use any of the currently provided configurations just copy it as .env and uncomment the required lines, or do the opposite, comment them, to disable them and fall back to the service defaults.
Take care not to have the same variable duplicated by uncommenting both options, where provided; if that happens the system will take the last defined one, but it may go unnoticed, so best practice is to keep the file clean and free of duplicates.
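As a purely illustrative sketch of this comment/uncomment workflow (the variable names below are made up, the real ones live in template.env):

```bash
# Copy the template first and edit the copy, never template.env itself:
#   cp template.env .env

# Uncommented: this value is passed to the services
EXAMPLE_LOG_LEVEL=debug

# Commented out: the services fall back to their defaults
#EXAMPLE_FEATURE_FLAG=true
```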
When running a service inside a container two main concerns must be taken into account regarding environment variables:
- When we define, statically or dynamically, an environment variable in the compose file, that variable is injected into the container's system, so it will prevail over any one found in the `.env` file (a sketch illustrating this is shown after this list).
Currently only the connection credentials for the local database and the services' cross-reference addresses are hardcoded in the composes, so service behaviour configuration will be taken from the `.env` files.
- Docker images, as defined in the `Dockerfile`, will not include any `.env` file; this is intended, as `.env` files are not the proper way to configure a production service.
When running the default `docker-compose.yaml` file, as exposed before, we mount the service's component folder as a volume inside the container; this has the side effect of including the `.env` file, if present, so the configuration taken will be the one present in the component's root folder: `training-umbrella\components\training-xxx\.env`
When running the `docker-compose-prod.yaml` or the `docker-compose-prod-rep.yaml` composes we will, for local convenience, mount the umbrella root folder's `.env` file inside the container, so all the services will see the same unique `.env` file and the required configuration for the different services must be declared there together: `training-umbrella\.env`
On a proper `Production` deployment there will be no `.env` file at all; all the required configuration values must be injected as proper environment variables by the container orchestrator and sourced in a safe way: passwords and sensitive values from a secrets service, all the configurations used kept versioned or in some way tracked, etc.
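To illustrate the first point, a hedged sketch of a compose-level variable winning over the .env file; the variable name is a placeholder, only the credentials and internal address come from this document:

```yaml
services:
  backend:
    environment:
      # Injected directly by the compose file: always prevails over the mounted .env file
      DATABASE_URL: "postgres://postgres_admin_user:postgres_admin_pass@postgres:5432/postgres"
      # Anything NOT listed here can still be taken from the .env file
```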
Several convenient management and development tools are provided in the tools/ folder:
Some dockerized tool services are provided for convenient management of infrastructure services.
- pgAdmin at http://localhost:5050
pgAdmin is a management tool for Postgres; it's not required for the project's infrastructure to work at all, but it is provided as a proven useful tool to test and check the Backend service's database data and status.
Project's home can be found at https://www.pgadmin.org/
Its data storage relies on a Docker volume, `data`, so all the database connections and UI configuration are kept stored and persist through restarts.
It comes unconfigured, so on first access setting a master password will be required and database connections will have to be created.
In order to connect to the project's Postgres database follow these steps:
- On the first access to http://localhost:5050 set the master password as required.
- Once logged in click the Add New Server icon; look for it on the middle "Quick Links" dashboard.
- On the General tab, the default one opened, fill in a server name that lets you recognize it later.
- On the Connection tab:
pgAdmin, being run on a different compose virtual network than the other services, Postgres DB included, will not see the internal addresses to connect to the database.
We need to point it to the host system, our own computer, which has an exposed instance of Postgres, the one we already deploy with the project's docker-compose.yaml:
  - Host name/address: `host.docker.internal` (we direct it to our own system's host)
  - Port: `31457`
  - Maintenance database: `postgres`
  - Username: `postgres_admin_user`
  - Password: `postgres_admin_pass`
- Click the Save button and it's done; you'll see the newly added server listed in the left sidebar, where you can expand sections until you get to databases/db_name/schemas/public/tables.
- Portainer (Community edition) at http://localhost:9000
A web-based Docker management tool. It helps with the management of images, volumes and containers running on the local Docker Engine.
Project's home can be found at https://github.com/portainer/portainer
Its data storage relies on a Docker volume, `data`, so the created user and UI settings are kept stored and persist through restarts.
It comes unconfigured, so on first access setting an Admin User and password will be required; once done it will automatically connect to the local running Docker Engine and provide all the (Community edition) features.
- Components clone:
This tool is used during the deployment to clone all the project's component repositories.
It can be accessed at: tools/clone_components.sh
- Services Status checker:
This tool serves as a health check tester for the project's deployed environments.
It can be accessed at: tools/status_check.sh
- Python environment upgrade:
This tool, for macOS, upgrades the required Python environment applications to their latest versions.
It can be accessed at: tools/upgrade_mac_python_env_tools.sh
- Infrastructure "relaunch":
This script takes down the currently running infrastructure, launches a new one, rebuilding if required, and starts tracking the `Backend` service logs.
Its usage may seem quite specific to `Backend` testing, but it provides an example of how to do these actions and is easily adaptable to other components.
It can be accessed at: tools/relaunch.sh
- Docker volumes cloning tool:
This tool allows cloning a volume already stored in the Docker Engine cache; this may be helpful to create a backup of a given state to recover later, or to pass state from one stack's volume to another (a generic approach is sketched after this list).
Mind that launching composes from inside the components subfolders and from the umbrella compose folder creates different stacks, and thus different volumes with different contents.
It can be accessed at: tools/docker_clone_volume.sh
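For context, a common generic way to clone a Docker volume, which the script may or may not follow exactly, is to mount both volumes in a throwaway container and copy the data across:

```bash
# Hypothetical names: replace source_volume / target_volume with the real ones.
docker volume create target_volume
docker run --rm \
  -v source_volume:/from \
  -v target_volume:/to \
  alpine sh -c "cp -a /from/. /to/"
```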
Several documentation files can be found in the /docs folder; this documentation is organized into different subfolders corresponding to the different documentation sections.
- Avoiding binary formats for documents.
All documents are written in Markdown; this format is automatically rendered when viewed on GitHub and several other formats, such as PDF, HTML, ..., can easily be generated from it.
We will store no static binary document formats in the repositories as they are not properly versioned, take too much space and are prone to become obsolete if not manually kept updated on each commit.
In case any static binary format needs to be delivered it can be generated by several tools, for example the VSCode plugin "Markdown PDF".
- Diagrams version management.
The /diagrams folders in the different sections contain the .png diagrams used in that specific section's documents and also their .drawio source files.
Despite the previous "rule", we store the rendered binary .png diagrams and pictures in order to allow the automated Markdown rendering on the GitHub website.
To work with these files use the Draw.io service.
Don't link to GitHub when prompted, just opt to work on "device".
- Architecture:
Documentation related to Project's Systems Architecture and deployment processes and strategies.
- Development:
Documentation regarding management and maintenance of the Development Environment and required tools.
- Domain:
Documentation regarding Project's Business Domain model, requisites, features, etc.
- Development tools, app-like, plugins:
- Development helpers, linters, plugins:
- Infrastructure management tools, app-like, plugins: