In this tutorial you’ll explore how to set up your board farm using Raspberry Pis as agents and Toradex boards as devices under test. In the first tutorial, multiple test devices were connected to a single runner. This time each test device gets its own agent node. This tutorial shows the upsides of this approach and addresses the challenges that come with it.
Topics covered:
- Ansible: To commission the Raspberry Pis
- Network Boot: The Raspberry Pis boot from an image over the network.
- GitHub Actions: Automate the build and test process.
- Self-hosted GitHub runners: Use Raspberry Pis to flash and test.
- GitHub CLI: Clone repositories, and check GitHub runners.
- Yocto: Build an embedded Linux image.
- tbot: A Python-based test framework to test embedded Linux
Prerequisites:
- A Debian or macOS-based host machine
- A Debian-based server
- One Raspberry Pi 4, including power supply and SD card
- One USB-to-Ethernet adapter
- A Toradex Verdin development board
- Basic shell experience
- Basic software development experience
The Toradex Verdin board is used as a reference embedded Linux board, standing in for the device under test; it will be further referred to as the test device.
The Raspberry Pi is used as an agent to flash and test the test device; it will be referred to as the agent node. This tutorial uses only one agent node and one test device, but the rather complex technologies and approaches are chosen to make the setup scalable to dozens or even hundreds of test devices.
The server will be used to build the firmware and also to control the agent nodes. It will be referred to as the control node.
Requirements:
- Root access to the control node
- The control node must be in the same network as all agent nodes.
- The test device USB ports X34 and X66 are connected to the agent node.
- The test device Ethernet interface X35 is connected to the agent node.
Fork the board_farm_tutorial_3 repository and clone it to your host machine:
```bash
gh repo fork bitcrushtesting/board_farm_tutorial_3
gh repo clone board_farm_tutorial_3
cd board_farm_tutorial_3
```

How to install GitHub CLI: GitHub
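If gh is not installed yet, recent Debian releases package it directly (a minimal sketch; see the GitHub CLI documentation above for other systems):

```bash
sudo apt update && sudo apt install -y gh
gh auth login   # authenticate with your GitHub account before forking
```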
Execute the start_tutorial.sh shell script. It purges the files you will create yourself and runs some checks on your operating system.
The purpose of the control node is to manage the agent nodes. In this example it also hosts a runner that is powerful enough to build the firmware for the test device as well as the custom images for the agent nodes.
Packages required:
- uuu (NXP i.MX image deploy tool)
- git
- docker
- repo
- avahi-tools
Tip
A shell script is provided that installs the required packages. File: control-node/install-dependencies.sh
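For reference, a minimal sketch of what such a script might contain, assuming Debian package names (the avahi CLI tools are packaged as avahi-utils on Debian; uuu may have to be built from NXP's mfgtools sources if your release does not package it):

```bash
#!/bin/bash
set -e
sudo apt-get update
# git, docker, the repo tool, and the avahi CLI tools from the list above
sudo apt-get install -y git docker.io repo avahi-utils uuu
```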
4.2 Set Up a Self-Hosted Runner
You need to create a container with the appropriate environment variables to register your runner with GitHub.
Get the GitHub Runner Token:
- Navigate to your GitHub repository.
- Go to Settings > Actions > Runners.
- Click on Add New Runner.
- Select Linux as the operating system and x64 for architecture.
- Copy the TOKEN from the provided command after clicking on "Register a runner".
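Alternatively, since the GitHub CLI is already set up, you can fetch a registration token from the REST API (requires admin access to the repository):

```bash
# Returns a short-lived token for registering a new self-hosted runner
gh api --method POST \
  repos/your-username/your-repo/actions/runners/registration-token \
  --jq .token
```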
Run the Docker Container:
Create and start the container using the following command, replacing the placeholders with your actual values:
```bash
docker run -d --name github-runner \
  -e REPO_URL="https://github.com/your-username/your-repo" \
  -e RUNNER_NAME="your-runner-name" \
  -e RUNNER_TOKEN="your-token" \
  -e RUNNER_WORKDIR="/tmp/github-runner" \
  -e RUNNER_GROUP="your-group" \
  -e LABELS="self-hosted,Linux,X64" \
  -v /var/run/docker.sock:/var/run/docker.sock \
  myoung34/github-runner:latest
```
Replace the following in the command:
- your-username/your-repo with your GitHub repository's path.
- your-runner-name with a desired name for the runner.
- your-token with the token you got from GitHub.
This command starts the container with your self-hosted runner, which will automatically register itself with your GitHub repository.
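You can watch the registration in the container logs:

```bash
docker logs -f github-runner   # follow the runner registration output
```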
Verify the Runner is Registered
Go back to your GitHub repository's Settings > Actions > Runners section. You should see your runner listed there as online and ready to use.
A custom Raspberry Pi OS allows you to tailor the operating system to meet specific project requirements. It offers the ability to pre-configure settings, software, and security measures, providing faster deployment.
A custom OS can also enhance the development experience by incorporating unique interfaces, drivers, or applications tailored to your needs.
One of the customizations is that we are going to install avahi-daemon. This enables us to discover the available agent nodes. A more sophisticated version would install a client that registers with a board-farm control server; in this example, zeroconf is used to create the Ansible inventory.
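For example, the agent nodes can then be discovered from the control node (a sketch; avahi-daemon announces a _workstation._tcp service by default):

```bash
# Discover agent nodes on the local network via mDNS/zeroconf
avahi-browse --terminate --resolve _workstation._tcp

# Resolve a single agent node's .local hostname to its IP address
avahi-resolve --name agent-node.local
```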
Tip
To skip this step you can get a pre-built image here: node-agent.img
Note
The following commands are to be executed on the control node.
3.1 Clone the pi-gen Repository
The pi-gen tool is an official Raspberry Pi project for creating custom Raspberry Pi OS (formerly Raspbian) images. Clone the repository to your local machine:
```bash
cd agent-node
git clone https://github.com/RPi-Distro/pi-gen.git
cd pi-gen
```

3.2 Configure the Build
Create a file named config and add the following lines:

```bash
IMG_NAME=agent-node
FIRST_USER_NAME="agent"
FIRST_USER_PASS="agent"
ENABLE_SSH=1
LOCALE_DEFAULT='de_DE.UTF-8'
KEYBOARD_KEYMAP='de'
KEYBOARD_LAYOUT='German'
TIMEZONE_DEFAULT='Europe/Berlin'
DEPLOY_COMPRESSION=none
STAGE_LIST="stage0 stage1 stage2 custom-stage"
```
Customize the locale, keyboard layout, and timezone to your needs, or remove these lines to use the Raspberry Pi OS defaults.
File: config
Learn more on building Raspberry Pi images: pi-gen
3.3 Build the Custom Raspberry Pi OS Image
A build script helps with building the image. It adds a custom build stage, removes stages that are not required, and uses the prepared config file.
Build the image with the following command:
```bash
tmux new -s agent-node-os
./build-image.sh
exit
```

The tmux command lets you detach from and reattach to the terminal session. With tmux the build continues even when the session is interrupted, e.g. by a connection loss.
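For reference, the tmux basics used here:

```bash
# Detach from the running session: press Ctrl-b, then d
tmux ls                        # list running sessions
tmux attach -t agent-node-os   # reattach to check on the build
```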
File: build-pi-image.sh
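Conceptually, the build script does something like the following (a hypothetical sketch; the actual file is in the repository):

```bash
#!/bin/bash
set -e
# Make the custom stage and the prepared config available to pi-gen
cp -r custom-stage pi-gen/
cp config pi-gen/config
cd pi-gen
# pi-gen reads ./config and builds the stages listed in STAGE_LIST
sudo ./build.sh
```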
Next, define a GitHub Actions workflow that builds the firmware. The steps for this job:
- Get the source code
- Run the build script
- Upload the image
Define the workflow name:
```yaml
---
name: Yocto CI
```
Run the workflow only on pushes to the main branch:

```yaml
on:
  push:
    branches:
      - main
```
Define the job name build and the GitHub runner:
```yaml
jobs:
  build:
    runs-on: control-node
```
Define the steps that are executed:
```yaml
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Run the build script
        run: |
          echo "This is running on the control node"
          ./firmware/build-image.sh

      - name: Upload image
        uses: actions/upload-artifact@v4
        with:
          name: firmware-name
          path: ./deploy/image/
          if-no-files-found: error
```
Note
Because the workflow uses a self-hosted runner, act cannot be used to test it locally beforehand.
Build the Linux image
This step is executed on the control node. It builds the image using the Yocto Project.
More: Toradex
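As a rough sketch of such a build (manifest URL, branch, machine, and image names follow the Toradex quickstart and are assumptions; check the Toradex documentation for current values):

```bash
mkdir -p ~/yocto-verdin && cd ~/yocto-verdin
repo init -u https://git.toradex.com/toradex-manifest.git \
  -b kirkstone-6.x.y -m tdxref/default.xml
repo sync
source export   # sets up the BitBake build environment
MACHINE=verdin-imx8mm bitbake tdx-reference-minimal-image
```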
Flash the bootloader
File: config.txt
Deploy Keys
Log in to the agent node:

```bash
ssh agent@agent-node.local
```

Get the image:
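The image built by the CI workflow can be fetched as an artifact, for example with the GitHub CLI (a sketch, using the artifact name from the build job):

```bash
gh run download --name firmware-name --dir ./image
```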
Flash the image:

```bash
uuu
```

Serial console:

```bash
minicom verdin
```

Note
The agent-node image comes with minicom and its configuration file /etc/minicom/minirc.verdin preinstalled.
The steps for this job:
- Get the image
- Flash the test device
- Run the tests
Define the additional job in ci.yml:
```yaml
  test:
    runs-on: agent-node
```
Add the steps. The first one can use actions/download-artifact to fetch the image uploaded by the build job; the exact flash and test invocations depend on your setup (sketched here with uuu and tbot):

```yaml
    steps:
      - name: Get the image
        uses: actions/download-artifact@v4
        with:
          name: firmware-name

      # Exact invocation depends on your image and board; see the uuu step above
      - name: Flash the test device
        run: uuu

      # See tbot.tools for configuring lab, board, and testcases
      - name: Run the tests
        run: tbot
```
For testing embedded Linux, a great Python-based framework named tbot is available.
Learn more about tbot: tbot.tools
File: ci.yml
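After pushing to main, you can follow the workflow from a terminal with the GitHub CLI:

```bash
git push origin main    # triggers the Yocto CI workflow
gh run list --limit 3   # show recent workflow runs
gh run watch            # follow an in-progress run live
```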
For now, there are no more tutorials available.
Happy farming!
We would love to hear back from you. Help us improve our tutorials by filling out the evaluation form:
If you find this tutorial helpful, please consider starring ⭐️ or sharing it! Star this repository
Email: contact@bitcrushtesting.com
LinkedIn: Bitcrush Testing