Boernsman edited this page Aug 7, 2024 · 1 revision

Board Farm Tutorial 3 - Embedded Linux

Work in Progress ...


In this tutorial you’ll explore how to set up your board farm using Raspberry Pis as agents and a Toradex board as the device under test. In the first tutorial, multiple test devices were connected to a single runner. This time, each test device gets its own agent node. This tutorial shows the upsides of this approach and addresses the challenges that come with it.

Topics covered:

  • Ansible: To commission the Raspberry Pis
  • Network Boot: The Raspberry Pis boot from a network-served image.
  • GitHub Actions: Automate the build and test process.
  • Self-hosted GitHub runners: Use Raspberry Pis to flash and test.
  • GitHub CLI: Clone repositories, and check GitHub runners.
  • Yocto: Build an embedded Linux image.
  • tbot: A Python-based test framework for testing embedded Linux.

Prerequisites

  • A Debian or macOS-based host machine
  • A Debian-based server
  • One Raspberry Pi 4, incl. power supply and SD card
  • One USB-Ethernet adapter
  • A Toradex Verdin development board
  • Basic shell experience
  • Basic software development experience

Embedded Linux Farming


STEP 1: Gather your Hardware

The Toradex Verdin board is used as the reference Embedded Linux board, standing in for the device under test, and will be further referred to as the test device.

The Raspberry Pi is used as an agent to flash and test the test device; it will be referred to as the agent node. This tutorial uses only one agent node and one test device, but the rather complex technologies and approaches are chosen to make the setup scalable to dozens or even hundreds of test devices.

The server will be used to build the firmware and also to control the agent nodes. It will be referred to as the control node.

Requirements:

  • Root access to the control node
  • The control node must be on the same network as all agent nodes.
  • The test device's USB ports X34 and X66 are connected to the agent node.
  • The test device's Ethernet interface X35 is connected to the agent node.

STEP 2: Create a Repository

Fork the board_farm_tutorial_3 repository and clone it to your host machine:

gh repo fork bitcrushtesting/board_farm_tutorial_3
gh repo clone board_farm_tutorial_3
cd board_farm_tutorial_3

How to install the GitHub CLI: GitHub

Execute the start_tutorial.sh shell script. It purges the files you will create yourself and performs some checks on your operating system.
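As a rough sketch, the script might look like the following; the real script is in the tutorial repository, and the purged file names here are assumptions for illustration:

```shell
#!/bin/sh
# Sketch of start_tutorial.sh: check the operating system and purge the
# files you will create yourself during the tutorial.

os_name=$(uname -s)
case "$os_name" in
    Linux|Darwin) echo "Supported operating system: $os_name" ;;
    *)            echo "Unsupported operating system: $os_name" ;;
esac

# Remove files created in previous runs of the tutorial (hypothetical names)
rm -f agent-node/pi-gen/config .github/workflows/ci.yml
```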


STEP 3: Prepare the Control Node

The purpose of the control node is to manage the agent nodes. In this example it also hosts a runner that is powerful enough to build the firmware for the test device as well as the custom images for the agent nodes.

Packages required:

  • uuu: NXP i.MX image deploy tool
  • git
  • docker
  • repo
  • avahi-tools

Tip

A shell script is provided that installs the required packages. File: control-node/install-dependencies.sh
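The provided script probably resembles the following sketch. The Debian package names are assumptions: "avahi-tools" corresponds to Debian's avahi-utils package, and uuu is packaged on recent Debian releases (otherwise build it from the NXP mfgtools sources). By default the commands are only printed for review; set INSTALL=1 to actually run them.

```shell
#!/bin/sh
# Sketch of control-node/install-dependencies.sh (package names assumed).

run() {
    # Print the command unless INSTALL=1 is set in the environment
    if [ "${INSTALL:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi
}

run sudo apt-get update
run sudo apt-get install -y git docker.io repo avahi-utils uuu
```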

3.1 Set Up a Self-Hosted Runner

You need to create a container with the appropriate environment variables to register your runner with GitHub.

Get the GitHub Runner Token:

  1. Navigate to your GitHub repository.
  2. Go to Settings > Actions > Runners.
  3. Click on Add New Runner.
  4. Select Linux as the operating system and x64 for architecture.
  5. Copy the TOKEN from the provided command after clicking on "Register a runner".

Run the Docker Container:

Create and start the container using the following command, replacing the placeholders with your actual values:

docker run -d --name github-runner \
  -e REPO_URL="https://github.com/your-username/your-repo" \
  -e RUNNER_NAME="your-runner-name" \
  -e RUNNER_TOKEN="your-token" \
  -e RUNNER_WORKDIR="/tmp/github-runner" \
  -e RUNNER_GROUP="your-group" \
  -e LABELS="self-hosted,Linux,X64" \
  -v /var/run/docker.sock:/var/run/docker.sock \
  myoung34/github-runner:latest

Replace the following in the command:

  • your-username/your-repo with your GitHub repository's path.
  • your-runner-name with a desired name for the runner.
  • your-token with the token you got from GitHub.
  • your-group with your runner group, or remove the line to use the default group.

This command starts the container with your self-hosted runner, which will automatically register itself with your GitHub repository.

Verify the Runner is Registered

Go back to your GitHub repository's Settings > Actions > Runners section. You should see your runner listed there as online and ready to use.
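You can also check from the command line with the GitHub CLI; the repository path below is a placeholder, and the command requires an authenticated gh:

```shell
#!/bin/sh
# Query the runners registered on the repository via the GitHub API.

if command -v gh >/dev/null 2>&1; then
    runners=$(gh api repos/your-username/your-repo/actions/runners \
                  --jq '.runners[].name' 2>/dev/null || true)
    echo "Registered runners: $runners"
else
    runners=""
    echo "GitHub CLI not installed, check the runners in the web UI instead"
fi
```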


STEP 4: Agent Node OS

A custom Raspberry Pi OS allows you to tailor the operating system to meet specific project requirements. It offers the ability to pre-configure settings, software, and security measures, providing faster deployment.

A custom OS can also enhance development experience by incorporating unique interfaces, drivers, or applications tailored to your needs.

One of the customizations is installing avahi-daemon, which lets us discover the available agent nodes. A more sophisticated setup would install a client that registers with a board-farm control server; in this example, zeroconf is used to create the Ansible inventory.
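The discovery step could be sketched like this, assuming avahi-utils is installed on the control node and the agent-node hostnames start with "agent" (set by the custom image):

```shell
#!/bin/sh
# Sketch: discover agent nodes via mDNS and write an Ansible inventory.

discover_agents() {
    # -r resolve, -t terminate after the dump, -p parseable (';'-separated);
    # field 7 of the '=' (resolved) records is the mDNS hostname
    avahi-browse -rtp _ssh._tcp 2>/dev/null |
        awk -F';' '$1 == "=" && $7 ~ /^agent/ { print $7 }' |
        sort -u
}

{
    echo "[agent_nodes]"
    discover_agents
} > inventory.ini
```

The resulting inventory.ini can be passed to Ansible with `-i inventory.ini`.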

Tip

To skip this step, you can download a pre-built image here: node-agent.img

Note

The following commands are to be executed on the control-node.

4.1 Clone the pi-gen Repository

The pi-gen tool is an official Raspberry Pi project for creating custom Raspberry Pi OS images. Clone the repository:

cd agent-node
git clone https://github.com/RPi-Distro/pi-gen.git
cd pi-gen

4.2 Configure the Build

Create a file named config and add the following lines:

IMG_NAME=agent-node
FIRST_USER_NAME="agent"
FIRST_USER_PASS="agent"
ENABLE_SSH=1
LOCALE_DEFAULT='de_DE.UTF-8'
KEYBOARD_KEYMAP='de'
KEYBOARD_LAYOUT='German'
TIMEZONE_DEFAULT='Europe/Berlin'
DEPLOY_COMPRESSION=none
STAGE_LIST="stage0 stage1 stage2 custom-stage"

Customize the locale, keyboard layout, and timezone to your needs, or remove these lines to use the Raspberry Pi OS defaults.

File: config

Learn more on building Raspberry Pi images: pi-gen

4.3 Build the Custom Image

A build script helps with building the image. It adds a custom build stage, removes the stages that are not required, and uses the prepared config file.

Build the image with the following command:

tmux new -s agent-node-os
./build-image.sh
exit

The tmux command lets you detach from and reattach to the terminal session. With tmux, the build continues even when the session is aborted, e.g. by a connection loss.

File: build-pi-image.sh
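A hypothetical sketch of what the build script does (the real file is in the repository; the custom-stage contents are an assumption). It relies on the prepared config file, whose STAGE_LIST already drops the desktop stages and appends the custom stage:

```shell
#!/bin/sh
# Sketch of the image build script for the agent-node OS.
set -u

if [ -d pi-gen ]; then
    cp config pi-gen/config       # the config file prepared earlier
    cp -r custom-stage pi-gen/    # custom stage: avahi-daemon etc. (assumption)
    (cd pi-gen && ./build-docker.sh)  # containerized build keeps the host clean
    status="build started"
else
    status="pi-gen missing"
    echo "pi-gen not found, clone it first"
fi
```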


STEP 5: Workflow Build Job

The steps for this job:

  1. Get the source code
  2. Run the build script
  3. Upload the image

Define the workflow name:

---
name: Yocto CI

Execute the script only on pushes to the main branch:

on:
  push:
    branches:
      - main

Define the job name build and the GitHub runner:

jobs:
  build:
    runs-on: control-node

Define the steps that are executed:

    steps:
    - name: Checkout code
      uses: actions/checkout@v4

    - name: Run the build script
      run: |
        echo "This is running on the control node"
        ./firmware/build-image.sh

    - name: Upload image
      uses: actions/upload-artifact@v4
      with:
        name: firmware-name
        path: ./deploy/image/
        if-no-files-found: error

Note

Because the workflow uses a self-hosted runner, it cannot be pre-tested with 'act'.

Build the Linux image

This step is executed on the control node. It builds the image using the Yocto Project.

More: Toradex
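A sketch of the Toradex reference BSP flow is shown below. The branch, manifest, machine, and image names are assumptions; pick the ones matching your Verdin variant and BSP release from the Toradex documentation.

```shell
#!/bin/sh
# Sketch: build a Yocto reference image for the Verdin board.

repo_available=$(command -v repo || true)
if [ -n "$repo_available" ]; then
    mkdir -p "$HOME/yocto-verdin"
    cd "$HOME/yocto-verdin"
    # Fetch the Toradex reference BSP layers (branch/manifest are assumptions)
    repo init -u https://git.toradex.com/toradex-manifest.git \
              -b kirkstone-6.x.y -m tdxref/default.xml
    repo sync
    # Set up the BitBake environment and build the reference minimal image
    . ./export
    MACHINE=verdin-imx8mp bitbake tdx-reference-minimal-image
else
    echo "the repo tool is not installed, see the control node setup step"
fi
```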


STEP 6: Prepare the Agent Nodes

Flash the bootloader

File: config.txt

Deploy Keys

STEP 7: Flash the Test Device

Login to the agent-node:

ssh agent@agent-node.local

Get the image:

Flash the image:

uuu
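A typical invocation might look like the sketch below. The file names are placeholders, and Toradex publishes ready-made uuu scripts for recovery-mode flashing of Verdin modules, which may be preferable to the generic built-in script used here:

```shell
#!/bin/sh
# Sketch: flash the test device over USB with uuu (file names are placeholders).

if command -v uuu >/dev/null 2>&1; then
    # -b emmc_all: built-in uuu script that writes the bootloader and image
    sudo uuu -b emmc_all imx-boot tdx-reference-minimal-image.wic
    flashed="yes"
else
    flashed="no"
    echo "uuu is not installed on this agent node"
fi
```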

Serial console:

minicom verdin

Note

The agent-node image comes with minicom and its configuration file /etc/minicom/minirc.verdin preinstalled.
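The preinstalled configuration might look like the following sketch; the serial device and baud rate are assumptions and depend on how the Verdin's USB-serial adapter enumerates on the agent node:

```
pu port        /dev/ttyUSB0
pu baudrate    115200
pu bits        8
pu parity      N
pu stopbits    1
```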


STEP 8: Workflow Test Job

The steps for this job:

  1. Get the image
  2. Flash the test device
  3. Run the tests

Define the additional job in ci.yml:

  test:
    runs-on: agent-node

Add the steps:

    steps:
    - name: Get the image
      uses: 
      
    - name: Flash the test device
      uses:
      
    - name:  
      uses:
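As a sketch, the placeholders could be completed like this, assuming the artifact name from the build job and hypothetical flash/test helper scripts on the agent node:

```yaml
  test:
    runs-on: agent-node
    needs: build
    steps:
    - name: Get the image
      uses: actions/download-artifact@v4
      with:
        name: firmware-name
        path: ./deploy/image/

    - name: Flash the test device
      run: ./agent-node/flash-test-device.sh   # hypothetical helper script

    - name: Run the tests
      run: ./tests/run-tbot-tests.sh           # hypothetical tbot wrapper
```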

STEP 9: Develop Automated Test Scripts

For testing embedded Linux, a great framework named tbot is available.


Learn more about tbot: tbot.tools


STEP 10: Extend the Workflow

File: ci.yml


STEP 11: Monitor and Maintain Your Board Farm


🤷‍♂️ What's next?

For now, there are no more tutorials available.

Happy farming!

♥️ Did you like the tutorial?

We would love to hear back from you. Help us improve our tutorials by filling out the evaluation form:

Tutorial Evaluation Form

If you find this tutorial helpful, please consider starring ⭐️ or sharing it! Star this repository


📫 Contact Us

www.bitcrushtesting.com

Email: contact@bitcrushtesting.com

LinkedIn: Bitcrush Testing