LLMs for Pi

This is the material for the LLMs for Pi workshop run for the Sutton Trust Summer School in the Computer Lab at the University of Cambridge.

Table of Contents
  1. Prerequisites
  2. Setup
  3. Part 1: The notebook
  4. Part 2: Ollama
  5. Further Reading
  6. Contributing
  7. License

(back to top)

Prerequisites

This project requires only a Raspberry Pi and the ability to follow instructions.

Setup

To get going with this project, first clone the repo:

git clone https://github.com/acceleratescience/llms-for-pi.git
cd llms-for-pi

Now run the setup script:

./setup.sh

In short, the script installs all the necessary packages, downloads the Qwen/Qwen2.5-0.5B model, and installs Ollama; see setup.sh for the details.

The only thing you need to do afterwards is activate the virtual environment that the script created:

source venv/bin/activate

(back to top)

Part 1: The notebook


The notebook intro-to-qwen.ipynb walks through how to download and run models from Hugging Face. To open it, run the following in the terminal:

jupyter lab

This will open Jupyter Lab, and you can find the notebook (among other things) in the file explorer on the left. If you're using VSCode, you can open the notebook directly, without launching Jupyter Lab. The main reason to avoid VSCode, or another full IDE, is the memory constraints of the Pi (assuming you're using the 4 GB model).
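The notebook uses the transformers library, whose tokenizer.apply_chat_template method formats chat messages for the model. As a rough illustration of the ChatML layout that Qwen2.5 expects (a hand-rolled sketch for intuition, not the notebook's actual code):

```python
def format_qwen_chat(messages):
    """Build a Qwen2.5-style ChatML prompt string by hand.

    In practice, tokenizer.apply_chat_template does this for you;
    this sketch just shows what the resulting prompt looks like.
    """
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
        for m in messages
    ]
    parts.append("<|im_start|>assistant\n")  # cue the model to answer
    return "".join(parts)

prompt = format_qwen_chat([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Why is the sky blue?"},
])
print(prompt)
```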

(back to top)

Part 2: Ollama


In this part, we will run our model from the command line. To chat with the model, run:

ollama run qwen2.5:0.5b

The model parameters were already downloaded during the setup stage. To try other models, pull them first:

ollama pull <model-name>
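Besides the interactive CLI, Ollama also serves a local HTTP API on port 11434, which you can call from Python. A minimal sketch (the helper name is ours; actually sending the request requires the Ollama server running on the Pi):

```python
import json
import urllib.request

def build_ollama_request(model, prompt, host="http://localhost:11434"):
    """Build the URL and JSON body for Ollama's /api/generate endpoint."""
    url = f"{host}/api/generate"
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return url, body

# To actually query the model (with the Ollama server running locally):
# url, body = build_ollama_request("qwen2.5:0.5b", "Why is the sky blue?")
# req = urllib.request.Request(url, data=body,
#                              headers={"Content-Type": "application/json"})
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```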

You might also be able to get away with running the 1.5B parameter model:

ollama run qwen2.5:1.5b
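Whether the 1.5B model fits depends on how much RAM is free at the time. A quick way to check on the Pi is to read /proc/meminfo (Linux-only; the helper below is ours, not part of the workshop materials):

```python
def mem_available_gib(meminfo_text):
    """Parse MemAvailable (in GiB) from /proc/meminfo-style text."""
    for line in meminfo_text.splitlines():
        if line.startswith("MemAvailable:"):
            kib = int(line.split()[1])  # value is reported in kB
            return kib / (1024 * 1024)
    raise ValueError("MemAvailable not found")

# On the Pi itself:
# with open("/proc/meminfo") as f:
#     print(f"{mem_available_gib(f.read()):.1f} GiB available")
```

As a very rough guide, a quantized 1.5B-parameter model needs on the order of 1 to 2 GiB free, so this is tight on a 4 GB Pi with other software running.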

(back to top)

Further Reading

To browse other models, head over to Hugging Face.

They also have a fantastic selection of training courses.

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

(back to top)

License

Distributed under the Apache 2.0 License. See LICENSE for more information.

(back to top)
