diff --git a/Lab 1/README.md b/Lab 1/README.md index cbc6dfa745..75bac2cf24 100644 --- a/Lab 1/README.md +++ b/Lab 1/README.md @@ -2,7 +2,7 @@ # Staging Interaction -\*\***NAME OF COLLABORATOR HERE**\*\* + In the original stage production of Peter Pan, Tinker Bell was represented by a darting light created by a small handheld mirror off-stage, reflecting a little circle of light from a powerful lamp. Tinkerbell communicates her presence through this light to the other characters. See more info [here](https://en.wikipedia.org/wiki/Tinker_Bell). @@ -64,33 +64,60 @@ To stage an interaction with your interactive device, think about: _Setting:_ Where is this interaction happening? (e.g., a jungle, the kitchen) When is it happening? +In a workplace. + _Players:_ Who is involved in the interaction? Who else is there? If you reflect on the design of current day interactive devices like the Amazon Alexa, it’s clear they didn’t take into account people who had roommates, or the presence of children. Think through all the people who are in the setting. +A poor student with too much work to do. + _Activity:_ What is happening between the actors? +The device reminds the student to drink certain amount of water. + _Goals:_ What are the goals of each player? (e.g., jumping to a tree, opening the fridge). +To drink enough water and prepare enough water for the next round + The interactive device can be anything *except* a computer, a tablet computer or a smart phone, but the main way it interacts needs to be using light. \*\***Describe your setting, players, activity and goals here.**\*\* +When a poor student is too addicted to school work, he or she may forget to take in enough water. The device will measure the remaining amount of water and order the student to drink periodically and refill the bottle for future drinking. + Storyboards are a tool for visually exploring a users interaction with a device. They are a fast and cheap method to understand user flow, and iterate on a design before attempting to build on it. Take some time to read through this explanation of [storyboarding in UX design](https://www.smashingmagazine.com/2017/10/storyboarding-ux-design/). Sketch seven storyboards of the interactions you are planning. **It does not need to be perfect**, but must get across the behavior of the interactive device and the other characters in the scene. \*\***Include pictures of your storyboards here**\*\* +![4a2a7bd6bb5ce869a0873717aeff236f](https://github.com/user-attachments/assets/863c91d3-744b-441c-9d8b-4128a125dba2) + + + + Present your ideas to the other people in your breakout room (or in small groups). You can just get feedback from one another or you can work together on the other parts of the lab. \*\***Summarize feedback you got here.**\*\* +Jade: Sounds healthy +Karl: Cool idea ## Part B. Act out the Interaction + +https://github.com/user-attachments/assets/6046a4f5-557a-473f-9951-0a82c72dda10 + + Try physically acting out the interaction you planned. For now, you can just pretend the device is doing the things you’ve scripted for it. + + \*\***Are there things that seemed better on paper than acted out?**\*\* +It is more eye-catching than I thought. + \*\***Are there new ideas that occur to you or your collaborator that come up from the acting?**\*\* +Maybe the color should change by stages not gradually to emphasize on periods. + ## Part C. 
Prototype the device @@ -104,12 +131,20 @@ If you run into technical issues with this tool, you can also use a light switch \*\***Give us feedback on Tinkerbelle.**\*\* +Really cool! I have never tried to control one computing device with another! + ## Part D. Wizard the device Take a little time to set up the wizarding set-up that allows for someone to remotely control the device while someone acts with it. Hint: You can use Zoom to record videos, and you can pin someone’s video feed if that is the scene which you want to record. \*\***Include your first attempts at recording the set-up video here.**\*\* + + +https://github.com/user-attachments/assets/295cf934-57ba-4f47-859c-d50ff5eb5c85 + + + Now, change the goal within the same setting, and update the interaction with the paper prototype. \*\***Show the follow-up work here.**\*\* @@ -123,16 +158,30 @@ Think about the setting of the device: is the environment a place where the devi \*\***Include sketches of what your devices might look like here.**\*\* +![1676fa1c4f0087b5c68cc6720bea3847](https://github.com/user-attachments/assets/85ab013b-9665-404a-a125-bd2a03416f80) +![a02b364798ecfefda73321d87ee4d666](https://github.com/user-attachments/assets/5ef2f0b5-946f-477a-839d-1292a179f2df) + + + \*\***What concerns or opportunitities are influencing the way you've designed the device to look?**\*\* +The device can be easily exposed to water. I have two plans to issue this. First, use a light sensor to sense the water volume from the outside of a transparent bottle. Second, craft a bottle with a built-in force sensor to sense the gravity of the water, thus estimating the volume. + ## Part F. Record \*\***Take a video of your prototyped interaction.**\*\* + + +https://github.com/user-attachments/assets/65811893-f958-42ce-81ed-7dbb2a9288cf + + + + \*\***Please indicate who you collaborated with on this Lab.**\*\* Be generous in acknowledging their contributions! And also recognizing any other influences (e.g. from YouTube, Github, Twitter) that informed your design. - +No one. I may find a helper later. # Staging Interaction, Part 2 @@ -145,6 +194,9 @@ This describes the second week's work for this lab activity. You will be assigned three partners from other groups. Go to their github pages, view their videos, and provide them with reactions, suggestions & feedback: explain to them what you saw happening in their video. Guess the scene and the goals of the character. Ask them about anything that wasn’t clear. \*\***Summarize feedback from your partners here.**\*\* +Karl: what if I take alcohol +Jade: you really love drinking water + ## Make it your own @@ -154,3 +206,11 @@ Do last week’s assignment again, but this time: 3) We will be grading with an emphasis on creativity. \*\***Document everything here. (Particularly, we would like to see the storyboard and video, although photos of the prototype are also great.)**\*\* +![55908a4ca247093ce42e62f5f445d605](https://github.com/user-attachments/assets/2303d6c5-1523-4fb0-a5a3-c4e3f9bec1b4) + + +https://github.com/user-attachments/assets/326198fe-f96e-4ca9-b4b9-cc287deb8fd8 + + + + diff --git a/Lab 2/README.md b/Lab 2/README.md index fdf299cbbf..550c52d981 100644 --- a/Lab 2/README.md +++ b/Lab 2/README.md @@ -1,4 +1,4 @@ -# Interactive Prototyping: The Clock of Pi +[screen_clock.py](https://github.com/user-attachments/files/22322685/screen_clock.py)# Interactive Prototyping: The Clock of Pi **NAMES OF COLLABORATORS HERE** Does it feel like time is moving strangely during this semester? 
@@ -199,6 +199,8 @@ Pro Tip: Using tools like [code-server](https://coder.com/docs/code-server/lates 2. Look at and give feedback on the Part G. for at least 2 other people in the class (and get 2 people to comment on your Part G!) + Karl: I used to do this before important appointments. + Jade: I will definitely buy this clock. # Lab 2 Part 2 ## Assignment that was formerly Lab 2 Part E. @@ -210,17 +212,180 @@ Can you make time interactive? You can look in `screen_test.py` for examples for Please sketch/diagram your clock idea. (Try using a [Verplank diagram](https://ccrma.stanford.edu/courses/250a-fall-2004/IDSketchbok.pdf))! +![44234b402c39a7e125e42abfa205d0d8](https://github.com/user-attachments/assets/7293c20d-6e9d-4b08-b5ca-399acbbf7eee) + +I don't know why but the verplank link leads to a gambling website, so I used some traditional sketching skills. + **We strongly discourage and will reject the results of literal digital or analog clock display.** \*\*\***A copy of your code should be in your Lab 2 Github repo.**\*\*\* +↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓CODE↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓ +``` +import time +import subprocess +import digitalio +import board +from PIL import Image, ImageDraw, ImageFont +import adafruit_rgb_display.st7789 as st7789 + +# Configuration for CS and DC pins (these are FeatherWing defaults on M0/M4): +cs_pin = digitalio.DigitalInOut(board.D5) +dc_pin = digitalio.DigitalInOut(board.D25) +reset_pin = None + +# Config for display baudrate (default max is 24mhz): +BAUDRATE = 64000000 + +# Setup SPI bus using hardware SPI: +spi = board.SPI() + +# Create the ST7789 display: +disp = st7789.ST7789( + spi, + cs=cs_pin, + dc=dc_pin, + rst=reset_pin, + baudrate=BAUDRATE, + width=135, + height=240, + x_offset=53, + y_offset=40, +) + +buttonA = digitalio.DigitalInOut(board.D23) #GPI023 (PIN 16) +buttonB = digitalio.DigitalInOut(board.D24) #GPI024 (PIN 18) +# Use internal pull-ups; buttons then read LOW when pressed. + + +buttonA.switch_to_input(pull=digitalio.Pull.UP) +buttonB.switch_to_input(pull=digitalio.Pull.UP) + +diff = 0 #calculate the difference caused by pressing buttons + +# Create blank image for drawing. +# Make sure to create image with mode 'RGB' for full color. +height = disp.width # we swap height/width to rotate it to landscape! +width = disp.height +image = Image.new("RGB", (width, height)) +rotation = 90 + +# Get drawing object to draw on image. +draw = ImageDraw.Draw(image) + +# Draw a black filled box to clear the image. +draw.rectangle((0, 0, width, height), outline=0, fill=(0, 0, 0)) +disp.image(image, rotation) +# Draw some shapes. +# First define some constants to allow easy resizing of shapes. +padding = -2 +top = padding +bottom = height - padding +# Move left to right keeping track of the current x position for drawing shapes. +x = 0 + +# Alternatively load a TTF font. Make sure the .ttf font file is in the +# same directory as the python script! +# Some other nice fonts to try: http://www.dafont.com/bitmap.php +time_font = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf",32) +date_font = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", 18) + +# Turn on the backlight +backlight = digitalio.DigitalInOut(board.D22) +backlight.switch_to_output() +backlight.value = True + +while True: + # Draw a black filled box to clear the image. + draw.rectangle((0, 0, width, height), outline=0, fill=0) + + #TODO: Lab 2 part D work should be filled in here. 
You should be able to look in cli_clock.py and stats.py + a_pressed = (buttonA.value == False) + b_pressed = (buttonB.value == False) + + if a_pressed and b_pressed: + diff = 0 + elif a_pressed: + diff += 5 + elif b_pressed: + diff -= 5 + + t = time.localtime() + hour, min, sec = t.tm_hour, t.tm_min, t.tm_sec + min = min + diff + hour_adj = min // 60 + hour = hour + hour_adj + min = min % 60 + current_time = f"{hour:02d}:{min:02d}:{sec:02d}" + current_date = time.strftime("%m-%d-%Y") + alabel = "B:-5min" + blabel = "A:+5min" + + if diff > 0: + hint = f"Hurry! +{diff}" + elif diff < 0: + hint = f"Easy! -{-diff}" + else: + hint = "0" + + time_bbox = draw.textbbox((0,0), current_time, font=time_font) + time_width = time_bbox[2] - time_bbox[0] + time_height = time_bbox[3] - time_bbox[0] + + date_bbox = draw.textbbox((0,0), current_date, font=date_font) + date_width = date_bbox[2] - date_bbox[0] + date_height = date_bbox[3] - date_bbox[0] + + alabel_bbox = draw.textbbox((0,0), alabel, font=date_font) + alabel_width = alabel_bbox[2] - alabel_bbox[0] + alabel_height = alabel_bbox[3] - alabel_bbox[0] + + blabel_bbox = draw.textbbox((0,0), blabel, font=date_font) + blabel_width = blabel_bbox[2] - blabel_bbox[0] + blabel_height = blabel_bbox[3] - blabel_bbox[0] + + hint_bbox = draw.textbbox((0,0), hint, font=date_font) + hint_width = hint_bbox[2] - hint_bbox[0] + hint_height = hint_bbox[3] - hint_bbox[0] + + draw.text((width//2 - time_width//2, height//2 - time_height//2 - 20), current_time, font=time_font, fill="#FFFFFF") + draw.text((width//2 - date_width//2, height//2 + 10), current_date, font=date_font, fill="#FFFFFF") + draw.text((0, height - alabel_height), alabel, font=date_font, fill="#FFFFFF") + draw.text((0, 0), blabel, font=date_font, fill="#FFFFFF") + draw.text((width - hint_width, height - hint_height//2 - 10), hint, font=date_font, fill="#FFFFFF") + + + + # Display image. + disp.image(image, rotation) + time.sleep(0.1) +``` + + +↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑CODE↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑ + + + + ## Assignment that was formerly Part F. ## Make a short video of your modified barebones PiClock \*\*\***Take a video of your PiClock.**\*\*\* +↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓VIDEO↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓ + + +https://github.com/user-attachments/assets/cef7d98a-c059-4e17-a64e-a1015bab4c73 + + + +[If the video cannot broadcast normally, please use the .mp4 file from this link](https://github.com/Junxiong-Chen/Interactive-Lab-Hub/blob/Fall2025/Lab%202/pull_updates/1.1.mp4) + + +↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑VIDEO↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑ + After you edit and work on the scripts for Lab 2, the files should be upload back to your own GitHub repo! You can push to your personal github repo by adding the files here, commiting and pushing. ``` diff --git a/Lab 2/pull_updates/1.1.mp4 b/Lab 2/pull_updates/1.1.mp4 new file mode 100644 index 0000000000..913a5c90e5 Binary files /dev/null and b/Lab 2/pull_updates/1.1.mp4 differ diff --git a/Lab 2/screen_clock.py b/Lab 2/screen_clock.py index aa3bfb93ec..df29ad4e58 100644 --- a/Lab 2/screen_clock.py +++ b/Lab 2/screen_clock.py @@ -29,6 +29,16 @@ y_offset=40, ) +buttonA = digitalio.DigitalInOut(board.D23) #GPI023 (PIN 16) +buttonB = digitalio.DigitalInOut(board.D24) #GPI024 (PIN 18) +# Use internal pull-ups; buttons then read LOW when pressed. + + +buttonA.switch_to_input(pull=digitalio.Pull.UP) +buttonB.switch_to_input(pull=digitalio.Pull.UP) + +diff = 0 #calculate the difference caused by pressing buttons + # Create blank image for drawing. 
# Make sure to create image with mode 'RGB' for full color. height = disp.width # we swap height/width to rotate it to landscape! @@ -53,7 +63,8 @@ # Alternatively load a TTF font. Make sure the .ttf font file is in the # same directory as the python script! # Some other nice fonts to try: http://www.dafont.com/bitmap.php -font = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", 18) +time_font = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf",32) +date_font = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", 18) # Turn on the backlight backlight = digitalio.DigitalInOut(board.D22) @@ -62,10 +73,65 @@ while True: # Draw a black filled box to clear the image. - draw.rectangle((0, 0, width, height), outline=0, fill=400) + draw.rectangle((0, 0, width, height), outline=0, fill=0) #TODO: Lab 2 part D work should be filled in here. You should be able to look in cli_clock.py and stats.py + a_pressed = (buttonA.value == False) + b_pressed = (buttonB.value == False) + + if a_pressed and b_pressed: + diff = 0 + elif a_pressed: + diff += 5 + elif b_pressed: + diff -= 5 + + t = time.localtime() + hour, min, sec = t.tm_hour, t.tm_min, t.tm_sec + min = min + diff + hour_adj = min // 60 + hour = hour + hour_adj + min = min % 60 + current_time = f"{hour:02d}:{min:02d}:{sec:02d}" + current_date = time.strftime("%m-%d-%Y") + alabel = "B:-5min" + blabel = "A:+5min" + + if diff > 0: + hint = f"Hurry! +{diff}" + elif diff < 0: + hint = f"Easy! -{-diff}" + else: + hint = "0" + + time_bbox = draw.textbbox((0,0), current_time, font=time_font) + time_width = time_bbox[2] - time_bbox[0] + time_height = time_bbox[3] - time_bbox[0] + + date_bbox = draw.textbbox((0,0), current_date, font=date_font) + date_width = date_bbox[2] - date_bbox[0] + date_height = date_bbox[3] - date_bbox[0] + + alabel_bbox = draw.textbbox((0,0), alabel, font=date_font) + alabel_width = alabel_bbox[2] - alabel_bbox[0] + alabel_height = alabel_bbox[3] - alabel_bbox[0] + + blabel_bbox = draw.textbbox((0,0), blabel, font=date_font) + blabel_width = blabel_bbox[2] - blabel_bbox[0] + blabel_height = blabel_bbox[3] - blabel_bbox[0] + + hint_bbox = draw.textbbox((0,0), hint, font=date_font) + hint_width = hint_bbox[2] - hint_bbox[0] + hint_height = hint_bbox[3] - hint_bbox[0] + + draw.text((width//2 - time_width//2, height//2 - time_height//2 - 20), current_time, font=time_font, fill="#FFFFFF") + draw.text((width//2 - date_width//2, height//2 + 10), current_date, font=date_font, fill="#FFFFFF") + draw.text((0, height - alabel_height), alabel, font=date_font, fill="#FFFFFF") + draw.text((0, 0), blabel, font=date_font, fill="#FFFFFF") + draw.text((width - hint_width, height - hint_height//2 - 10), hint, font=date_font, fill="#FFFFFF") + + # Display image. disp.image(image, rotation) - time.sleep(1) + time.sleep(0.1) diff --git a/Lab 3/README.md b/Lab 3/README.md index 25c6970386..0c03fb2e9b 100644 --- a/Lab 3/README.md +++ b/Lab 3/README.md @@ -1,40 +1,10 @@ # Chatterboxes -**NAMES OF COLLABORATORS HERE** -[![Watch the video](https://user-images.githubusercontent.com/1128669/135009222-111fe522-e6ba-46ad-b6dc-d1633d21129c.png)](https://www.youtube.com/embed/Q8FWzLMobx0?start=19) - -In this lab, we want you to design interaction with a speech-enabled device--something that listens and talks to you. This device can do anything *but* control lights (since we already did that in Lab 1). First, we want you first to storyboard what you imagine the conversational interaction to be like. 
Then, you will use wizarding techniques to elicit examples of what people might say, ask, or respond. We then want you to use the examples collected from at least two other people to inform the redesign of the device. - -We will focus on **audio** as the main modality for interaction to start; these general techniques can be extended to **video**, **haptics** or other interactive mechanisms in the second part of the Lab. - -## Prep for Part 1: Get the Latest Content and Pick up Additional Parts - -Please check instructions in [prep.md](prep.md) and complete the setup before class on Wednesday, Sept 23rd. - -### Pick up Web Camera If You Don't Have One - -Students who have not already received a web camera will receive their [Logitech C270 Webcam](https://www.amazon.com/Logitech-Desktop-Widescreen-Calling-Recording/dp/B004FHO5Y6/ref=sr_1_3?crid=W5QN79TK8JM7&dib=eyJ2IjoiMSJ9.FB-davgIQ_ciWNvY6RK4yckjgOCrvOWOGAG4IFaH0fczv-OIDHpR7rVTU8xj1iIbn_Aiowl9xMdeQxceQ6AT0Z8Rr5ZP1RocU6X8QSbkeJ4Zs5TYqa4a3C_cnfhZ7_ViooQU20IWibZqkBroF2Hja2xZXoTqZFI8e5YnF_2C0Bn7vtBGpapOYIGCeQoXqnV81r2HypQNUzFQbGPh7VqjqDbzmUoloFA2-QPLa5lOctA.L5ztl0wO7LqzxrIqDku9f96L9QrzYCMftU_YeTEJpGA&dib_tag=se&keywords=webcam%2Bc270&qid=1758416854&sprefix=webcam%2Bc270%2Caps%2C125&sr=8-3&th=1) and bluetooth speaker on Wednesday at the beginning of lab. If you cannot make it to class this week, please contact the TAs to ensure you get these. - -### Get the Latest Content - -As always, pull updates from the class Interactive-Lab-Hub to both your Pi and your own GitHub repo. There are 2 ways you can do so: - -**\[recommended\]**Option 1: On the Pi, `cd` to your `Interactive-Lab-Hub`, pull the updates from upstream (class lab-hub) and push the updates back to your own GitHub repo. You will need the *personal access token* for this. - -``` -pi@ixe00:~$ cd Interactive-Lab-Hub -pi@ixe00:~/Interactive-Lab-Hub $ git pull upstream Fall2025 -pi@ixe00:~/Interactive-Lab-Hub $ git add . -pi@ixe00:~/Interactive-Lab-Hub $ git commit -m "get lab3 updates" -pi@ixe00:~/Interactive-Lab-Hub $ git push -``` - -Option 2: On your your own GitHub repo, [create pull request](https://github.com/FAR-Lab/Developing-and-Designing-Interactive-Devices/blob/2022Fall/readings/Submitting%20Labs.md) to get updates from the class Interactive-Lab-Hub. After you have latest updates online, go on your Pi, `cd` to your `Interactive-Lab-Hub` and use `git pull` to get updates from your own GitHub repo. +Cast of the video: (poor guys tortured by the aferMATH deployer) +Estelle Zhang, Zirui Han ## Part 1. ### Setup - -Activate your virtual environment - +Activate your virtual environment. (I retain this because I will always use it) ``` pi@ixe00:~$ cd Interactive-Lab-Hub pi@ixe00:~/Interactive-Lab-Hub $ cd Lab\ 3 @@ -42,220 +12,133 @@ pi@ixe00:~/Interactive-Lab-Hub/Lab 3 $ python3 -m venv .venv pi@ixe00:~/Interactive-Lab-Hub $ source .venv/bin/activate (.venv)pi@ixe00:~/Interactive-Lab-Hub $ ``` - -Run the setup script -```(.venv)pi@ixe00:~/Interactive-Lab-Hub $ pip install -r requirements.txt ``` - -Next, run the setup script to install additional text-to-speech dependencies: -``` -(.venv)pi@ixe00:~/Interactive-Lab-Hub/Lab 3 $ ./setup.sh -``` - ### Text to Speech - -In this part of lab, we are going to start peeking into the world of audio on your Pi! - -We will be using the microphone and speaker on your webcamera. In the directory is a folder called `speech-scripts` containing several shell scripts. 
`cd` to the folder and list out all the files by `ls`: - -``` -pi@ixe00:~/speech-scripts $ ls -Download festival_demo.sh GoogleTTS_demo.sh pico2text_demo.sh -espeak_demo.sh flite_demo.sh lookdave.wav -``` - -You can run these shell files `.sh` by typing `./filename`, for example, typing `./espeak_demo.sh` and see what happens. Take some time to look at each script and see how it works. You can see a script by typing `cat filename`. For instance: - +\*\***Write your own shell file to use your favorite of these TTS engines to have your Pi greet you by name.**\*\* ``` -pi@ixe00:~/speech-scripts $ cat festival_demo.sh #from: https://elinux.org/RPi_Text_to_Speech_(Speech_Synthesis)#Festival_Text_to_Speech +echo "Good morning, oharyo, Zao Shang Hao, Gootten morgen, Eric Chen" | festival --tts ``` -You can test the commands by running -``` -echo "Just what do you think you're doing, Dave?" | festival --tts -``` - -Now, you might wonder what exactly is a `.sh` file? -Typically, a `.sh` file is a shell script which you can execute in a terminal. The example files we offer here are for you to figure out the ways to play with audio on your Pi! - -You can also play audio files directly with `aplay filename`. Try typing `aplay lookdave.wav`. - -\*\***Write your own shell file to use your favorite of these TTS engines to have your Pi greet you by name.**\*\* -(This shell file should be saved to your own repo for this lab.) - --- -Bonus: -[Piper](https://github.com/rhasspy/piper) is another fast neural based text to speech package for raspberry pi which can be installed easily through python with: -``` -pip install piper-tts -``` -and used from the command line. Running the command below the first time will download the model, concurrent runs will be faster. -``` -echo 'Welcome to the world of speech synthesis!' | piper \ - --model en_US-lessac-medium \ - --output_file welcome.wav -``` -Check the file that was created by running `aplay welcome.wav`. Many more languages are supported and audio can be streamed dirctly to an audio output, rather than into an file by: -``` -echo 'This sentence is spoken first. This sentence is synthesized while the first sentence is spoken.' | \ - piper --model en_US-lessac-medium --output-raw | \ - aplay -r 22050 -f S16_LE -t raw - -``` ### Speech to Text - -Next setup speech to text. We are using a speech recognition engine, [Vosk](https://alphacephei.com/vosk/), which is made by researchers at Carnegie Mellon University. Vosk is amazing because it is an offline speech recognition engine; that is, all the processing for the speech recognition is happening onboard the Raspberry Pi. - -Make sure you're running in your virtual environment with the dependencies already installed: -``` -source .venv/bin/activate -``` - -Test if vosk works by transcribing text: - -``` -vosk-transcriber -i recorded_mono.wav -o test.txt +\*\***Write your own shell file that verbally asks for a numerical based input (such as a phone number, zipcode, number of pets, etc) and records the answer the respondent provides.**\*\* ``` +#!/bin/bash +# ask_number_vosk.sh +# Ask a number, record it, and transcribe with Vosk -You can use vosk with the microphone by running -``` -python test_microphone.py -m en -``` +QUESTION="Please say a number, such as your phone number or zipcode." +DURATION=5 # seconds to record +OUTFILE="answer.wav" +RESULT="result.txt" ---- -Bonus: -[Whisper](https://openai.com/index/whisper/) is a neural network–based speech-to-text (STT) model developed and open-sourced by OpenAI. 
Compared to Vosk, Whisper generally achieves higher accuracy, particularly on noisy audio and diverse accents. It is available in multiple model sizes; for edge devices such as the Raspberry Pi 5 used in this class, the tiny.en model runs with reasonable latency even without a GPU. +# Speak the question +espeak "$QUESTION" -By contrast, Vosk is more lightweight and optimized for running efficiently on low-power devices like the Raspberry Pi. The choice between Whisper and Vosk depends on your scenario: if you need higher accuracy and can afford slightly more compute, Whisper is preferable; if your priority is minimal resource usage, Vosk may be a better fit. +echo "Recording for $DURATION seconds..." +arecord -d $DURATION -f cd -t wav "$OUTFILE" -In this class, we provide two Whisper options: A quantized 8-bit faster-whisper model for speed, and the standard Whisper model. Try them out and compare the trade-offs. +echo "Transcribing with Vosk..." +vosk-transcriber -i "$OUTFILE" -o "$RESULT" -Make sure you're in the Lab 3 directory with your virtual environment activated: -``` -cd ~/Interactive-Lab-Hub/Lab\ 3/speech-scripts -source ../.venv/bin/activate +echo "Transcription saved to $RESULT" +echo "Detected text:" +cat "$RESULT" ``` -Then test the Whisper models: -``` -python whisper_try.py -``` -and - -``` -python faster_whisper_try.py -``` -\*\***Write your own shell file that verbally asks for a numerical based input (such as a phone number, zipcode, number of pets, etc) and records the answer the respondent provides.**\*\* ### 🤖 NEW: AI-Powered Conversations with Ollama - -Want to add intelligent conversation capabilities to your voice projects? **Ollama** lets you run AI models locally on your Raspberry Pi for sophisticated dialogue without requiring internet connectivity! - #### Quick Start with Ollama - -**Installation** (takes ~5 minutes): -```bash -# Install Ollama -curl -fsSL https://ollama.com/install.sh | sh - -# Download recommended model for Pi 5 -ollama pull phi3:mini - -# Install system dependencies for audio (required for pyaudio) -sudo apt-get update -sudo apt-get install -y portaudio19-dev python3-dev - -# Create separate virtual environment for Ollama (due to pyaudio conflicts) -cd ollama/ -python3 -m venv ollama_venv -source ollama_venv/bin/activate - -# Install Python dependencies in separate environment -pip install -r ollama_requirements.txt -``` #### Ready-to-Use Scripts -We've created three Ollama integration scripts for different use cases: - -**1. Basic Demo** - Learn how Ollama works: -```bash -python3 ollama_demo.py -``` - -**2. Voice Assistant** - Full speech-to-text + AI + text-to-speech: -```bash -python3 ollama_voice_assistant.py -``` - -**3. Web Interface** - Beautiful web-based chat with voice options: -```bash -python3 ollama_web_app.py -# Then open: http://localhost:5000 -``` - #### Integration in Your Projects - -Simple example to add AI to any project: -```python -import requests - -def ask_ai(question): - response = requests.post( - "http://localhost:11434/api/generate", - json={"model": "phi3:mini", "prompt": question, "stream": False} - ) - return response.json().get('response', 'No response') - -# Use it anywhere! -answer = ask_ai("How should I greet users?") -``` - -**📖 Complete Setup Guide**: See `OLLAMA_SETUP.md` for detailed instructions, troubleshooting, and advanced usage! - \*\***Try creating a simple voice interaction that combines speech recognition, Ollama processing, and text-to-speech output. 
Document what you built and how users responded to it.**\*\* - -### Serving Pages - -In Lab 1, we served a webpage with flask. In this lab, you may find it useful to serve a webpage for the controller on a remote device. Here is a simple example of a webserver. - ``` -pi@ixe00:~/Interactive-Lab-Hub/Lab 3 $ python server.py - * Serving Flask app "server" (lazy loading) - * Environment: production - WARNING: This is a development server. Do not use it in a production deployment. - Use a production WSGI server instead. - * Debug mode: on - * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit) - * Restarting with stat - * Debugger is active! - * Debugger PIN: 162-573-883 +import speech_recognition as sr +import requests +import pyttsx3 + +# URL of the local Ollama API +OLLAMA_URL = "http://localhost:11434/api/generate" +# Name of the model you have pulled (change if needed) +MODEL = "llama3" + +def listen(): + """Capture microphone input and convert to text using Google Speech Recognition.""" + r = sr.Recognizer() + with sr.Microphone() as source: + print("Speak now...") + r.adjust_for_ambient_noise(source) # reduce background noise + audio = r.listen(source) + try: + text = r.recognize_google(audio) + print("You said:", text) + return text + except Exception as e: + print("Speech recognition failed:", e) + return "" + +def query_ollama(prompt): + """Send the recognized text to Ollama and collect the streamed response.""" + data = {"model": MODEL, "prompt": prompt} + resp = requests.post(OLLAMA_URL, json=data, stream=True) + reply = "" + for line in resp.iter_lines(): + if not line: + continue + part = line.decode("utf-8") + # Ollama streams JSON lines; extract the "response" field + if '"response":"' in part: + reply += part.split('"response":"')[1].split('"')[0] + print("Ollama:", reply) + return reply + +def speak(text): + """Speak the response aloud using offline TTS.""" + engine = pyttsx3.init() + engine.say(text) + engine.runAndWait() + +# Main loop: continuously listen → process → speak +print("Start talking (say 'exit' or 'quit' to stop)") +while True: + user_text = listen() + if not user_text: + continue + if user_text.lower() in ["exit", "quit", "stop"]: + break + answer = query_ollama(user_text) + speak(answer) ``` -From a remote browser on the same network, check to make sure your webserver is working by going to `http://:5000`. You should be able to see "Hello World" on the webpage. ### Storyboard Storyboard and/or use a Verplank diagram to design a speech-enabled device. (Stuck? Make a device that talks for dogs. If that is too stupid, find an application that is better than that.) \*\***Post your storyboard and diagram here.**\*\* +![633ed1c52f3e6f34551f16214cfc09c1](https://github.com/user-attachments/assets/82466cbe-d8f7-40f5-9147-08a31e8b3b93) + Write out what you imagine the dialogue to be. Use cards, post-its, or whatever method helps you develop alternatives or group responses. \*\***Please describe and document your process.**\*\* +![95918264d08586473d2e751919934381](https://github.com/user-attachments/assets/54358e9c-3650-4f6b-a449-78afd5768293) + ### Acting out the dialogue Find a partner, and *without sharing the script with your partner* try out the dialogue you've designed, where you (as the device designer) act as the device you are designing. Please record this interaction (for example, using Zoom's record feature). 
- +[Acting out](https://drive.google.com/video/captions/edit?id=1U9JBEsrq14Mv1NO6AZO3RXu17p5BgyGX) \*\***Describe if the dialogue seemed different than what you imagined when it was acted out, and how.**\*\* +The participant seems not to enjoy the arithmetics (that's their problems >:( ) and they some times will react with a mixture of words and numbers, because they are calculating, or they are initially confused by the question. -### Wizarding with the Pi (optional) -In the [demo directory](./demo), you will find an example Wizard of Oz project. In that project, you can see how audio and sensor data is streamed from the Pi to a wizard controller that runs in the browser. You may use this demo code as a template. By running the `app.py` script, you can see how audio and sensor data (Adafruit MPU-6050 6-DoF Accel and Gyro Sensor) is streamed from the Pi to a wizard controller that runs in the browser `http://:5000`. You can control what the system says from the controller as well! - -\*\***Describe if the dialogue seemed different than what you imagined, or when acted out, when it was wizarded, and how.**\*\* # Lab 3 Part 2 For Part 2, you will redesign the interaction with the speech-enabled device using the data collected, as well as feedback from part 1. +![ceddb4077bf1370bf7647e50cdba6596](https://github.com/user-attachments/assets/6795ee23-605c-4058-b945-5428bea37b84) + ## Prep for Part 2 @@ -263,49 +146,202 @@ For Part 2, you will redesign the interaction with the speech-enabled device usi 2. What are other modes of interaction _beyond speech_ that you might also use to clarify how to interact? 3. Make a new storyboard, diagram and/or script based on these reflections. -## Prototype your system - -The system should: -* use the Raspberry Pi -* use one or more sensors -* require participants to speak to it. 
- -*Document how the system works* +``` +# -*- coding: utf-8 -*- +import sounddevice as sd +import numpy as np +import json +from vosk import Model, KaldiRecognizer +import re +import time +import random +import pandas as pd +import subprocess +import os +from gtts import gTTS +import warnings +from word2number import w2n + +warnings.filterwarnings("ignore", message="FP16 is not supported on CPU") + +digits_vocab = [ + "zero", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine", + "ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen", "sixteen", + "seventeen", "eighteen", "nineteen", "twenty", "thirty", "forty", "fifty", + "sixty", "seventy", "eighty", "ninety"] # The stt models are all so bad that I have to use a limited word list + +# Settings +SAMPLE_RATE = 16000 # Whisper recommended sample rate +ANSWER_DURATION = 30 # max recording time in seconds + +# load vosk model once +model = Model("model") +rec = KaldiRecognizer(model, 16000, json.dumps(digits_vocab)) + +# Load CSV (must have 2 columns: Question, Answer) +questions = pd.read_csv("questions.csv") + +# Counters +count = 0 +win = 0 + +def record_audio(duration=ANSWER_DURATION): + # Record audio for a fixed duration + print(f"Recording for {duration} seconds...") + audio = sd.rec(int(duration * SAMPLE_RATE), + samplerate=SAMPLE_RATE, + channels=1, + dtype='int16') + sd.wait() + print("Recording finished.") + return np.squeeze(audio) + +def transcribe_and_extract_numbers(audio): + # Transcribe speech to text and extract numbers + print("Transcribing with Vosk...") + + # Ensure audio is PCM bytes (int16) + audio_bytes = audio.tobytes() + + if rec.AcceptWaveform(audio_bytes): + result = json.loads(rec.Result()) + else: + result = json.loads(rec.PartialResult()) + + text = result.get("text", "") + # Extract integers from the text + try: + number = w2n.word_to_num(text) + return number + except: + return None + +def ask_question(count): + """Select a random question and return index, question text, correct answer""" + if count < 6: + i = random.randint(0, 50) + d = 10 + elif count < 10: + i = random.randint(51, 99) + d= 15 + else: + i = 99 + d = 25 + q = questions.iloc[i, 0] # Question text + a = questions.iloc[i, 1] # Correct answer (as string) + return i, q, a, d + +def speak(text, filename="output_padded.mp3"): + """Speak text using gTTS, prepend 0.3s silence with ffmpeg, and play with mpg123""" + # Save raw mp3 + tts = gTTS(text=text, lang="en") + tts.save("output_raw.mp3") + + # Prepend 2s silence + subprocess.run([ + "ffmpeg", "-y", + "-f", "lavfi", "-i", "anullsrc=r=16000:cl=mono:d=2", + "-i", "output_raw.mp3", + "-filter_complex", "[0:a][1:a]concat=n=2:v=0:a=1[out]", + "-map", "[out]", + filename + ], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) + + # Play with mpg123 + subprocess.run(["mpg123", filename], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) + +def main(): + global count, win + speak("Hahahaha, I am After Math Deployer! Now torture begins!") + while True: + count += 1 + # Select a question + i, q, a, d= ask_question(count) + + # Speak the question + print(f"Question: {q}") + speak(q) + + # Record answer + audio = record_audio(d) + nums = transcribe_and_extract_numbers(audio) + + # Evaluate + print(nums) + if nums == a: # compare the extracted number + win += 1 + print("Correct!") + speak("Oh Pity! You made it right.") + else: + print(f"Wrong. Correct answer was: {a}") + speak("Heeheehee, Booboo! 
Wrong Answer, Hahaha!") + + print(f"Score: {win}/{count}") + + if count == 10: + print("Game over.") + speak("Enough for today!") + break + +if __name__ == "__main__": + main() +``` -*Include videos or screencaptures of both the system and the controller.* +## Prototype your system -
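One implementation note on the script above: the third argument to `KaldiRecognizer` is a JSON word list, which constrains Vosk to a small vocabulary and is what makes short spoken numbers recognizable at all. A minimal sketch of just that technique, assuming a Vosk model unpacked into `./model` and a 16 kHz mono recording called `answer.wav` (both are placeholder paths):

```python
# Minimal sketch: constrained-vocabulary recognition with Vosk.
# "model" and "answer.wav" are placeholders, not files shipped in this repo.
import json
import wave

from vosk import Model, KaldiRecognizer

NUMBER_WORDS = ["zero", "one", "two", "three", "four", "five",
                "six", "seven", "eight", "nine", "ten", "twenty"]

model = Model("model")
# Passing a JSON word list restricts decoding to those words only.
rec = KaldiRecognizer(model, 16000, json.dumps(NUMBER_WORDS))

with wave.open("answer.wav", "rb") as wf:
    while True:
        data = wf.readframes(4000)
        if not data:
            break
        rec.AcceptWaveform(data)

print(json.loads(rec.FinalResult()).get("text", ""))
```

Without the word list, the general language model tends to turn "forty two" into arbitrary phrases; with it, the recognizer can only answer with number words, which `word2number` then converts to an integer.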
- Submission Cleanup Reminder (Click to Expand) +[Myself testing the prototype](https://drive.google.com/video/captions/edit?id=1auHtaVb_D5dP7Y4rtOu4LpyDfaS5jWwF) - **Before submitting your README.md:** - - This readme.md file has a lot of extra text for guidance. - - Remove all instructional text and example prompts from this file. - - You may either delete these sections or use the toggle/hide feature in VS Code to collapse them for a cleaner look. - - Your final submission should be neat, focused on your own work, and easy to read for grading. - - This helps ensure your README.md is clear professional and uniquely yours! -
## Test the system Try to get at least two people to interact with your system. (Ideally, you would inform them that there is a wizard _after_ the interaction, but we recognize that can be hard.) +[Test Video, starred by Estelle](https://drive.google.com/video/captions/edit?id=1HtOWd5qiqxzV8Nk49G2B0hetfA2mPaQ_) + +[Test Video, starred by Zirui](https://drive.google.com/video/captions/edit?id=1pMhvXTp2bB1YKboAv1vLF-kOwPaE2E9H) + Answer the following: ### What worked well about the system and what didn't? -\*\**your answer here*\*\* +√ The questions are broadcast well out. + +√ The numbers are abstracted from complicated sentences. + +X The numbers are misrecognized when the participant does not stay close to the mic. + +X Some initial words are not completely spoken by the tts. + ### What worked well about the controller and what didn't? +√ Everything goes automatically well, according to the difficulty level. -\*\**your answer here*\*\* +X Sometimes the final question show up earlier, and sometimes the question repeats. ### What lessons can you take away from the WoZ interactions for designing a more autonomous version of the system? -\*\**your answer here*\*\* - +There could be more interactions like inserted dialogues. eg. "Pardon?", "What's the correct answer?" ### How could you use your system to create a dataset of interaction? What other sensing modalities would make sense to capture? +answer : video and audio abstract + +action : continue, stop, repeat, give correct answers... + +mood analysis : freq analysis of audio... + +A video camera can be used to read the answer, since sometimes participants read when they valculate and may affect the stt process. + + + + + + + + + + + + + -\*\**your answer here*\*\* diff --git a/Lab 4/README.md b/Lab 4/README.md index afbb46ed98..4ce6d3f0ed 100644 --- a/Lab 4/README.md +++ b/Lab 4/README.md @@ -1,498 +1,599 @@ - -# Ph-UI!!! - -
- Instructions for Students (Click to Expand) - - **Submission Cleanup Reminder:** - - This README.md contains extra instructional text for guidance. - - Before submitting, remove all instructional text and example prompts from this file. - - You may delete these sections or use the toggle/hide feature in VS Code to collapse them for a cleaner look. - - Your final submission should be neat, focused on your own work, and easy to read for grading. - - This helps ensure your README.md is clear, professional, and uniquely yours! -
- ---- - -## Lab 4 Deliverables - -### Part 1 (Week 1) -**Submit the following for Part 1:** -*️⃣ **A. Capacitive Sensing** - - Photos/videos of your Twizzler (or other object) capacitive sensor setup - - Code and terminal output showing touch detection - -*️⃣ **B. More Sensors** - - Photos/videos of each sensor tested (light/proximity, rotary encoder, joystick, distance sensor) - - Code and terminal output for each sensor - -*️⃣ **C. Physical Sensing Design** - - 5 sketches of different ways to use your chosen sensor - - Written reflection: questions raised, what to prototype - - Pick one design to prototype and explain why - -*️⃣ **D. Display & Housing** - - 5 sketches for display/button/knob positioning - - Written reflection: questions raised, what to prototype - - Pick one display design to integrate - - Rationale for design - - Photos/videos of your cardboard prototype - ---- - -### Part 2 (Week 2) -**Submit the following for Part 2:** -*️⃣ **E. Multi-Device Demo** - - Code and video for your multi-input multi-output demo (e.g., chaining Qwiic buttons, servo, GPIO expander, etc.) - - Reflection on interaction effects and chaining - -*️⃣ **F. Final Documentation** - - Photos/videos of your final prototype - - Written summary: what it looks like, works like, acts like - - Reflection on what you learned and next steps - ---- - -## Lab Overview -**NAMES OF COLLABORATORS HERE** - - -For lab this week, we focus both on sensing, to bring in new modes of input into your devices, as well as prototyping the physical look and feel of the device. You will think about the physical form the device needs to perform the sensing as well as present the display or feedback about what was sensed. - -## Part 1 Lab Preparation - -### Get the latest content: -As always, pull updates from the class Interactive-Lab-Hub to both your Pi and your own GitHub repo. As we discussed in the class, there are 2 ways you can do so: - - -Option 1: On the Pi, `cd` to your `Interactive-Lab-Hub`, pull the updates from upstream (class lab-hub) and push the updates back to your own GitHub repo. You will need the personal access token for this. -``` -pi@ixe00:~$ cd Interactive-Lab-Hub -pi@ixe00:~/Interactive-Lab-Hub $ git pull upstream Fall2025 -pi@ixe00:~/Interactive-Lab-Hub $ git add . -pi@ixe00:~/Interactive-Lab-Hub $ git commit -m "get lab4 content" -pi@ixe00:~/Interactive-Lab-Hub $ git push -``` - -Option 2: On your own GitHub repo, [create pull request](https://github.com/FAR-Lab/Developing-and-Designing-Interactive-Devices/blob/2021Fall/readings/Submitting%20Labs.md) to get updates from the class Interactive-Lab-Hub. After you have latest updates online, go on your Pi, `cd` to your `Interactive-Lab-Hub` and use `git pull` to get updates from your own GitHub repo. - -Option 3: (preferred) use the Github.com interface to update the changes. - -### Start brainstorming ideas by reading: - -* [What do prototypes prototype?](https://www.semanticscholar.org/paper/What-do-Prototypes-Prototype-Houde-Hill/30bc6125fab9d9b2d5854223aeea7900a218f149) -* [Paper prototyping](https://www.uxpin.com/studio/blog/paper-prototyping-the-practical-beginners-guide/) is used by UX designers to quickly develop interface ideas and run them by people before any programming occurs. -* [Cardboard prototypes](https://www.youtube.com/watch?v=k_9Q-KDSb9o) help interactive product designers to work through additional issues, like how big something should be, how it could be carried, where it would sit. 
-* [Tips to Cut, Fold, Mold and Papier-Mache Cardboard](https://makezine.com/2016/04/21/working-with-cardboard-tips-cut-fold-mold-papier-mache/) from Make Magazine. -* [Surprisingly complicated forms](https://www.pinterest.com/pin/50032245843343100/) can be built with paper, cardstock or cardboard. The most advanced and challenging prototypes to prototype with paper are [cardboard mechanisms](https://www.pinterest.com/helgangchin/paper-mechanisms/) which move and change. -* [Dyson Vacuum Cardboard Prototypes](http://media.dyson.com/downloads/JDF/JDF_Prim_poster05.pdf) -

- -### Gathering materials for this lab: - -* Cardboard (start collecting those shipping boxes!) -* Found objects and materials--like bananas and twigs. -* Cutting board -* Cutting tools -* Markers - - -(We do offer shared cutting board, cutting tools, and markers on the class cart during the lab, so do not worry if you don't have them!) - -## Deliverables \& Submission for Lab 4 - -The deliverables for this lab are, writings, sketches, photos, and videos that show what your prototype: -* "Looks like": shows how the device should look, feel, sit, weigh, etc. -* "Works like": shows what the device can do. -* "Acts like": shows how a person would interact with the device. - -For submission, the readme.md page for this lab should be edited to include the work you have done: -* Upload any materials that explain what you did, into your lab 4 repository, and link them in your lab 4 readme.md. -* Link your Lab 4 readme.md in your main Interactive-Lab-Hub readme.md. -* Labs are due on Mondays, make sure to submit your Lab 4 readme.md to Canvas. - - -## Lab Overview - -A) [Capacitive Sensing](#part-a) - -B) [OLED screen](#part-b) - -C) [Paper Display](#part-c) - -D) [Materiality](#part-d) - -E) [Servo Control](#part-e) - -F) [Record the interaction](#part-f) - - -## The Report (Part 1: A-D, Part 2: E-F) - -### Quick Start: Python Environment Setup - -1. **Create and activate a virtual environment in Lab 4:** - ```bash - cd ~/Interactive-Lab-Hub/Lab\ 4 - python3 -m venv .venv - source .venv/bin/activate - ``` -2. **Install all Lab 4 requirements:** - ```bash - pip install -r requirements2025.txt - ``` -3. **Check CircuitPython Blinka installation:** - ```bash - python blinkatest.py - ``` - If you see "Hello blinka!", your setup is correct. If not, follow the troubleshooting steps in the file or ask for help. - -### Part A -### Capacitive Sensing, a.k.a. Human-Twizzler Interaction - -We want to introduce you to the [capacitive sensor](https://learn.adafruit.com/adafruit-mpr121-gator) in your kit. It's one of the most flexible input devices we are able to provide. At boot, it measures the capacitance on each of the 12 contacts. Whenever that capacitance changes, it considers it a user touch. You can attach any conductive material. In your kit, you have copper tape that will work well, but don't limit yourself! In the example below, we use Twizzlers--you should pick your own objects. - - -
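The provided `cap_test.py` handles the sensor setup for you; as a rough illustration of what it does under the hood, here is a minimal touch-polling sketch using the Adafruit MPR121 CircuitPython library (the 0.1 s poll interval is an arbitrary choice):

```python
# Minimal sketch: poll the MPR121 capacitive touch breakout over I2C.
# Clip any conductive object (copper tape, Twizzlers, a banana) to a pad.
import time

import board
import adafruit_mpr121

i2c = board.I2C()                 # the Pi's SDA/SCL pins, shared with Qwiic
mpr121 = adafruit_mpr121.MPR121(i2c)

while True:
    for pad in range(12):         # the breakout exposes 12 touch pads
        if mpr121[pad].value:     # True while the pad is being touched
            print(f"Pad {pad} touched!")
    time.sleep(0.1)
```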

- -Plug in the capacitive sensor board with the QWIIC connector. Connect your Twizzlers with either the copper tape or the alligator clips (the clips work better). Install the latest requirements from your working virtual environment: - -These Twizzlers are connected to pads 6 and 10. When you run the code and touch a Twizzler, the terminal will print out the following - -``` -(circuitpython) pi@ixe00:~/Interactive-Lab-Hub/Lab 4 $ python cap_test.py -Twizzler 10 touched! -Twizzler 6 touched! -``` - -### Part B -### More sensors - -#### Light/Proximity/Gesture sensor (APDS-9960) - -We here want you to get to know this awesome sensor [Adafruit APDS-9960](https://www.adafruit.com/product/3595). It is capable of sensing proximity, light (also RGB), and gesture! - - - - -Connect it to your pi with Qwiic connector and try running the three example scripts individually to see what the sensor is capable of doing! - -``` -(circuitpython) pi@ixe00:~/Interactive-Lab-Hub/Lab 4 $ python proximity_test.py -... -(circuitpython) pi@ixe00:~/Interactive-Lab-Hub/Lab 4 $ python gesture_test.py -... -(circuitpython) pi@ixe00:~/Interactive-Lab-Hub/Lab 4 $ python color_test.py -... -``` - -You can go the the [Adafruit GitHub Page](https://github.com/adafruit/Adafruit_CircuitPython_APDS9960) to see more examples for this sensor! - -#### Rotary Encoder - -A rotary encoder is an electro-mechanical device that converts the angular position to analog or digital output signals. The [Adafruit rotary encoder](https://www.adafruit.com/product/4991#technical-details) we ordered for you came with separate breakout board and encoder itself, that is, they will need to be soldered if you have not yet done so! We will be bringing the soldering station to the lab class for you to use, also, you can go to the MakerLAB to do the soldering off-class. Here is some [guidance on soldering](https://learn.adafruit.com/adafruit-guide-excellent-soldering/preparation) from Adafruit. When you first solder, get someone who has done it before (ideally in the MakerLAB environment). It is a good idea to review this material beforehand so you know what to look at. - -
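As a rough sketch of what `encoder_test.py` is doing, the breakout is read through the Adafruit seesaw CircuitPython library; the 0x36 address and pin 24 for the push switch are the breakout's documented defaults, but verify them against your board:

```python
# Minimal sketch: read the Adafruit I2C QT rotary encoder (seesaw-based).
import time

import board
from adafruit_seesaw import seesaw, rotaryio, digitalio

i2c = board.I2C()
ss = seesaw.Seesaw(i2c, addr=0x36)          # default breakout address

encoder = rotaryio.IncrementalEncoder(ss)   # counts detents as you turn
ss.pin_mode(24, ss.INPUT_PULLUP)            # the encoder's built-in push switch
button = digitalio.DigitalIO(ss, 24)

last_position = None
while True:
    position = -encoder.position            # negate so clockwise counts up
    if position != last_position:
        last_position = position
        print("Position:", position)
    if not button.value:                    # pulled low while pressed
        print("Button pressed")
    time.sleep(0.01)
```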

- -Connect it to your pi with Qwiic connector and try running the example script, it comes with an additional button which might be useful for your design! - -``` -(circuitpython) pi@ixe00:~/Interactive-Lab-Hub/Lab 4 $ python encoder_test.py -``` - -You can go to the [Adafruit Learn Page](https://learn.adafruit.com/adafruit-i2c-qt-rotary-encoder/python-circuitpython) to learn more about the sensor! The sensor actually comes with an LED (neo pixel): Can you try lighting it up? - -#### Joystick - - -A [joystick](https://www.sparkfun.com/products/15168) can be used to sense and report the input of the stick for it pivoting angle or direction. It also comes with a button input! - -
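A rough reading loop, assuming SparkFun's `qwiic_joystick` Python package (the property names below follow SparkFun's own examples; double-check them against `joystick_test.py`):

```python
# Minimal sketch: poll the SparkFun Qwiic Joystick for position and button state.
import time

import qwiic_joystick

joystick = qwiic_joystick.QwiicJoystick()
if not joystick.connected:
    raise SystemExit("Qwiic Joystick not found on the I2C bus")
joystick.begin()

while True:
    # horizontal/vertical are 10-bit readings (roughly 0-1023, ~512 at rest)
    print("X:", joystick.horizontal,
          "Y:", joystick.vertical,
          "Button:", joystick.button)      # 0 while the stick is pressed in
    time.sleep(0.2)
```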

- -Connect it to your pi with Qwiic connector and try running the example script to see what it can do! - -``` -(circuitpython) pi@ixe00:~/Interactive-Lab-Hub/Lab 4 $ python joystick_test.py -``` - -You can go to the [SparkFun GitHub Page](https://github.com/sparkfun/Qwiic_Joystick_Py) to learn more about the sensor! - -#### Distance Sensor - - -Earlier we have asked you to play with the proximity sensor, which is able to sense objects within a short distance. Here, we offer [Sparkfun Proximity Sensor Breakout](https://www.sparkfun.com/products/15177), With the ability to detect objects up to 20cm away. - -
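A rough polling sketch, assuming SparkFun's `qwiic_proximity` package for this VCNL4040-based breakout (the method names follow SparkFun's examples; treat them as assumptions and compare against `qwiic_distance.py`):

```python
# Minimal sketch: read raw proximity counts from the SparkFun VCNL4040 breakout.
import time

import qwiic_proximity

sensor = qwiic_proximity.QwiicProximity()
if not sensor.connected:
    raise SystemExit("Qwiic Proximity sensor not found on the I2C bus")
sensor.begin()

while True:
    # Larger values mean a closer object; the reading is in unitless counts,
    # so calibrate against known distances once the sensor is in its housing.
    print("Proximity:", sensor.get_proximity())
    time.sleep(0.2)
```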

- -Connect it to your pi with Qwiic connector and try running the example script to see how it works! - -``` -(circuitpython) pi@ixe00:~/Interactive-Lab-Hub/Lab 4 $ python qwiic_distance.py -``` - -You can go to the [SparkFun GitHub Page](https://github.com/sparkfun/Qwiic_Proximity_Py) to learn more about the sensor and see other examples - -### Part C -### Physical considerations for sensing - - -Usually, sensors need to be positioned in specific locations or orientations to make them useful for their application. Now that you've tried a bunch of the sensors, pick one that you would like to use, and an application where you use the output of that sensor for an interaction. For example, you can use a distance sensor to measure someone's height if you position it overhead and get them to stand under it. - - -**\*\*\*Draw 5 sketches of different ways you might use your sensor, and how the larger device needs to be shaped in order to make the sensor useful.\*\*\*** - -**\*\*\*What are some things these sketches raise as questions? What do you need to physically prototype to understand how to anwer those questions?\*\*\*** - -**\*\*\*Pick one of these designs to prototype.\*\*\*** - - -### Part D -### Physical considerations for displaying information and housing parts - - - -Here is a Pi with a paper faceplate on it to turn it into a display interface: - - - - - -This is fine, but the mounting of the display constrains the display location and orientation a lot. Also, it really only works for applications where people can come and stand over the Pi, or where you can mount the Pi to the wall. - -Here is another prototype for a paper display: - - - - -Your kit includes these [SparkFun Qwiic OLED screens](https://www.sparkfun.com/products/17153). These use less power than the MiniTFTs you have mounted on the GPIO pins of the Pi, but, more importantly, they can be more flexibly mounted elsewhere on your physical interface. The way you program this display is almost identical to the way you program a Pi display. Take a look at `oled_test.py` and some more of the [Adafruit examples](https://github.com/adafruit/Adafruit_CircuitPython_SSD1306/tree/master/examples). - -
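A minimal drawing sketch for an SSD1306-based Qwiic OLED, using the Adafruit CircuitPython library together with Pillow (the 128×32 resolution and 0x3C address are common defaults; adjust both for your particular screen):

```python
# Minimal sketch: draw a line of text on a Qwiic SSD1306 OLED over I2C.
import board
import adafruit_ssd1306
from PIL import Image, ImageDraw, ImageFont

i2c = board.I2C()
# Resolution and address vary by screen; 128x32 at 0x3C is a common default.
oled = adafruit_ssd1306.SSD1306_I2C(128, 32, i2c, addr=0x3C)

oled.fill(0)          # clear whatever was on the display
oled.show()

# Draw into a 1-bit Pillow image, then push the whole frame to the display.
image = Image.new("1", (oled.width, oled.height))
draw = ImageDraw.Draw(image)
draw.text((0, 10), "Sensor: ready", font=ImageFont.load_default(), fill=255)
oled.image(image)
oled.show()
```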

- - -It holds a Pi and usb power supply, and provides a front stage on which to put writing, graphics, LEDs, buttons or displays. - -This design can be made by scoring a long strip of corrugated cardboard of width X, with the following measurements: - -| Y height of box
- thickness of cardboard | Z depth of box
- thickness of cardboard | Y height of box | Z depth of box | H height of faceplate
* * * * * (don't make this too short) * * * * *| -| --- | --- | --- | --- | --- | - -Fold the first flap of the strip so that it sits flush against the back of the face plate, and tape, velcro or hot glue it in place. This will make a H x X interface, with a box of Z x X footprint (which you can adapt to the things you want to put in the box) and a height Y in the back. - -Here is an example: - - - -Think about how you want to present the information about what your sensor is sensing! Design a paper display for your project that communicates the state of the Pi and a sensor. Ideally you should design it so that you can slide the Pi out to work on the circuit or programming, and then slide it back in and reattach a few wires to be back in operation. - -**\*\*\*Sketch 5 designs for how you would physically position your display and any buttons or knobs needed to interact with it.\*\*\*** - -**\*\*\*What are some things these sketches raise as questions? What do you need to physically prototype to understand how to anwer those questions?\*\*\*** - -**\*\*\*Pick one of these display designs to integrate into your prototype.\*\*\*** - -**\*\*\*Explain the rationale for the design.\*\*\*** (e.g. Does it need to be a certain size or form or need to be able to be seen from a certain distance?) - -Build a cardboard prototype of your design. - - -**\*\*\*Document your rough prototype.\*\*\*** - - -# LAB PART 2 - -### Part 2 - -Following exploration and reflection from Part 1, complete the "looks like," "works like" and "acts like" prototypes for your design, reiterated below. - - - -### Part E - -#### Chaining Devices and Exploring Interaction Effects - -For Part 2, you will design and build a fun interactive prototype using multiple inputs and outputs. This means chaining Qwiic and STEMMA QT devices (e.g., buttons, encoders, sensors, servos, displays) and/or combining with traditional breadboard prototyping (e.g., LEDs, buzzers, etc.). - -**Your prototype should:** -- Combine at least two different types of input and output devices, inspired by your physical considerations from Part 1. -- Be playful, creative, and demonstrate multi-input/multi-output interaction. - -**Document your system with:** -- Code for your multi-device demo -- Photos and/or video of the working prototype in action -- A simple interaction diagram or sketch showing how inputs and outputs are connected and interact -- Written reflection: What did you learn about multi-input/multi-output interaction? What was fun, surprising, or challenging? - -**Questions to consider:** -- What new types of interaction become possible when you combine two or more sensors or actuators? -- How does the physical arrangement of devices (e.g., where the encoder or sensor is placed) change the user experience? -- What happens if you use one device to control or modulate another (e.g., encoder sets a threshold, sensor triggers an action)? -- How does the system feel if you swap which device is "primary" and which is "secondary"? - -Try chaining different combinations and document what you discover! - -See encoder_accel_servo_dashboard.py in the Lab 4 folder for an example of chaining together three devices. - -**`Lab 4/encoder_accel_servo_dashboard.py`** - -#### Using Multiple Qwiic Buttons: Changing I2C Address (Physically & Digitally) - -If you want to use more than one Qwiic Button in your project, you must give each button a unique I2C address. There are two ways to do this: - -##### 1. 
Physically: Soldering Address Jumpers - -On the back of the Qwiic Button, you'll find four solder jumpers labeled A0, A1, A2, and A3. By bridging these with solder, you change the I2C address. Only one button on the chain can use the default address (0x6F). - -**Address Table:** - -| A3 | A2 | A1 | A0 | Address (hex) | -|----|----|----|----|---------------| -| 0 | 0 | 0 | 0 | 0x6F | -| 0 | 0 | 0 | 1 | 0x6E | -| 0 | 0 | 1 | 0 | 0x6D | -| 0 | 0 | 1 | 1 | 0x6C | -| 0 | 1 | 0 | 0 | 0x6B | -| 0 | 1 | 0 | 1 | 0x6A | -| 0 | 1 | 1 | 0 | 0x69 | -| 0 | 1 | 1 | 1 | 0x68 | -| 1 | 0 | 0 | 0 | 0x67 | -| ...| ...| ...| ... | ... | - -For example, if you solder A0 closed (leave A1, A2, A3 open), the address becomes 0x6E. - -**Soldering Tips:** -- Use a small amount of solder to bridge the pads for the jumper you want to close. -- Only one jumper needs to be closed for each address change (see table above). -- Power cycle the button after changing the jumper. - -##### 2. Digitally: Using Software to Change Address - -You can also change the address in software (temporarily or permanently) using the example script `qwiic_button_ex6_changeI2CAddress.py` in the Lab 4 folder. This is useful if you want to reassign addresses without soldering. - -Run the script and follow the prompts: -```bash -python qwiic_button_ex6_changeI2CAddress.py -``` -Enter the new address (e.g., 5B for 0x5B) when prompted. Power cycle the button after changing the address. - -**Note:** The software method is less foolproof and you need to make sure to keep track of which button has which address! - - -##### Using Multiple Buttons in Code - -After setting unique addresses, you can use multiple buttons in your script. See these example scripts in the Lab 4 folder: - -- **`qwiic_1_button.py`**: Basic example for reading a single Qwiic Button (default address 0x6F). Run with: - ```bash - python qwiic_1_button.py - ``` - -- **`qwiic_button_led_demo.py`**: Demonstrates using two Qwiic Buttons at different addresses (e.g., 0x6F and 0x6E) and controlling their LEDs. Button 1 toggles its own LED; Button 2 toggles both LEDs. Run with: - ```bash - python qwiic_button_led_demo.py - ``` - -Here is a minimal code example for two buttons: -```python -import qwiic_button - -# Default button (0x6F) -button1 = qwiic_button.QwiicButton() -# Button with A0 soldered (0x6E) -button2 = qwiic_button.QwiicButton(0x6E) - -button1.begin() -button2.begin() - -while True: - if button1.is_button_pressed(): - print("Button 1 pressed!") - if button2.is_button_pressed(): - print("Button 2 pressed!") -``` - -For more details, see the [Qwiic Button Hookup Guide](https://learn.sparkfun.com/tutorials/qwiic-button-hookup-guide/all#i2c-address). - ---- - -### PCF8574 GPIO Expander: Add More Pins Over I²C - -Sometimes your Pi’s header GPIO pins are already full (e.g., with a display or HAT). That’s where an I²C GPIO expander comes in handy. - -We use the Adafruit PCF8574 I²C GPIO Expander, which gives you 8 extra digital pins over I²C. It’s a great way to prototype with LEDs, buttons, or other components on the breadboard without worrying about pin conflicts—similar to how Arduino users often expand their pinouts when prototyping physical interactions. - -**Why is this useful?** -- You only need two wires (I²C: SDA + SCL) to unlock 8 extra GPIOs. -- It integrates smoothly with CircuitPython and Blinka. -- It allows a clean prototyping workflow when the Pi’s 40-pin header is already occupied by displays, HATs, or sensors. 
-- Makes breadboard setups feel more like an Arduino-style prototyping environment where it’s easy to wire up interaction elements. - -**Demo Script:** `Lab 4/gpio_expander.py` - -

- GPIO Expander LED Demo -

- -We connected 8 LEDs (through 220 Ω resistors) to the expander and ran a little light show. The script cycles through three patterns: -- Chase (one LED at a time, left to right) -- Knight Rider (back-and-forth sweep) -- Disco (random blink chaos) - -Every few runs, the script swaps to the next pattern automatically: -```bash -python gpio_expander.py -``` - -This is a playful way to visualize how the expander works, but the same technique applies if you wanted to prototype buttons, switches, or other interaction elements. It’s a lightweight, flexible addition to your prototyping toolkit. - ---- - -### Servo Control with SparkFun Servo pHAT -For this lab, you will use the **SparkFun Servo pHAT** to control a micro servo (such as the Miuzei MS18 or similar 9g servo). The Servo pHAT stacks directly on top of the Adafruit Mini PiTFT (135×240) display without pin conflicts: -- The Mini PiTFT uses SPI (GPIO22, 23, 24, 25) for display and buttons ([SPI pinout](https://pinout.xyz/pinout/spi)). -- The Servo pHAT uses I²C (GPIO2 & 3) for the PCA9685 servo driver ([I2C pinout](https://pinout.xyz/pinout/i2c)). -- Since SPI and I²C are separate buses, you can use both boards together. -**⚡ Power:** -- Plug a USB-C cable into the Servo pHAT to provide enough current for the servos. The Pi itself should still be powered by its own USB-C supply. Do NOT power servos from the Pi’s 5V rail. - -

- Servo pHAT Demo -

- -**Basic Python Example:** -We provide a simple example script: `Lab 4/pi_servo_hat_test.py` (requires the `pi_servo_hat` Python package). -Run the example: -``` -python pi_servo_hat_test.py -``` -For more details and advanced usage, see the [official SparkFun Servo pHAT documentation](https://learn.sparkfun.com/tutorials/pi-servo-phat-v2-hookup-guide/all#resources-and-going-further). -A servo motor is a rotary actuator that allows for precise control of angular position. The position is set by the width of an electrical pulse (PWM). You can read [this Adafruit guide](https://learn.adafruit.com/adafruit-arduino-lesson-14-servo-motors/servo-motors) to learn more about how servos work. - ---- - - -### Part F - -### Record - -Document all the prototypes and iterations you have designed and worked on! Again, deliverables for this lab are writings, sketches, photos, and videos that show what your prototype: -* "Looks like": shows how the device should look, feel, sit, weigh, etc. -* "Works like": shows what the device can do -* "Acts like": shows how a person would interact with the device - +# Ph-UI!!! + +Team Members: JUNXIONG CHEN (jc3828), Chiahsuan Chang (cc2952) + + +## Part 1 +**C. Physical Sensing Design** +- 5 sketches of different ways to use your chosen sensor +- Written reflection: questions raised, what to prototype +- Pick one design to prototype and explain why + ![eed8b497daff0d0dfb379a166a68d68d](https://github.com/user-attachments/assets/c49ca762-9313-4a61-b83e-e84d9570ae66) + ![fa1d78fa387783a4802c4149ea772e10](https://github.com/user-attachments/assets/86ea7c25-36e7-49be-94aa-8180014b4e43) + +Sketch 1: I love to play old games, but it's not always goos using simulator on PC can be difficult because PC has no joysticks and keys are arranged different from gamepads. However, old game pads have various forms and buying all kinds of them can be time- and money-consuming. So I design this all-in-one gamepad that can reflect to 2-, 4- and 6-button old game pads at the same time. + +Sketch 2: As a fan of iM@s, I really want to create a machine that can summon my favorite idols whenever I want, so I design this "magic mirror". It is activated when I approach, and can broadcast different voices and songs subject to my manipulation. + +Sketch 3: Fighting games are hard to begin with, and I can not always live in an arcade and spend infinite money to the game box, so there should be a simulator that can train my combo at home. + +Sketch 4: I can't play piano well, and but most other common instruments have only one part. This is an easy keyboard that can play with two parts: when a key is pressed, the chord (determined by rotary encoder) and the melody (determined by twizzer touches) will be broadcast at the same time. + +Sketch 5: I want to play Taiko no Tatsujin at home. That's it. + +[Prototype Sketch2](https://drive.google.com/video/captions/edit?id=1lF1JMMllSmGIzXxutQTKquUQsPW7Cff6) + + **D. Display & Housing** +- 5 sketches for display/button/knob positioning +- Written reflection: questions raised, what to prototype +- Pick one display design to integrate +- Rationale for design +- Photos/videos of your cardboard prototype +![7cea37049b2e1ca64f6724db852bdf66](https://github.com/user-attachments/assets/3b888171-1b8d-48bc-8418-62dc93fc4963) + + +## Part 2 +Document all the prototypes and iterations you have designed and worked on! 
Again, deliverables for this lab are writings, sketches, photos, and videos that show what your prototype:
+* "Looks like": shows how the device should look, feel, sit, weigh, etc.
+* "Works like": shows what the device can do
+* "Acts like": shows how a person would interact with the device
+
+![game_rule.JPG](game_rule.JPG)
+![cardboard.jpg](cardboard.jpg)
+Implementation: [`elemental_fury.py`](./elemental_fury.py)
+
+### Elemental Fury
+#### Goal
+
+The Attacker (Player 1) tries to defeat the Defender (Player 2). The Defender's goal is to survive as long as possible.
+
+#### How to Play
+
+This is a two-player attack-and-defend game.
+
+Player 1 chooses an attack (Fireball, Lightning, Earthquake, or Slime).
+
+Player 2 must perform the correct defense action with the matching sensor before time runs out.
+
+If Player 2 fails to defend, they lose 1 HP.
+
+The game continues until Player 2 is defeated.
+
+#### Attacks & Defenses
+
+* Attack: Fireball. Defense: Player 2 must push the Joystick to dodge the fireball.
+
+* Attack: Earthquake. Defense: Player 2 must lift the "castle" piece off the Proximity Sensor.
+
+* Attack: Lightning. Defense: Player 2 must cover the Light Sensor (APDS-9960) to block the strike.
+
+* Attack: Slime. Defense: Player 2 must quickly turn the Rotary Encoder to shake off the slime.
+
+#### How to Win
+
+Player 1 (Attacker) wins when Player 2's HP reaches 0.
+
+Player 2 (Defender) loses when their HP reaches 0.
+
+[Video 1: Attacker & Defense](https://drive.google.com/file/d/1CRKOgDtgnIum2S0XdNr4v78AApqGrhei/view?usp=sharing)
+
+[Video 2: Defense](https://drive.google.com/file/d/1phGfNs0n2r9mEY8ZtBd8uP5Aua77Bdg_/view?usp=sharing)
+
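+
+The core of `elemental_fury.py` boils down to the loop sketched below: each capacitive pad launches one attack, the defender gets a two-second window, and only the matching sensor counts as a block. The sensor helpers (`joystick()`, `prox_sensor()`, `light_sensor()`, `rotary()`), the `display()` and `draw_hp_bar()` functions, and the `mpr121` setup are all defined in the full script; this is just a condensed sketch of the game logic.
+
+```python
+import time
+
+# One entry per attack: (capacitive pad, attack image, correct defense, wrong defenses)
+ATTACKS = [
+    (0, "lightning.png",  light_sensor, (joystick, prox_sensor, rotary)),
+    (1, "fireball.png",   joystick,     (light_sensor, prox_sensor, rotary)),
+    (2, "earthquake.png", prox_sensor,  (joystick, light_sensor, rotary)),
+    (3, "slime.png",      rotary,       (joystick, light_sensor, prox_sensor)),
+]
+
+hp = 5
+while hp > 0:
+    draw_hp_bar(hp)
+    for pad, image, correct, wrong in ATTACKS:
+        if not mpr121[pad].value:          # attacker has not touched this pad
+            continue
+        display(image)
+        blocked = False
+        deadline = time.time() + 2         # two-second defense window
+        while time.time() < deadline:
+            if any(w() for w in wrong):    # wrong sensor ends the attempt
+                break
+            if correct():                  # right sensor blocks the attack
+                blocked = True
+                display("victory.png")
+                break
+        if not blocked:
+            hp -= 1                        # failed defense costs one HP
+display("skeleton.png")                    # defender is defeated
+```
+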
+ Instructions for Students (Click to Expand) + + **Submission Cleanup Reminder:** + - This README.md contains extra instructional text for guidance. + - Before submitting, remove all instructional text and example prompts from this file. + - You may delete these sections or use the toggle/hide feature in VS Code to collapse them for a cleaner look. + - Your final submission should be neat, focused on your own work, and easy to read for grading. + + This helps ensure your README.md is clear, professional, and uniquely yours! +
+ +
+ +--- + +## Lab 4 Deliverables + +### Part 1 (Week 1) +**Submit the following for Part 1:** +*️⃣ **A. Capacitive Sensing** + - Photos/videos of your Twizzler (or other object) capacitive sensor setup + - Code and terminal output showing touch detection + +*️⃣ **B. More Sensors** + - Photos/videos of each sensor tested (light/proximity, rotary encoder, joystick, distance sensor) + - Code and terminal output for each sensor + +*️⃣ **C. Physical Sensing Design** + - 5 sketches of different ways to use your chosen sensor + - Written reflection: questions raised, what to prototype + - Pick one design to prototype and explain why + ![eed8b497daff0d0dfb379a166a68d68d](https://github.com/user-attachments/assets/c49ca762-9313-4a61-b83e-e84d9570ae66) + ![fa1d78fa387783a4802c4149ea772e10](https://github.com/user-attachments/assets/86ea7c25-36e7-49be-94aa-8180014b4e43) + +Sketch 1: I love to play old games, but it's not always goos using simulator on PC can be difficult because PC has no joysticks and keys are arranged different from gamepads. However, old game pads have various forms and buying all kinds of them can be time- and money-consuming. So I design this all-in-one gamepad that can reflect to 2-, 4- and 6-button old game pads at the same time. + +Sketch 2: As a fan of iM@s, I really want to create a machine that can summon my favorite idols whenever I want, so I design this "magic mirror". It is activated when I approach, and can broadcast different voices and songs subject to my manipulation. + +Sketch 3: Fighting games are hard to begin with, and I can not always live in an arcade and spend infinite money to the game box, so there should be a simulator that can train my combo at home. + +Sketch 4: I can't play piano well, and but most other common instruments have only one part. This is an easy keyboard that can play with two parts: when a key is pressed, the chord (determined by rotary encoder) and the melody (determined by twizzer touches) will be broadcast at the same time. + +Sketch 5: I want to play Taiko no Tatsujin at home. That's it. + +[Prototype Sketch2](https://drive.google.com/video/captions/edit?id=1lF1JMMllSmGIzXxutQTKquUQsPW7Cff6) + +Reason for S2: Nobody should miss this tremendous iM@s Project. Besides, this includes various ways for interaction and remains much room for extension (eg. Twizzer buttons rearranged for body touch just like arcade iM@s games, oled for subtitle) + + +*️⃣ **D. Display & Housing** + - 5 sketches for display/button/knob positioning + - Written reflection: questions raised, what to prototype + - Pick one display design to integrate + - Rationale for design + - Photos/videos of your cardboard prototype +![7cea37049b2e1ca64f6724db852bdf66](https://github.com/user-attachments/assets/3b888171-1b8d-48bc-8418-62dc93fc4963) + +--- + +### Part 2 (Week 2) +**Submit the following for Part 2:** +*️⃣ **E. Multi-Device Demo** + - Code and video for your multi-input multi-output demo (e.g., chaining Qwiic buttons, servo, GPIO expander, etc.) + - Reflection on interaction effects and chaining + +*️⃣ **F. Final Documentation** + - Photos/videos of your final prototype + - Written summary: what it looks like, works like, acts like + - Reflection on what you learned and next steps + +--- + +## Lab Overview +Collab: Jade Chang, Karl Muller + +For lab this week, we focus both on sensing, to bring in new modes of input into your devices, as well as prototyping the physical look and feel of the device. 
You will think about the physical form the device needs to perform the sensing as well as present the display or feedback about what was sensed. + +## Part 1 Lab Preparation + +### Get the latest content: +As always, pull updates from the class Interactive-Lab-Hub to both your Pi and your own GitHub repo. As we discussed in the class, there are 2 ways you can do so: + + +Option 1: On the Pi, `cd` to your `Interactive-Lab-Hub`, pull the updates from upstream (class lab-hub) and push the updates back to your own GitHub repo. You will need the personal access token for this. +``` +pi@ixe00:~$ cd Interactive-Lab-Hub +pi@ixe00:~/Interactive-Lab-Hub $ git pull upstream Fall2025 +pi@ixe00:~/Interactive-Lab-Hub $ git add . +pi@ixe00:~/Interactive-Lab-Hub $ git commit -m "get lab4 content" +pi@ixe00:~/Interactive-Lab-Hub $ git push +``` + +Option 2: On your own GitHub repo, [create pull request](https://github.com/FAR-Lab/Developing-and-Designing-Interactive-Devices/blob/2021Fall/readings/Submitting%20Labs.md) to get updates from the class Interactive-Lab-Hub. After you have latest updates online, go on your Pi, `cd` to your `Interactive-Lab-Hub` and use `git pull` to get updates from your own GitHub repo. + +Option 3: (preferred) use the Github.com interface to update the changes. + +### Start brainstorming ideas by reading: + +* [What do prototypes prototype?](https://www.semanticscholar.org/paper/What-do-Prototypes-Prototype-Houde-Hill/30bc6125fab9d9b2d5854223aeea7900a218f149) +* [Paper prototyping](https://www.uxpin.com/studio/blog/paper-prototyping-the-practical-beginners-guide/) is used by UX designers to quickly develop interface ideas and run them by people before any programming occurs. +* [Cardboard prototypes](https://www.youtube.com/watch?v=k_9Q-KDSb9o) help interactive product designers to work through additional issues, like how big something should be, how it could be carried, where it would sit. +* [Tips to Cut, Fold, Mold and Papier-Mache Cardboard](https://makezine.com/2016/04/21/working-with-cardboard-tips-cut-fold-mold-papier-mache/) from Make Magazine. +* [Surprisingly complicated forms](https://www.pinterest.com/pin/50032245843343100/) can be built with paper, cardstock or cardboard. The most advanced and challenging prototypes to prototype with paper are [cardboard mechanisms](https://www.pinterest.com/helgangchin/paper-mechanisms/) which move and change. +* [Dyson Vacuum Cardboard Prototypes](http://media.dyson.com/downloads/JDF/JDF_Prim_poster05.pdf) +

+ +### Gathering materials for this lab: + +* Cardboard (start collecting those shipping boxes!) +* Found objects and materials--like bananas and twigs. +* Cutting board +* Cutting tools +* Markers + + +(We do offer shared cutting board, cutting tools, and markers on the class cart during the lab, so do not worry if you don't have them!) + +## Deliverables \& Submission for Lab 4 + +The deliverables for this lab are, writings, sketches, photos, and videos that show what your prototype: +* "Looks like": shows how the device should look, feel, sit, weigh, etc. +* "Works like": shows what the device can do. +* "Acts like": shows how a person would interact with the device. + +For submission, the readme.md page for this lab should be edited to include the work you have done: +* Upload any materials that explain what you did, into your lab 4 repository, and link them in your lab 4 readme.md. +* Link your Lab 4 readme.md in your main Interactive-Lab-Hub readme.md. +* Labs are due on Mondays, make sure to submit your Lab 4 readme.md to Canvas. + + +## Lab Overview + +A) [Capacitive Sensing](#part-a) + +B) [OLED screen](#part-b) + +C) [Paper Display](#part-c) + +D) [Materiality](#part-d) + +E) [Servo Control](#part-e) + +F) [Record the interaction](#part-f) + + +## The Report (Part 1: A-D, Part 2: E-F) + +### Quick Start: Python Environment Setup + +1. **Create and activate a virtual environment in Lab 4:** + ```bash + cd ~/Interactive-Lab-Hub/Lab\ 4 + python3 -m venv .venv + source .venv/bin/activate + ``` +2. **Install all Lab 4 requirements:** + ```bash + pip install -r requirements2025.txt + ``` +3. **Check CircuitPython Blinka installation:** + ```bash + python blinkatest.py + ``` + If you see "Hello blinka!", your setup is correct. If not, follow the troubleshooting steps in the file or ask for help. + +### Part A +### Capacitive Sensing, a.k.a. Human-Twizzler Interaction + +We want to introduce you to the [capacitive sensor](https://learn.adafruit.com/adafruit-mpr121-gator) in your kit. It's one of the most flexible input devices we are able to provide. At boot, it measures the capacitance on each of the 12 contacts. Whenever that capacitance changes, it considers it a user touch. You can attach any conductive material. In your kit, you have copper tape that will work well, but don't limit yourself! In the example below, we use Twizzlers--you should pick your own objects. + + +
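+
+For reference, the touch test you will run below boils down to this read loop (essentially `cap_test.py` from this folder):
+
+```python
+import time
+import board
+import busio
+import adafruit_mpr121
+
+# The MPR121 breakout sits on the shared Qwiic/I2C bus
+i2c = busio.I2C(board.SCL, board.SDA)
+mpr121 = adafruit_mpr121.MPR121(i2c)
+
+while True:
+    # .value is True while a pad (and whatever is clipped to it) is being touched
+    for i in range(12):
+        if mpr121[i].value:
+            print(f"Twizzler {i} touched!")
+    time.sleep(0.25)  # small delay to keep from spamming the terminal
+```
+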

+ + +

+ +Plug in the capacitive sensor board with the QWIIC connector. Connect your Twizzlers with either the copper tape or the alligator clips (the clips work better). Install the latest requirements from your working virtual environment: + +These Twizzlers are connected to pads 6 and 10. When you run the code and touch a Twizzler, the terminal will print out the following + +``` +(circuitpython) pi@ixe00:~/Interactive-Lab-Hub/Lab 4 $ python cap_test.py +Twizzler 10 touched! +Twizzler 6 touched! +``` + +### Part B +### More sensors + +#### Light/Proximity/Gesture sensor (APDS-9960) + +We here want you to get to know this awesome sensor [Adafruit APDS-9960](https://www.adafruit.com/product/3595). It is capable of sensing proximity, light (also RGB), and gesture! + + + + +Connect it to your pi with Qwiic connector and try running the three example scripts individually to see what the sensor is capable of doing! + +``` +(circuitpython) pi@ixe00:~/Interactive-Lab-Hub/Lab 4 $ python proximity_test.py +... +(circuitpython) pi@ixe00:~/Interactive-Lab-Hub/Lab 4 $ python gesture_test.py +... +(circuitpython) pi@ixe00:~/Interactive-Lab-Hub/Lab 4 $ python color_test.py +... +``` + +You can go the the [Adafruit GitHub Page](https://github.com/adafruit/Adafruit_CircuitPython_APDS9960) to see more examples for this sensor! + +#### Rotary Encoder + +A rotary encoder is an electro-mechanical device that converts the angular position to analog or digital output signals. The [Adafruit rotary encoder](https://www.adafruit.com/product/4991#technical-details) we ordered for you came with separate breakout board and encoder itself, that is, they will need to be soldered if you have not yet done so! We will be bringing the soldering station to the lab class for you to use, also, you can go to the MakerLAB to do the soldering off-class. Here is some [guidance on soldering](https://learn.adafruit.com/adafruit-guide-excellent-soldering/preparation) from Adafruit. When you first solder, get someone who has done it before (ideally in the MakerLAB environment). It is a good idea to review this material beforehand so you know what to look at. + +

+ + + + +

+ +Connect it to your pi with Qwiic connector and try running the example script, it comes with an additional button which might be useful for your design! + +``` +(circuitpython) pi@ixe00:~/Interactive-Lab-Hub/Lab 4 $ python encoder_test.py +``` + +You can go to the [Adafruit Learn Page](https://learn.adafruit.com/adafruit-i2c-qt-rotary-encoder/python-circuitpython) to learn more about the sensor! The sensor actually comes with an LED (neo pixel): Can you try lighting it up? + +#### Joystick + + +A [joystick](https://www.sparkfun.com/products/15168) can be used to sense and report the input of the stick for it pivoting angle or direction. It also comes with a button input! + +

+ +

+ +Connect it to your pi with Qwiic connector and try running the example script to see what it can do! + +``` +(circuitpython) pi@ixe00:~/Interactive-Lab-Hub/Lab 4 $ python joystick_test.py +``` + +You can go to the [SparkFun GitHub Page](https://github.com/sparkfun/Qwiic_Joystick_Py) to learn more about the sensor! + +#### Distance Sensor + + +Earlier we have asked you to play with the proximity sensor, which is able to sense objects within a short distance. Here, we offer [Sparkfun Proximity Sensor Breakout](https://www.sparkfun.com/products/15177), With the ability to detect objects up to 20cm away. + +
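+
+Reading it from Python takes only a few lines with the SparkFun `qwiic_proximity` driver (the same calls the Elemental Fury script in this folder uses); a minimal sketch, with the print threshold purely illustrative:
+
+```python
+import time
+import qwiic_proximity
+
+prox = qwiic_proximity.QwiicProximity()
+if not prox.connected:
+    raise SystemExit("Proximity sensor not found on the I2C bus")
+prox.begin()
+
+while True:
+    value = prox.get_proximity()   # larger readings mean a closer object
+    print("Proximity:", value)
+    if value > 100:                # illustrative threshold, tune for your setup
+        print("Something is close!")
+    time.sleep(0.2)
+```
+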

+ + +

+ +Connect it to your pi with Qwiic connector and try running the example script to see how it works! + +``` +(circuitpython) pi@ixe00:~/Interactive-Lab-Hub/Lab 4 $ python qwiic_distance.py +``` + +You can go to the [SparkFun GitHub Page](https://github.com/sparkfun/Qwiic_Proximity_Py) to learn more about the sensor and see other examples + +### Part C +### Physical considerations for sensing + + +Usually, sensors need to be positioned in specific locations or orientations to make them useful for their application. Now that you've tried a bunch of the sensors, pick one that you would like to use, and an application where you use the output of that sensor for an interaction. For example, you can use a distance sensor to measure someone's height if you position it overhead and get them to stand under it. + + +**\*\*\*Draw 5 sketches of different ways you might use your sensor, and how the larger device needs to be shaped in order to make the sensor useful.\*\*\*** + +**\*\*\*What are some things these sketches raise as questions? What do you need to physically prototype to understand how to anwer those questions?\*\*\*** + +**\*\*\*Pick one of these designs to prototype.\*\*\*** + + +### Part D +### Physical considerations for displaying information and housing parts + + + +Here is a Pi with a paper faceplate on it to turn it into a display interface: + + + + + +This is fine, but the mounting of the display constrains the display location and orientation a lot. Also, it really only works for applications where people can come and stand over the Pi, or where you can mount the Pi to the wall. + +Here is another prototype for a paper display: + + + + +Your kit includes these [SparkFun Qwiic OLED screens](https://www.sparkfun.com/products/17153). These use less power than the MiniTFTs you have mounted on the GPIO pins of the Pi, but, more importantly, they can be more flexibly mounted elsewhere on your physical interface. The way you program this display is almost identical to the way you program a Pi display. Take a look at `oled_test.py` and some more of the [Adafruit examples](https://github.com/adafruit/Adafruit_CircuitPython_SSD1306/tree/master/examples). + +
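+
+Drawing to it follows the same image-then-show pattern as the Pi's own display; here is a minimal sketch, assuming the `adafruit_ssd1306` driver and a 128×32 panel (check your board's resolution):
+
+```python
+import board
+import busio
+import adafruit_ssd1306
+from PIL import Image, ImageDraw, ImageFont
+
+# OLED on the shared Qwiic/I2C bus (128x32 assumed; some boards are 128x64)
+i2c = busio.I2C(board.SCL, board.SDA)
+oled = adafruit_ssd1306.SSD1306_I2C(128, 32, i2c)
+
+oled.fill(0)
+oled.show()
+
+# Draw into a 1-bit Pillow image, then push the whole frame to the display
+image = Image.new("1", (oled.width, oled.height))
+draw = ImageDraw.Draw(image)
+font = ImageFont.load_default()
+draw.text((0, 10), "Sensor: 42", font=font, fill=1)
+oled.image(image)
+oled.show()
+```
+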

+ + +

+ + +It holds a Pi and usb power supply, and provides a front stage on which to put writing, graphics, LEDs, buttons or displays. + +This design can be made by scoring a long strip of corrugated cardboard of width X, with the following measurements: + +| Y height of box
- thickness of cardboard | Z depth of box
- thickness of cardboard | Y height of box | Z depth of box | H height of faceplate
* * * * * (don't make this too short) * * * * *| +| --- | --- | --- | --- | --- | + +Fold the first flap of the strip so that it sits flush against the back of the face plate, and tape, velcro or hot glue it in place. This will make a H x X interface, with a box of Z x X footprint (which you can adapt to the things you want to put in the box) and a height Y in the back. + +Here is an example: + + + +Think about how you want to present the information about what your sensor is sensing! Design a paper display for your project that communicates the state of the Pi and a sensor. Ideally you should design it so that you can slide the Pi out to work on the circuit or programming, and then slide it back in and reattach a few wires to be back in operation. + +**\*\*\*Sketch 5 designs for how you would physically position your display and any buttons or knobs needed to interact with it.\*\*\*** + +**\*\*\*What are some things these sketches raise as questions? What do you need to physically prototype to understand how to anwer those questions?\*\*\*** + +**\*\*\*Pick one of these display designs to integrate into your prototype.\*\*\*** + +**\*\*\*Explain the rationale for the design.\*\*\*** (e.g. Does it need to be a certain size or form or need to be able to be seen from a certain distance?) + +Build a cardboard prototype of your design. + + +**\*\*\*Document your rough prototype.\*\*\*** + + +# LAB PART 2 + +### Part 2 + +Following exploration and reflection from Part 1, complete the "looks like," "works like" and "acts like" prototypes for your design, reiterated below. + + + +### Part E + +#### Chaining Devices and Exploring Interaction Effects + +For Part 2, you will design and build a fun interactive prototype using multiple inputs and outputs. This means chaining Qwiic and STEMMA QT devices (e.g., buttons, encoders, sensors, servos, displays) and/or combining with traditional breadboard prototyping (e.g., LEDs, buzzers, etc.). + +**Your prototype should:** +- Combine at least two different types of input and output devices, inspired by your physical considerations from Part 1. +- Be playful, creative, and demonstrate multi-input/multi-output interaction. + +**Document your system with:** +- Code for your multi-device demo +- Photos and/or video of the working prototype in action +- A simple interaction diagram or sketch showing how inputs and outputs are connected and interact +- Written reflection: What did you learn about multi-input/multi-output interaction? What was fun, surprising, or challenging? + +**Questions to consider:** +- What new types of interaction become possible when you combine two or more sensors or actuators? +- How does the physical arrangement of devices (e.g., where the encoder or sensor is placed) change the user experience? +- What happens if you use one device to control or modulate another (e.g., encoder sets a threshold, sensor triggers an action)? +- How does the system feel if you swap which device is "primary" and which is "secondary"? + +Try chaining different combinations and document what you discover! + +See encoder_accel_servo_dashboard.py in the Lab 4 folder for an example of chaining together three devices. + +**`Lab 4/encoder_accel_servo_dashboard.py`** + +#### Using Multiple Qwiic Buttons: Changing I2C Address (Physically & Digitally) + +If you want to use more than one Qwiic Button in your project, you must give each button a unique I2C address. There are two ways to do this: + +##### 1. 
Physically: Soldering Address Jumpers + +On the back of the Qwiic Button, you'll find four solder jumpers labeled A0, A1, A2, and A3. By bridging these with solder, you change the I2C address. Only one button on the chain can use the default address (0x6F). + +**Address Table:** + +| A3 | A2 | A1 | A0 | Address (hex) | +|----|----|----|----|---------------| +| 0 | 0 | 0 | 0 | 0x6F | +| 0 | 0 | 0 | 1 | 0x6E | +| 0 | 0 | 1 | 0 | 0x6D | +| 0 | 0 | 1 | 1 | 0x6C | +| 0 | 1 | 0 | 0 | 0x6B | +| 0 | 1 | 0 | 1 | 0x6A | +| 0 | 1 | 1 | 0 | 0x69 | +| 0 | 1 | 1 | 1 | 0x68 | +| 1 | 0 | 0 | 0 | 0x67 | +| ...| ...| ...| ... | ... | + +For example, if you solder A0 closed (leave A1, A2, A3 open), the address becomes 0x6E. + +**Soldering Tips:** +- Use a small amount of solder to bridge the pads for the jumper you want to close. +- Only one jumper needs to be closed for each address change (see table above). +- Power cycle the button after changing the jumper. + +##### 2. Digitally: Using Software to Change Address + +You can also change the address in software (temporarily or permanently) using the example script `qwiic_button_ex6_changeI2CAddress.py` in the Lab 4 folder. This is useful if you want to reassign addresses without soldering. + +Run the script and follow the prompts: +```bash +python qwiic_button_ex6_changeI2CAddress.py +``` +Enter the new address (e.g., 5B for 0x5B) when prompted. Power cycle the button after changing the address. + +**Note:** The software method is less foolproof and you need to make sure to keep track of which button has which address! + + +##### Using Multiple Buttons in Code + +After setting unique addresses, you can use multiple buttons in your script. See these example scripts in the Lab 4 folder: + +- **`qwiic_1_button.py`**: Basic example for reading a single Qwiic Button (default address 0x6F). Run with: + ```bash + python qwiic_1_button.py + ``` + +- **`qwiic_button_led_demo.py`**: Demonstrates using two Qwiic Buttons at different addresses (e.g., 0x6F and 0x6E) and controlling their LEDs. Button 1 toggles its own LED; Button 2 toggles both LEDs. Run with: + ```bash + python qwiic_button_led_demo.py + ``` + +Here is a minimal code example for two buttons: +```python +import qwiic_button + +# Default button (0x6F) +button1 = qwiic_button.QwiicButton() +# Button with A0 soldered (0x6E) +button2 = qwiic_button.QwiicButton(0x6E) + +button1.begin() +button2.begin() + +while True: + if button1.is_button_pressed(): + print("Button 1 pressed!") + if button2.is_button_pressed(): + print("Button 2 pressed!") +``` + +For more details, see the [Qwiic Button Hookup Guide](https://learn.sparkfun.com/tutorials/qwiic-button-hookup-guide/all#i2c-address). + +--- + +### PCF8574 GPIO Expander: Add More Pins Over I²C + +Sometimes your Pi’s header GPIO pins are already full (e.g., with a display or HAT). That’s where an I²C GPIO expander comes in handy. + +We use the Adafruit PCF8574 I²C GPIO Expander, which gives you 8 extra digital pins over I²C. It’s a great way to prototype with LEDs, buttons, or other components on the breadboard without worrying about pin conflicts—similar to how Arduino users often expand their pinouts when prototyping physical interactions. + +**Why is this useful?** +- You only need two wires (I²C: SDA + SCL) to unlock 8 extra GPIOs. +- It integrates smoothly with CircuitPython and Blinka. +- It allows a clean prototyping workflow when the Pi’s 40-pin header is already occupied by displays, HATs, or sensors. 
+- Makes breadboard setups feel more like an Arduino-style prototyping environment where it’s easy to wire up interaction elements. + +**Demo Script:** `Lab 4/gpio_expander.py` + +
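+
+The demo script drives the expander pins as outputs for LEDs; the same pins can just as easily be read as inputs. Here is a minimal sketch with the same `adafruit_pcf8574` driver, assuming a pushbutton wired between expander pin P0 and GND (the wiring and pin number are only an example):
+
+```python
+import time
+import board
+import digitalio
+import adafruit_pcf8574
+
+i2c = board.I2C()
+pcf = adafruit_pcf8574.PCF8574(i2c)
+
+# Hypothetical wiring: pushbutton between expander pin P0 and GND
+button = pcf.get_pin(0)
+button.switch_to_input(pull=digitalio.Pull.UP)
+
+while True:
+    # The pin reads False while the button is held down (pulled to GND)
+    if not button.value:
+        print("Button on expander P0 pressed!")
+    time.sleep(0.1)
+```
+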

+ GPIO Expander LED Demo +

+ +We connected 8 LEDs (through 220 Ω resistors) to the expander and ran a little light show. The script cycles through three patterns: +- Chase (one LED at a time, left to right) +- Knight Rider (back-and-forth sweep) +- Disco (random blink chaos) + +Every few runs, the script swaps to the next pattern automatically: +```bash +python gpio_expander.py +``` + +This is a playful way to visualize how the expander works, but the same technique applies if you wanted to prototype buttons, switches, or other interaction elements. It’s a lightweight, flexible addition to your prototyping toolkit. + +--- + +### Servo Control with SparkFun Servo pHAT +For this lab, you will use the **SparkFun Servo pHAT** to control a micro servo (such as the Miuzei MS18 or similar 9g servo). The Servo pHAT stacks directly on top of the Adafruit Mini PiTFT (135×240) display without pin conflicts: +- The Mini PiTFT uses SPI (GPIO22, 23, 24, 25) for display and buttons ([SPI pinout](https://pinout.xyz/pinout/spi)). +- The Servo pHAT uses I²C (GPIO2 & 3) for the PCA9685 servo driver ([I2C pinout](https://pinout.xyz/pinout/i2c)). +- Since SPI and I²C are separate buses, you can use both boards together. +**⚡ Power:** +- Plug a USB-C cable into the Servo pHAT to provide enough current for the servos. The Pi itself should still be powered by its own USB-C supply. Do NOT power servos from the Pi’s 5V rail. + +
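+
+Under the hood, moving a servo with the `pi_servo_hat` package is just a couple of calls (the same ones `encoder_accel_servo_dashboard.py` uses); the channel number and sweep angles below are only an illustration:
+
+```python
+import time
+import pi_servo_hat
+
+servo = pi_servo_hat.PiServoHat()
+servo.restart()  # reset the PCA9685 driver to a known state
+
+SERVO_CH = 0  # channel the servo signal lead is plugged into (example value)
+
+while True:
+    # The pHAT translates the requested angle into the matching PWM pulse width
+    servo.move_servo_position(SERVO_CH, 0)
+    time.sleep(1)
+    servo.move_servo_position(SERVO_CH, 90)
+    time.sleep(1)
+```
+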

+ Servo pHAT Demo +

+ +**Basic Python Example:** +We provide a simple example script: `Lab 4/pi_servo_hat_test.py` (requires the `pi_servo_hat` Python package). +Run the example: +``` +python pi_servo_hat_test.py +``` +For more details and advanced usage, see the [official SparkFun Servo pHAT documentation](https://learn.sparkfun.com/tutorials/pi-servo-phat-v2-hookup-guide/all#resources-and-going-further). +A servo motor is a rotary actuator that allows for precise control of angular position. The position is set by the width of an electrical pulse (PWM). You can read [this Adafruit guide](https://learn.adafruit.com/adafruit-arduino-lesson-14-servo-motors/servo-motors) to learn more about how servos work. + +--- + + +### Part F + +### Record + +Document all the prototypes and iterations you have designed and worked on! Again, deliverables for this lab are writings, sketches, photos, and videos that show what your prototype: +* "Looks like": shows how the device should look, feel, sit, weigh, etc. +* "Works like": shows what the device can do +* "Acts like": shows how a person would interact with the device + diff --git a/Lab 4/accel_test.py b/Lab 4/accel_test.py index 6cfa481b1d..07337b4cff 100644 --- a/Lab 4/accel_test.py +++ b/Lab 4/accel_test.py @@ -1,20 +1,20 @@ -# SPDX-FileCopyrightText: Copyright (c) 2022 Edrig -# -# SPDX-License-Identifier: MIT -import time - -import board - -from adafruit_lsm6ds.lsm6ds3 import LSM6DS3 - -i2c = board.I2C() # uses board.SCL and board.SDA -# i2c = board.STEMMA_I2C() # For using the built-in STEMMA QT connector on a microcontroller -sensor = LSM6DS3(i2c) - -while True: - accel_x, accel_y, accel_z = sensor.acceleration - print(f"Acceleration: X:{accel_x:.2f}, Y: {accel_y:.2f}, Z: {accel_z:.2f} m/s^2") - gyro_x, gyro_y, gyro_z = sensor.gyro - print(f"Gyro X:{gyro_x:.2f}, Y: {gyro_y:.2f}, Z: {gyro_z:.2f} radians/s") - print("") +# SPDX-FileCopyrightText: Copyright (c) 2022 Edrig +# +# SPDX-License-Identifier: MIT +import time + +import board + +from adafruit_lsm6ds.lsm6ds3 import LSM6DS3 + +i2c = board.I2C() # uses board.SCL and board.SDA +# i2c = board.STEMMA_I2C() # For using the built-in STEMMA QT connector on a microcontroller +sensor = LSM6DS3(i2c) + +while True: + accel_x, accel_y, accel_z = sensor.acceleration + print(f"Acceleration: X:{accel_x:.2f}, Y: {accel_y:.2f}, Z: {accel_z:.2f} m/s^2") + gyro_x, gyro_y, gyro_z = sensor.gyro + print(f"Gyro X:{gyro_x:.2f}, Y: {gyro_y:.2f}, Z: {gyro_z:.2f} radians/s") + print("") time.sleep(0.5) \ No newline at end of file diff --git a/Lab 4/blinkatest.py b/Lab 4/blinkatest.py index 82f85093eb..ec79483000 100644 --- a/Lab 4/blinkatest.py +++ b/Lab 4/blinkatest.py @@ -1,19 +1,18 @@ -import board -import digitalio -import busio - -print("Hello, blinka!") - -# Try to create a Digital input -pin = digitalio.DigitalInOut(board.D4) -print("Digital IO ok!") - -# Try to create an I2C device -i2c = busio.I2C(board.SCL, board.SDA) -print("I2C ok!") - -# Try to create an SPI device -spi = busio.SPI(board.SCLK, board.MOSI, board.MISO) -print("SPI ok!") - +import board +import digitalio +import busio + +print("Hello, blinka!") + +# Try to create a Digital input +pin = digitalio.DigitalInOut(board.D4) +print("Digital IO ok!") + +# Try to create an I2C device +i2c = busio.I2C(board.SCL, board.SDA) +print("I2C ok!") + +# Try to create an SPI device +spi = busio.SPI(board.SCLK, board.MOSI, board.MISO) +print("SPI ok!") print("done!") \ No newline at end of file diff --git a/Lab 4/camera_test.py b/Lab 4/camera_test.py index 7cb64f1e08..fe680d635c 
100644 --- a/Lab 4/camera_test.py +++ b/Lab 4/camera_test.py @@ -1,68 +1,68 @@ -import cv2 -import pyaudio -import wave -import pygame - -def test_camera(): - cap = cv2.VideoCapture(0) # Change 0 to 1 or 2 if your camera does not show up. - while True: - ret, frame = cap.read() - cv2.imshow('Camera Test', frame) - if cv2.waitKey(1) & 0xFF == ord('q'): - break - cap.release() - cv2.destroyAllWindows() - -def test_microphone(): - p = pyaudio.PyAudio() - stream = p.open(format=pyaudio.paInt16, channels=1, rate=44100, input=True, frames_per_buffer=1024) - frames = [] - print("Recording...") - for i in range(0, int(44100 / 1024 * 2)): - data = stream.read(1024) - frames.append(data) - print("Finished recording.") - stream.stop_stream() - stream.close() - p.terminate() - wf = wave.open('test.wav', 'wb') - wf.setnchannels(1) - wf.setsampwidth(p.get_sample_size(pyaudio.paInt16)) - wf.setframerate(44100) - wf.writeframes(b''.join(frames)) - wf.close() - print("Saved as test.wav") - -def test_speaker(): - p = pyaudio.PyAudio() - - # List all audio output devices - info = p.get_host_api_info_by_index(0) - numdevices = info.get('deviceCount') - for i in range(0, numdevices): - if (p.get_device_info_by_host_api_device_index(0, i).get('maxOutputChannels')) > 0: - print("Output Device id ", i, " - ", p.get_device_info_by_host_api_device_index(0, i).get('name')) - - device_index = int(input("Enter the Output Device id to use: ")) # Enter the id of your USB audio device - - wf = wave.open('test.wav', 'rb') - stream = p.open(format=p.get_format_from_width(wf.getsampwidth()), - channels=wf.getnchannels(), - rate=wf.getframerate(), - output=True, - output_device_index=device_index) # specify your device index here - - data = wf.readframes(1024) - while data: - stream.write(data) - data = wf.readframes(1024) - - stream.stop_stream() - stream.close() - p.terminate() - - -if __name__ == "__main__": - test_camera() - test_microphone() - test_speaker() +import cv2 +import pyaudio +import wave +import pygame + +def test_camera(): + cap = cv2.VideoCapture(0) # Change 0 to 1 or 2 if your camera does not show up. 
+ while True: + ret, frame = cap.read() + cv2.imshow('Camera Test', frame) + if cv2.waitKey(1) & 0xFF == ord('q'): + break + cap.release() + cv2.destroyAllWindows() + +def test_microphone(): + p = pyaudio.PyAudio() + stream = p.open(format=pyaudio.paInt16, channels=1, rate=44100, input=True, frames_per_buffer=1024) + frames = [] + print("Recording...") + for i in range(0, int(44100 / 1024 * 2)): + data = stream.read(1024) + frames.append(data) + print("Finished recording.") + stream.stop_stream() + stream.close() + p.terminate() + wf = wave.open('test.wav', 'wb') + wf.setnchannels(1) + wf.setsampwidth(p.get_sample_size(pyaudio.paInt16)) + wf.setframerate(44100) + wf.writeframes(b''.join(frames)) + wf.close() + print("Saved as test.wav") + +def test_speaker(): + p = pyaudio.PyAudio() + + # List all audio output devices + info = p.get_host_api_info_by_index(0) + numdevices = info.get('deviceCount') + for i in range(0, numdevices): + if (p.get_device_info_by_host_api_device_index(0, i).get('maxOutputChannels')) > 0: + print("Output Device id ", i, " - ", p.get_device_info_by_host_api_device_index(0, i).get('name')) + + device_index = int(input("Enter the Output Device id to use: ")) # Enter the id of your USB audio device + + wf = wave.open('test.wav', 'rb') + stream = p.open(format=p.get_format_from_width(wf.getsampwidth()), + channels=wf.getnchannels(), + rate=wf.getframerate(), + output=True, + output_device_index=device_index) # specify your device index here + + data = wf.readframes(1024) + while data: + stream.write(data) + data = wf.readframes(1024) + + stream.stop_stream() + stream.close() + p.terminate() + + +if __name__ == "__main__": + test_camera() + test_microphone() + test_speaker() diff --git a/Lab 4/cap_test.py b/Lab 4/cap_test.py index cdb7f6037a..f193985b7b 100644 --- a/Lab 4/cap_test.py +++ b/Lab 4/cap_test.py @@ -1,16 +1,16 @@ - -import time -import board -import busio - -import adafruit_mpr121 - -i2c = busio.I2C(board.SCL, board.SDA) - -mpr121 = adafruit_mpr121.MPR121(i2c) - -while True: - for i in range(12): - if mpr121[i].value: - print(f"Twizzler {i} touched!") - time.sleep(0.25) # Small delay to keep from spamming output messages. + +import time +import board +import busio + +import adafruit_mpr121 + +i2c = busio.I2C(board.SCL, board.SDA) + +mpr121 = adafruit_mpr121.MPR121(i2c) + +while True: + for i in range(12): + if mpr121[i].value: + print(f"Twizzler {i} touched!") + time.sleep(0.25) # Small delay to keep from spamming output messages. 
diff --git a/Lab 4/cardboard.jpg b/Lab 4/cardboard.jpg new file mode 100644 index 0000000000..ecfdac4b8d Binary files /dev/null and b/Lab 4/cardboard.jpg differ diff --git a/Lab 4/color_test.py b/Lab 4/color_test.py index 57fa7a79fe..993cc6ddf7 100644 --- a/Lab 4/color_test.py +++ b/Lab 4/color_test.py @@ -1,30 +1,30 @@ -# SPDX-FileCopyrightText: 2021 ladyada for Adafruit Industries -# SPDX-License-Identifier: MIT - -import time -import board -from adafruit_apds9960.apds9960 import APDS9960 -from adafruit_apds9960 import colorutility - -i2c = board.I2C() -apds = APDS9960(i2c) -apds.enable_color = True - - -while True: - # create some variables to store the color data in - - # wait for color data to be ready - while not apds.color_data_ready: - time.sleep(0.005) - - # get the data and print the different channels - r, g, b, c = apds.color_data - print("red: ", r) - print("green: ", g) - print("blue: ", b) - print("clear: ", c) - - print("color temp {}".format(colorutility.calculate_color_temperature(r, g, b))) - print("light lux {}".format(colorutility.calculate_lux(r, g, b))) +# SPDX-FileCopyrightText: 2021 ladyada for Adafruit Industries +# SPDX-License-Identifier: MIT + +import time +import board +from adafruit_apds9960.apds9960 import APDS9960 +from adafruit_apds9960 import colorutility + +i2c = board.I2C() +apds = APDS9960(i2c) +apds.enable_color = True + + +while True: + # create some variables to store the color data in + + # wait for color data to be ready + while not apds.color_data_ready: + time.sleep(0.005) + + # get the data and print the different channels + r, g, b, c = apds.color_data + print("red: ", r) + print("green: ", g) + print("blue: ", b) + print("clear: ", c) + + print("color temp {}".format(colorutility.calculate_color_temperature(r, g, b))) + print("light lux {}".format(colorutility.calculate_lux(r, g, b))) time.sleep(0.5) \ No newline at end of file diff --git a/Lab 4/distance_test.py b/Lab 4/distance_test.py index 22b80eb3c3..90c68483b5 100644 --- a/Lab 4/distance_test.py +++ b/Lab 4/distance_test.py @@ -1,29 +1,29 @@ -""" - Reading distance from the laser based VL53L1X - This example prints the distance to an object. If you are getting weird - readings, be sure the vacuum tape has been removed from the sensor. -""" - -import qwiic -import time - -print("VL53L1X Qwiic Test\n") -ToF = qwiic.QwiicVL53L1X() -if (ToF.sensor_init() == None): # Begin returns 0 on a good init - print("Sensor online!\n") - -while True: - try: - ToF.start_ranging() # Write configuration bytes to initiate measurement - time.sleep(.005) - distance = ToF.get_distance() # Get the result of the measurement from the sensor - time.sleep(.005) - ToF.stop_ranging() - - distanceInches = distance / 25.4 - distanceFeet = distanceInches / 12.0 - - print("Distance(mm): %s Distance(ft): %s" % (distance, distanceFeet)) - - except Exception as e: +""" + Reading distance from the laser based VL53L1X + This example prints the distance to an object. If you are getting weird + readings, be sure the vacuum tape has been removed from the sensor. 
+""" + +import qwiic +import time + +print("VL53L1X Qwiic Test\n") +ToF = qwiic.QwiicVL53L1X() +if (ToF.sensor_init() == None): # Begin returns 0 on a good init + print("Sensor online!\n") + +while True: + try: + ToF.start_ranging() # Write configuration bytes to initiate measurement + time.sleep(.005) + distance = ToF.get_distance() # Get the result of the measurement from the sensor + time.sleep(.005) + ToF.stop_ranging() + + distanceInches = distance / 25.4 + distanceFeet = distanceInches / 12.0 + + print("Distance(mm): %s Distance(ft): %s" % (distance, distanceFeet)) + + except Exception as e: print(e) \ No newline at end of file diff --git a/Lab 4/elemental_fury.py b/Lab 4/elemental_fury.py new file mode 100644 index 0000000000..52d8400eb3 --- /dev/null +++ b/Lab 4/elemental_fury.py @@ -0,0 +1,285 @@ +from digitalio import DigitalInOut +import busio +import board +import time +from PIL import Image, ImageDraw, ImageFont +import adafruit_rgb_display.ili9341 as ili9341 +import adafruit_rgb_display.st7789 as st7789 # pylint: disable=unused-import +import adafruit_rgb_display.hx8357 as hx8357 # pylint: disable=unused-import +import adafruit_rgb_display.st7735 as st7735 # pylint: disable=unused-import +import adafruit_rgb_display.ssd1351 as ssd1351 # pylint: disable=unused-import +import adafruit_rgb_display.ssd1331 as ssd1331 # pylint: disable=unused-import +import adafruit_ssd1306 +import qwiic_proximity +import adafruit_mpr121 +from adafruit_apds9960.apds9960 import APDS9960 +from adafruit_apds9960 import colorutility +from adafruit_seesaw import seesaw, rotaryio, digitalio +import qwiic_joystick + +# Configuration for CS and DC pins (these are PiTFT defaults): +cs_pin = DigitalInOut(board.D5) +dc_pin = DigitalInOut(board.D25) +reset_pin = DigitalInOut(board.D24) + +# Config for display baudrate (default max is 24mhz): +BAUDRATE = 24000000 + +# Setup SPI bus using hardware SPI: +spi = board.SPI() + +# pylint: disable=line-too-long +# Create the display: +# disp = st7789.ST7789(spi, rotation=90, # 2.0" ST7789 +# disp = st7789.ST7789(spi, height=240, y_offset=80, rotation=180, # 1.3", 1.54" ST7789 +# disp = st7789.ST7789(spi, rotation=90, width=135, height=240, x_offset=53, y_offset=40, # 1.14" ST7789 +# disp = hx8357.HX8357(spi, rotation=180, # 3.5" HX8357 +# disp = st7735.ST7735R(spi, rotation=90, # 1.8" ST7735R +# disp = st7735.ST7735R(spi, rotation=270, height=128, x_offset=2, y_offset=3, # 1.44" ST7735R +# disp = st7735.ST7735R(spi, rotation=90, bgr=True, # 0.96" MiniTFT ST7735R +# disp = ssd1351.SSD1351(spi, rotation=180, # 1.5" SSD1351 +# disp = ssd1351.SSD1351(spi, height=96, y_offset=32, rotation=180, # 1.27" SSD1351 +# disp = ssd1331.SSD1331(spi, rotation=180, # 0.96" SSD1331 +disp = st7789.ST7789( + spi, + cs=cs_pin, + dc=dc_pin, + rst=reset_pin, + baudrate=BAUDRATE, + width=135, + height=240, + x_offset=53, + y_offset=40, +) +# pylint: enable=line-too-long + +# Create blank image for drawing. +# Make sure to create image with mode 'RGB' for full color. +if disp.rotation % 180 == 90: + height = disp.width # we swap height/width to rotate it to landscape! + width = disp.height +else: + width = disp.width # we swap height/width to rotate it to landscape! + height = disp.height +image = Image.new("RGB", (width, height)) + +# Get drawing object to draw on image. +draw = ImageDraw.Draw(image) + +# Draw a black filled box to clear the image. 
+draw.rectangle((0, 0, width, height), outline=0, fill=(0, 0, 0)) +disp.image(image) + +# light setup +i2c = busio.I2C(board.SCL, board.SDA) +apds = APDS9960(i2c) +apds.enable_color = True + +# prox setup +oProx = qwiic_proximity.QwiicProximity() +if oProx.connected == False: + time.sleep(0.05) +oProx.begin() + +# joystick setup +myJoystick = qwiic_joystick.QwiicJoystick(i2c) +myJoystick.begin() + +# rot setup +seesaw = seesaw.Seesaw(i2c, addr=0x36) + +# oled setup +oled = adafruit_ssd1306.SSD1306_I2C(128, 32, i2c) +font = ImageFont.load_default() +# start with a blank screen +oled.fill(0) +oled.show() + +seesaw_product = (seesaw.get_version() >> 16) & 0xFFFF + +seesaw.pin_mode(24, seesaw.INPUT_PULLUP) +button = digitalio.DigitalIO(seesaw, 24) +button_held = False + +encoder = rotaryio.IncrementalEncoder(seesaw) +last_position = -encoder.position + +# initialize capacity twizzer +mpr121 = adafruit_mpr121.MPR121(i2c, address=0x5A) + +def display(image_name): + image = Image.open(image_name) + backlight = DigitalInOut(board.D22) + backlight.switch_to_output() + backlight.value = True + + + # Scale the image to the smaller screen dimension + image_ratio = image.width / image.height + screen_ratio = width / height + if screen_ratio < image_ratio: + scaled_width = image.width * height // image.height + scaled_height = height + else: + scaled_width = width + scaled_height = image.height * width // image.width + image = image.resize((scaled_width, scaled_height), Image.BICUBIC) + + # Crop and center the image + x = scaled_width // 2 - width // 2 + y = scaled_height // 2 - height // 2 + image = image.crop((x, y, x + width, y + height)) + + # Display image. + disp.image(image) + + +def draw_hp_bar(hp: int, max_hp: int = 5): + """ + Draw an HP bar with 'hp' filled blocks and 'max_hp' total blocks. + """ + # Create a blank image for drawing + image = Image.new("1", (oled.width, oled.height)) + draw = ImageDraw.Draw(image) + + # Draw "HP" text + draw.text((0, 10), "HP", font=font, fill=1) + + # Define HP bar layout + block_width = 18 + block_height = 10 + spacing = 2 + start_x = 30 + start_y = 10 + + # Draw filled / empty HP blocks + for i in range(max_hp): + x = start_x + i * (block_width + spacing) + if i < hp: + draw.rectangle([x, start_y, x + block_width, start_y + block_height], fill=1) + else: + draw.rectangle([x, start_y, x + block_width, start_y + block_height], outline=1) + + # Push the image to the display + oled.image(image) + oled.show() + +def display_text(text: str): + """ + Display a single line of text on the OLED. 
+ """ + image = Image.new("1", (oled.width, oled.height)) + draw = ImageDraw.Draw(image) + draw.text((0, 10), text, font=font, fill=1) + oled.image(image) + oled.show() + +def joystick(): + # if |x - 512| + |y - 512| > 200 return True + x = myJoystick.horizontal + y = myJoystick.vertical + if abs(x - 512) + abs(y - 512) > 200: + return True + else: + return False + +def prox_sensor(): + val = oProx.get_proximity() + return val <= 10.0 + + +def light_sensor(): + # wait for color data to be ready + while not apds.color_data_ready: + time.sleep(0.005) + r, g, b, c = apds.color_data + if c <= 100: + return True + else: + return False + +def rotary(): + # negate the position to make clockwise rotation positive + global last_position + position = -encoder.position + diff = abs(last_position - position) + last_position = position + if diff >= 5: + return True + else: + return False + + +trigger = False +hp=5 +while True: + draw_hp_bar(hp) + + # 0 = lightning, 1= fireball, 2 = earthquake, 3 = slime + if mpr121[0].value: + display("lightning.png") + start_time = time.time() + trigger = False + while time.time() - start_time < 2: + if joystick() or prox_sensor() or rotary(): + trigger = False + break + if light_sensor(): + trigger = True + display("victory.png") + break + if not trigger: + hp -= 1 + draw_hp_bar(hp) + + if mpr121[1].value: + display("fireball.png") + start_time = time.time() + trigger = False + while time.time() - start_time < 2: + if light_sensor() or prox_sensor() or rotary(): + trigger = False + break + if joystick(): + trigger = True + display("victory.png") + break + if not trigger: + hp -= 1 + draw_hp_bar(hp) + + if mpr121[2].value: + display("earthquake.png") + start_time = time.time() + trigger = False + while time.time() - start_time < 2: + if joystick() or light_sensor() or rotary(): + trigger = False + break + if prox_sensor(): + trigger = True + display("victory.png") + break + if not trigger: + hp -= 1 + draw_hp_bar(hp) + + if mpr121[3].value: + display("slime.png") + start_time = time.time() + trigger = False + while time.time() - start_time < 2: + if joystick() or light_sensor() or prox_sensor(): + trigger = False + break + if rotary(): + trigger = True + display("victory.png") + break + + if not trigger: + hp -= 1 + draw_hp_bar(hp) + + if hp == 0: + display("skeleton.png") \ No newline at end of file diff --git a/Lab 4/encoder_accel_servo_dashboard.py b/Lab 4/encoder_accel_servo_dashboard.py index bc3d0a0809..b8219aec6e 100644 --- a/Lab 4/encoder_accel_servo_dashboard.py +++ b/Lab 4/encoder_accel_servo_dashboard.py @@ -1,78 +1,79 @@ -import time -import board -from adafruit_seesaw import seesaw, rotaryio, digitalio -from adafruit_lsm6ds.lsm6ds3 import LSM6DS3 -import pi_servo_hat -import math -import os - -# --- Setup --- -ss = seesaw.Seesaw(board.I2C(), addr=0x36) -ss.pin_mode(24, ss.INPUT_PULLUP) -button = digitalio.DigitalIO(ss, 24) -encoder = rotaryio.IncrementalEncoder(ss) -last_encoder = -999 - -# Accelerometer -sox = LSM6DS3(board.I2C(), address=0x6A) - -# Servo -servo = pi_servo_hat.PiServoHat() -servo.restart() -SERVO_MIN = 0 -SERVO_MAX = 120 -SERVO_CH = 0 - -# --- Modes --- -MODES = ["Encoder Only", "Accelerometer Only", "Combined"] -mode = 0 -mode_press = False - -# --- State --- -base_angle = 60 -enc_factor = 5 - -def clamp(val, minv, maxv): - return max(minv, min(maxv, val)) - -def clear(): - os.system('clear') - -# --- Main Loop --- -tilt_offset = 0 -while True: - # --- Read encoder/button --- - enc_pos = -encoder.position - if not button.value and not 
mode_press: - mode = (mode + 1) % len(MODES) - mode_press = True - if button.value and mode_press: - mode_press = False - - # --- Read accelerometer --- - accel_x, accel_y, accel_z = sox.acceleration - z_angle_rad = math.atan2(-accel_y, accel_x) - z_angle_deg = math.degrees(z_angle_rad) - raw_offset = -z_angle_deg * (40/90) - tilt_offset = 0.8 * tilt_offset + 0.2 * raw_offset - tilt_offset_int = int(tilt_offset) - - # --- Calculate servo angle --- - if mode == 0: # Encoder Only - servo_angle = clamp(60 + enc_pos * enc_factor, SERVO_MIN, SERVO_MAX) - elif mode == 1: # Accelerometer Only - servo_angle = clamp(60 + tilt_offset_int, SERVO_MIN, SERVO_MAX) - else: # Combined - servo_angle = clamp(60 + enc_pos * enc_factor + tilt_offset_int, SERVO_MIN, SERVO_MAX) - servo.move_servo_position(SERVO_CH, servo_angle) - - # --- Dashboard --- - clear() - print(f"=== Servo/Encoder/Accel Dashboard ===") - print(f"Mode: {MODES[mode]} (press encoder button to switch)") - print(f"Encoder position: {enc_pos}") - print(f"Accel X: {accel_x:.2f} Y: {accel_y:.2f} Z: {accel_z:.2f}") - print(f"Z angle: {z_angle_deg:.1f}° Tilt offset: {tilt_offset_int}") - print(f"Servo angle: {servo_angle}") - print(f"\n[Encoder sets base, Accel tilts needle, Combined = both]") - time.sleep(0.07) +import time +import board +from adafruit_seesaw import seesaw, rotaryio, digitalio +from adafruit_lsm6ds.lsm6ds3 import LSM6DS3 +import pi_servo_hat +import math +import os + +# --- Setup --- +ss = seesaw.Seesaw(board.I2C(), addr=0x36) +ss.pin_mode(24, ss.INPUT_PULLUP) +button = digitalio.DigitalIO(ss, 24) +encoder = rotaryio.IncrementalEncoder(ss) +last_encoder = -999 + +# Accelerometer +sox = LSM6DS3(board.I2C(), address=0x6A) + +# Servo +servo = pi_servo_hat.PiServoHat() +servo.restart() +SERVO_MIN = 0 +SERVO_MAX = 120 +SERVO_CH = 0 + +# --- Modes --- +MODES = ["Encoder Only", "Accelerometer Only", "Combined"] +mode = 0 +mode_press = False + +# --- State --- +base_angle = 60 +enc_factor = 5 + +def clamp(val, minv, maxv): + return max(minv, min(maxv, val)) + +def clear(): + os.system('clear') + +# --- Main Loop --- +tilt_offset = 0 +while True: + # --- Read encoder/button --- + enc_pos = -encoder.position + if not button.value and not mode_press: + mode = (mode + 1) % len(MODES) + mode_press = True + if button.value and mode_press: + mode_press = False + + # --- Read accelerometer --- + accel_x, accel_y, accel_z = sox.acceleration + z_angle_rad = math.atan2(-accel_y, accel_x) + z_angle_deg = math.degrees(z_angle_rad) + raw_offset = -z_angle_deg * (40/90) + tilt_offset = 0.8 * tilt_offset + 0.2 * raw_offset + tilt_offset_int = int(tilt_offset) + + # --- Calculate servo angle --- + if mode == 0: # Encoder Only + servo_angle = clamp(60 + enc_pos * enc_factor, SERVO_MIN, SERVO_MAX) + elif mode == 1: # Accelerometer Only + servo_angle = clamp(60 + tilt_offset_int, SERVO_MIN, SERVO_MAX) + else: # Combined + servo_angle = clamp(60 + enc_pos * enc_factor + tilt_offset_int, SERVO_MIN, SERVO_MAX) + servo.move_servo_position(SERVO_CH, servo_angle) + + # --- Dashboard --- + clear() + print(f"=== Servo/Encoder/Accel Dashboard ===") + print(f"Mode: {MODES[mode]} (press encoder button to switch)") + print(f"Encoder position: {enc_pos}") + print(f"Accel X: {accel_x:.2f} Y: {accel_y:.2f} Z: {accel_z:.2f}") + print(f"Z angle: {z_angle_deg:.1f}° Tilt offset: {tilt_offset_int}") + print(f"Servo angle: {servo_angle}") + print(f"\n[Encoder sets base, Accel tilts needle, Combined = both]") + time.sleep(0.07) + diff --git a/Lab 4/encoder_test.py b/Lab 
4/encoder_test.py index 8ecc7c1818..afb774d132 100644 --- a/Lab 4/encoder_test.py +++ b/Lab 4/encoder_test.py @@ -1,43 +1,44 @@ -# SPDX-FileCopyrightText: 2021 John Furcean -# SPDX-License-Identifier: MIT - -"""I2C rotary encoder simple test example.""" - -import board -from adafruit_seesaw import seesaw, rotaryio, digitalio - -# For use with the STEMMA connector on QT Py RP2040 -# import busio -# i2c = busio.I2C(board.SCL1, board.SDA1) -# seesaw = seesaw.Seesaw(i2c, 0x36) - -seesaw = seesaw.Seesaw(board.I2C(), addr=0x36) - -seesaw_product = (seesaw.get_version() >> 16) & 0xFFFF -print("Found product {}".format(seesaw_product)) -if seesaw_product != 4991: - print("Wrong firmware loaded? Expected 4991") - -seesaw.pin_mode(24, seesaw.INPUT_PULLUP) -button = digitalio.DigitalIO(seesaw, 24) -button_held = False - -encoder = rotaryio.IncrementalEncoder(seesaw) -last_position = None - -while True: - - # negate the position to make clockwise rotation positive - position = -encoder.position - - if position != last_position: - last_position = position - print("Position: {}".format(position)) - - if not button.value and not button_held: - button_held = True - print("Button pressed") - - if button.value and button_held: - button_held = False + +# SPDX-FileCopyrightText: 2021 John Furcean +# SPDX-License-Identifier: MIT + +"""I2C rotary encoder simple test example.""" + +import board +from adafruit_seesaw import seesaw, rotaryio, digitalio + +# For use with the STEMMA connector on QT Py RP2040 +# import busio +# i2c = busio.I2C(board.SCL1, board.SDA1) +# seesaw = seesaw.Seesaw(i2c, 0x36) + +seesaw = seesaw.Seesaw(board.I2C(), addr=0x36) + +seesaw_product = (seesaw.get_version() >> 16) & 0xFFFF +print("Found product {}".format(seesaw_product)) +if seesaw_product != 4991: + print("Wrong firmware loaded? Expected 4991") + +seesaw.pin_mode(24, seesaw.INPUT_PULLUP) +button = digitalio.DigitalIO(seesaw, 24) +button_held = False + +encoder = rotaryio.IncrementalEncoder(seesaw) +last_position = None + +while True: + + # negate the position to make clockwise rotation positive + position = -encoder.position + + if position != last_position: + last_position = position + print("Position: {}".format(position)) + + if not button.value and not button_held: + button_held = True + print("Button pressed") + + if button.value and button_held: + button_held = False print("Button released") \ No newline at end of file diff --git a/Lab 4/game_rule.JPG b/Lab 4/game_rule.JPG new file mode 100644 index 0000000000..c1f0425ddd Binary files /dev/null and b/Lab 4/game_rule.JPG differ diff --git a/Lab 4/gesture_test.py b/Lab 4/gesture_test.py index 2d4d455df4..14a56e1a2a 100644 --- a/Lab 4/gesture_test.py +++ b/Lab 4/gesture_test.py @@ -1,26 +1,26 @@ -# SPDX-FileCopyrightText: 2021 ladyada for Adafruit Industries -# SPDX-License-Identifier: MIT - -import board -from adafruit_apds9960.apds9960 import APDS9960 - -i2c = board.I2C() - -apds = APDS9960(i2c) -apds.enable_proximity = True -apds.enable_gesture = True - -# Uncomment and set the rotation if depending on how your sensor is mounted. 
-# apds.rotation = 270 # 270 for CLUE - -while True: - gesture = apds.gesture() - - if gesture == 0x01: - print("up") - elif gesture == 0x02: - print("down") - elif gesture == 0x03: - print("left") - elif gesture == 0x04: +# SPDX-FileCopyrightText: 2021 ladyada for Adafruit Industries +# SPDX-License-Identifier: MIT + +import board +from adafruit_apds9960.apds9960 import APDS9960 + +i2c = board.I2C() + +apds = APDS9960(i2c) +apds.enable_proximity = True +apds.enable_gesture = True + +# Uncomment and set the rotation if depending on how your sensor is mounted. +# apds.rotation = 270 # 270 for CLUE + +while True: + gesture = apds.gesture() + + if gesture == 0x01: + print("up") + elif gesture == 0x02: + print("down") + elif gesture == 0x03: + print("left") + elif gesture == 0x04: print("right") \ No newline at end of file diff --git a/Lab 4/gpio_expander.py b/Lab 4/gpio_expander.py index 00aea270d3..31883f6a7a 100644 --- a/Lab 4/gpio_expander.py +++ b/Lab 4/gpio_expander.py @@ -1,64 +1,86 @@ -# gpio_expander.py -# LED fun with PCF8574 I2C GPIO expander -# -# Demonstrates how to use an I2C GPIO expander to sink current -# and control multiple LEDs for quick breadboard prototyping. - -import time -import random -import board -import adafruit_pcf8574 - -# Initialize I2C and PCF8574 -i2c = board.I2C() -pcf = adafruit_pcf8574.PCF8574(i2c) - -# Grab all 8 pins -leds = [pcf.get_pin(i) for i in range(8)] - -# Configure as outputs (HIGH = off, LOW = LED on) -for ld in leds: - ld.switch_to_output(value=True) - -# --- Patterns --- -def chase(): - """Simple left-to-right chase""" - for ld in leds: - ld.value = False - time.sleep(0.12) - ld.value = True - -def knight_rider(): - """Bounce back and forth""" - for ld in leds: - ld.value = False - time.sleep(0.12) - ld.value = True - for ld in reversed(leds[1:-1]): - ld.value = False - time.sleep(0.12) - ld.value = True - -def disco(): - """Random LED flashing""" - for _ in range(12): - ld = random.choice(leds) - ld.value = False - time.sleep(0.08) - ld.value = True - -patterns = [chase, knight_rider, disco] - -# --- Main Loop --- -pattern_index = 0 -runs = 0 - -while True: - # Run current pattern - patterns[pattern_index]() - runs += 1 - - # After a few runs, switch pattern - if runs >= 5: - runs = 0 - pattern_index = (pattern_index + 1) % len(patterns) +# gpio_expander.py +# LED fun with PCF8574 I2C GPIO expander +# +# Demonstrates how to use an I2C GPIO expander to sink current +# and control multiple LEDs for quick breadboard prototyping. 
+ +import time +import random +import board +import adafruit_pcf8574 + +# Initialize I2C and PCF8574 +i2c = board.I2C() +pcf = adafruit_pcf8574.PCF8574(i2c) + +# Grab all 8 pins +leds = [pcf.get_pin(i) for i in range(8)] + +# Configure as outputs (HIGH = off, LOW = LED on) +for ld in leds: + ld.switch_to_output(value=True) + +# --- Patterns --- +def chase(): + """Simple left-to-right chase""" + for ld in leds: + ld.value = False + time.sleep(0.12) + ld.value = True + +def knight_rider(): + """Bounce back and forth""" + for ld in leds: + ld.value = False + time.sleep(0.12) + ld.value = True + for ld in reversed(leds[1:-1]): + ld.value = False + time.sleep(0.12) + ld.value = True + +def disco(): + """Random LED flashing""" + for _ in range(12): + ld = random.choice(leds) + ld.value = False + time.sleep(0.08) + ld.value = True + +patterns = [chase, knight_rider, disco] + +# --- Main Loop --- +pattern_index = 0 +runs = 0 + +while True: + # Run current pattern + patterns[pattern_index]() + runs += 1 + + # After a few runs, switch pattern + if runs >= 5: + runs = 0 + pattern_index = (pattern_index + 1) % len(patterns) +======= +# gpio_expander.py +# LED fun with PCF8574 I2C GPIO expander +# +# Demonstrates how to use an I2C GPIO expander to sink current +# and control multiple LEDs for quick breadboard prototyping. + +import time +import random +import board +import adafruit_pcf8574 + +# Initialize I2C and PCF8574 +i2c = board.I2C() +pcf = adafruit_pcf8574.PCF8574(i2c) + +# Grab all 8 pins +leds = [pcf.get_pin(i) for i in range(8)] + +# Configure as outputs (HIGH = off, LOW = LED on) +for ld in leds: + ld.switch_to_output(value=True) \ No newline at end of file diff --git a/Lab 4/joystick_test.py b/Lab 4/joystick_test.py index 133ad115d5..39d5c079ba 100644 --- a/Lab 4/joystick_test.py +++ b/Lab 4/joystick_test.py @@ -1,34 +1,34 @@ -from __future__ import print_function -import qwiic_joystick -import time -import sys - -def runExample(): - - print("\nSparkFun qwiic Joystick Example 1\n") - myJoystick = qwiic_joystick.QwiicJoystick() - - if myJoystick.connected == False: - print("The Qwiic Joystick device isn't connected to the system. Please check your connection", \ - file=sys.stderr) - return - - myJoystick.begin() - - print("Initialized. Firmware Version: %s" % myJoystick.version) - - while True: - - print("X: %d, Y: %d, Button: %d" % ( \ - myJoystick.horizontal, \ - myJoystick.vertical, \ - myJoystick.button)) - - time.sleep(.5) - -if __name__ == '__main__': - try: - runExample() - except (KeyboardInterrupt, SystemExit) as exErr: - print("\nEnding Example 1") +from __future__ import print_function +import qwiic_joystick +import time +import sys + +def runExample(): + + print("\nSparkFun qwiic Joystick Example 1\n") + myJoystick = qwiic_joystick.QwiicJoystick() + + if myJoystick.connected == False: + print("The Qwiic Joystick device isn't connected to the system. Please check your connection", \ + file=sys.stderr) + return + + myJoystick.begin() + + print("Initialized. 
Firmware Version: %s" % myJoystick.version) + + while True: + + print("X: %d, Y: %d, Button: %d" % ( \ + myJoystick.horizontal, \ + myJoystick.vertical, \ + myJoystick.button)) + + time.sleep(.5) + +if __name__ == '__main__': + try: + runExample() + except (KeyboardInterrupt, SystemExit) as exErr: + print("\nEnding Example 1") sys.exit(0) \ No newline at end of file diff --git a/Lab 4/keypad_test.py b/Lab 4/keypad_test.py index 60e851d7d2..ea233644ab 100644 --- a/Lab 4/keypad_test.py +++ b/Lab 4/keypad_test.py @@ -1,60 +1,61 @@ -# Make sure to have everything set up -# https://github.com/sparkfun/Qwiic_Keypad_Py -# `pip install sparkfun-qwiic-keypad` - -# From https://github.com/sparkfun/Qwiic_Keypad_Py/blob/main/examples/qwiic_keypad_ex2.py - - -from __future__ import print_function -import qwiic_keypad -import time -import sys - -def runExample(): - - print("\nSparkFun qwiic Keypad Example 1\n") - myKeypad = qwiic_keypad.QwiicKeypad() - - if myKeypad.connected == False: - print("The Qwiic Keypad device isn't connected to the system. Please check your connection", \ - file=sys.stderr) - return - - myKeypad.begin() - - print("Initialized. Firmware Version: %s" % myKeypad.version) - print("Press a button: * to do a space. # to go to next line.") - - button = 0 - while True: - - # necessary for keypad to pull button from stack to readable register - myKeypad.update_fifo() - button = myKeypad.get_button() - - if button == -1: - print("No keypad detected") - time.sleep(1) - - elif button != 0: - - # Get the character version of this char - charButton = chr(button) - if charButton == '#': - print() - elif charButton == '*': - print(" ", end="") - else: - print(charButton, end="") - - # Flush the stdout buffer to give immediate user feedback - sys.stdout.flush() - - time.sleep(.25) - -if __name__ == '__main__': - try: - runExample() - except (KeyboardInterrupt, SystemExit) as exErr: - print("\nEnding Example 1") + +# Make sure to have everything set up +# https://github.com/sparkfun/Qwiic_Keypad_Py +# `pip install sparkfun-qwiic-keypad` + +# From https://github.com/sparkfun/Qwiic_Keypad_Py/blob/main/examples/qwiic_keypad_ex2.py + + +from __future__ import print_function +import qwiic_keypad +import time +import sys + +def runExample(): + + print("\nSparkFun qwiic Keypad Example 1\n") + myKeypad = qwiic_keypad.QwiicKeypad() + + if myKeypad.connected == False: + print("The Qwiic Keypad device isn't connected to the system. Please check your connection", \ + file=sys.stderr) + return + + myKeypad.begin() + + print("Initialized. Firmware Version: %s" % myKeypad.version) + print("Press a button: * to do a space. 
# to go to next line.") + + button = 0 + while True: + + # necessary for keypad to pull button from stack to readable register + myKeypad.update_fifo() + button = myKeypad.get_button() + + if button == -1: + print("No keypad detected") + time.sleep(1) + + elif button != 0: + + # Get the character version of this char + charButton = chr(button) + if charButton == '#': + print() + elif charButton == '*': + print(" ", end="") + else: + print(charButton, end="") + + # Flush the stdout buffer to give immediate user feedback + sys.stdout.flush() + + time.sleep(.25) + +if __name__ == '__main__': + try: + runExample() + except (KeyboardInterrupt, SystemExit) as exErr: + print("\nEnding Example 1") sys.exit(0) \ No newline at end of file diff --git a/Lab 4/oled_test.py b/Lab 4/oled_test.py index d6e96ff59e..1b34aa89e6 100644 --- a/Lab 4/oled_test.py +++ b/Lab 4/oled_test.py @@ -1,89 +1,88 @@ - -# SPDX-FileCopyrightText: 2021 ladyada for Adafruit Industries -# SPDX-License-Identifier: MIT - -import board -import busio -import adafruit_ssd1306 - -# Create the I2C interface. -i2c = busio.I2C(board.SCL, board.SDA) - -# Create the SSD1306 OLED class. -# The first two parameters are the pixel width and pixel height. Change these -# to the right size for your display! -oled = adafruit_ssd1306.SSD1306_I2C(128, 32, i2c) - - -# Helper function to draw a circle from a given position with a given radius -# This is an implementation of the midpoint circle algorithm, -# see https://en.wikipedia.org/wiki/Midpoint_circle_algorithm#C_example for details -def draw_circle(xpos0, ypos0, rad, col=1): - x = rad - 1 - y = 0 - dx = 1 - dy = 1 - err = dx - (rad << 1) - while x >= y: - oled.pixel(xpos0 + x, ypos0 + y, col) - oled.pixel(xpos0 + y, ypos0 + x, col) - oled.pixel(xpos0 - y, ypos0 + x, col) - oled.pixel(xpos0 - x, ypos0 + y, col) - oled.pixel(xpos0 - x, ypos0 - y, col) - oled.pixel(xpos0 - y, ypos0 - x, col) - oled.pixel(xpos0 + y, ypos0 - x, col) - oled.pixel(xpos0 + x, ypos0 - y, col) - if err <= 0: - y += 1 - err += dy - dy += 2 - if err > 0: - x -= 1 - dx += 2 - err += dx - (rad << 1) - - -# initial center of the circle -center_x = 63 -center_y = 15 -# how fast does it move in each direction -x_inc = 1 -y_inc = 1 -# what is the starting radius of the circle -radius = 8 - -# start with a blank screen -oled.fill(0) -# we just blanked the framebuffer. to push the framebuffer onto the display, we call show() -oled.show() -while True: - # undraw the previous circle - draw_circle(center_x, center_y, radius, col=0) - - # if bouncing off right - if center_x + radius >= oled.width: - # start moving to the left - x_inc = -1 - # if bouncing off left - elif center_x - radius < 0: - # start moving to the right - x_inc = 1 - - # if bouncing off top - if center_y + radius >= oled.height: - # start moving down - y_inc = -1 - # if bouncing off bottom - elif center_y - radius < 0: - # start moving up - y_inc = 1 - - # go more in the current direction - center_x += x_inc - center_y += y_inc - - # draw the new circle - draw_circle(center_x, center_y, radius) - # show all the changes we just made - + +# SPDX-FileCopyrightText: 2021 ladyada for Adafruit Industries +# SPDX-License-Identifier: MIT + +import board +import busio +import adafruit_ssd1306 + +# Create the I2C interface. +i2c = busio.I2C(board.SCL, board.SDA) + +# Create the SSD1306 OLED class. +# The first two parameters are the pixel width and pixel height. Change these +# to the right size for your display! 
+oled = adafruit_ssd1306.SSD1306_I2C(128, 32, i2c) + + +# Helper function to draw a circle from a given position with a given radius +# This is an implementation of the midpoint circle algorithm, +# see https://en.wikipedia.org/wiki/Midpoint_circle_algorithm#C_example for details +def draw_circle(xpos0, ypos0, rad, col=1): + x = rad - 1 + y = 0 + dx = 1 + dy = 1 + err = dx - (rad << 1) + while x >= y: + oled.pixel(xpos0 + x, ypos0 + y, col) + oled.pixel(xpos0 + y, ypos0 + x, col) + oled.pixel(xpos0 - y, ypos0 + x, col) + oled.pixel(xpos0 - x, ypos0 + y, col) + oled.pixel(xpos0 - x, ypos0 - y, col) + oled.pixel(xpos0 - y, ypos0 - x, col) + oled.pixel(xpos0 + y, ypos0 - x, col) + oled.pixel(xpos0 + x, ypos0 - y, col) + if err <= 0: + y += 1 + err += dy + dy += 2 + if err > 0: + x -= 1 + dx += 2 + err += dx - (rad << 1) + + +# initial center of the circle +center_x = 63 +center_y = 15 +# how fast does it move in each direction +x_inc = 1 +y_inc = 1 +# what is the starting radius of the circle +radius = 8 + +# start with a blank screen +oled.fill(0) +# we just blanked the framebuffer. to push the framebuffer onto the display, we call show() +oled.show() +while True: + # undraw the previous circle + draw_circle(center_x, center_y, radius, col=0) + + # if bouncing off right + if center_x + radius >= oled.width: + # start moving to the left + x_inc = -1 + # if bouncing off left + elif center_x - radius < 0: + # start moving to the right + x_inc = 1 + + # if bouncing off top + if center_y + radius >= oled.height: + # start moving down + y_inc = -1 + # if bouncing off bottom + elif center_y - radius < 0: + # start moving up + y_inc = 1 + + # go more in the current direction + center_x += x_inc + center_y += y_inc + + # draw the new circle + draw_circle(center_x, center_y, radius) + # show all the changes we just made oled.show() \ No newline at end of file diff --git a/Lab 4/pi_servo_hat_test.py b/Lab 4/pi_servo_hat_test.py index 841be9f5b7..de9f3be50d 100644 --- a/Lab 4/pi_servo_hat_test.py +++ b/Lab 4/pi_servo_hat_test.py @@ -1,28 +1,28 @@ -import pi_servo_hat -import time - -# For most 9g micro servos (like SG90, MS18, SER0048), safe range is 0-120 degrees -SERVO_MIN = 0 -SERVO_MAX = 120 -SERVO_CH = 0 # Channel 0 by default - -servo = pi_servo_hat.PiServoHat() -servo.restart() - -print(f"Sweeping servo on channel {SERVO_CH} from {SERVO_MIN} to {SERVO_MAX} degrees...") - -try: - while True: - # Sweep up - for angle in range(SERVO_MIN, SERVO_MAX + 1, 1): - servo.move_servo_position(SERVO_CH, angle) - print(f"Angle: {angle}") - time.sleep(0.01) - # Sweep down - for angle in range(SERVO_MAX, SERVO_MIN - 1, -1): - servo.move_servo_position(SERVO_CH, angle) - print(f"Angle: {angle}") - time.sleep(0.01) -except KeyboardInterrupt: - print("\nTest stopped.") - servo.move_servo_position(SERVO_CH, 60) # Move to center on exit +import pi_servo_hat +import time + +# For most 9g micro servos (like SG90, MS18, SER0048), safe range is 0-120 degrees +SERVO_MIN = 0 +SERVO_MAX = 120 +SERVO_CH = 0 # Channel 0 by default + +servo = pi_servo_hat.PiServoHat() +servo.restart() + +print(f"Sweeping servo on channel {SERVO_CH} from {SERVO_MIN} to {SERVO_MAX} degrees...") + +try: + while True: + # Sweep up + for angle in range(SERVO_MIN, SERVO_MAX + 1, 1): + servo.move_servo_position(SERVO_CH, angle) + print(f"Angle: {angle}") + time.sleep(0.01) + # Sweep down + for angle in range(SERVO_MAX, SERVO_MIN - 1, -1): + servo.move_servo_position(SERVO_CH, angle) + print(f"Angle: {angle}") + time.sleep(0.01) +except 
KeyboardInterrupt: + print("\nTest stopped.") + servo.move_servo_position(SERVO_CH, 60) # Move to center on exit diff --git a/Lab 4/proximity_test.py b/Lab 4/proximity_test.py index 9d0799f61c..d160350bc1 100644 --- a/Lab 4/proximity_test.py +++ b/Lab 4/proximity_test.py @@ -1,15 +1,16 @@ -# SPDX-FileCopyrightText: 2021 ladyada for Adafruit Industries -# SPDX-License-Identifier: MIT - -import time -import board -from adafruit_apds9960.apds9960 import APDS9960 - -i2c = board.I2C() -apds = APDS9960(i2c) - -apds.enable_proximity = True - -while True: - print(apds.proximity) + +# SPDX-FileCopyrightText: 2021 ladyada for Adafruit Industries +# SPDX-License-Identifier: MIT + +import time +import board +from adafruit_apds9960.apds9960 import APDS9960 + +i2c = board.I2C() +apds = APDS9960(i2c) + +apds.enable_proximity = True + +while True: + print(apds.proximity) time.sleep(0.2) \ No newline at end of file diff --git a/Lab 4/qwiic_1_button.py b/Lab 4/qwiic_1_button.py index ad71620525..498514d911 100644 --- a/Lab 4/qwiic_1_button.py +++ b/Lab 4/qwiic_1_button.py @@ -1,33 +1,32 @@ - -import qwiic_button -import time -import sys - -def run_example(): - - print("\nSparkFun Qwiic Button Example 1") - my_button = qwiic_button.QwiicButton() - - if my_button.begin() == False: - print("\nThe Qwiic Button isn't connected to the system. Please check your connection", \ - file=sys.stderr) - return - print("\nButton ready!") - - while True: - - if my_button.is_button_pressed() == True: - print("The button is pressed!") - - else: - print("The button is not pressed!") - - time.sleep(0.1) - -if __name__ == '__main__': - try: - run_example() - except (KeyboardInterrupt, SystemExit) as exErr: - print("\nEnding Example 1") - sys.exit(0) - +import qwiic_button +import time +import sys + +def run_example(): + + print("\nSparkFun Qwiic Button Example 1") + my_button = qwiic_button.QwiicButton() + + if my_button.begin() == False: + print("\nThe Qwiic Button isn't connected to the system. Please check your connection", \ + file=sys.stderr) + return + print("\nButton ready!") + + while True: + + if my_button.is_button_pressed() == True: + print("The button is pressed!") + + else: + print("The button is not pressed!") + + time.sleep(0.1) + +if __name__ == '__main__': + try: + run_example() + except (KeyboardInterrupt, SystemExit) as exErr: + print("\nEnding Example 1") + sys.exit(0) + diff --git a/Lab 4/qwiic_button_ex6_changeI2CAddress.py b/Lab 4/qwiic_button_ex6_changeI2CAddress.py index 35340c267e..2797f1c310 100644 --- a/Lab 4/qwiic_button_ex6_changeI2CAddress.py +++ b/Lab 4/qwiic_button_ex6_changeI2CAddress.py @@ -1,93 +1,94 @@ -#!/usr/bin/env python -#----------------------------------------------------------------------------- -# qwiic_button_ex5.py -# -# Simple Example for the Qwiic Button. Shows how to change the I2C address of -# the Qwiic Button -#------------------------------------------------------------------------ -# -# Written by Priyanka Makin @ SparkFun Electronics, January 2021 -# -# This python library supports the SparkFun Electroncis qwiic -# qwiic sensor/board ecosystem on a Raspberry Pi (and compatable) single -# board computers. -# -# More information on qwiic is at https://www.sparkfun.com/qwiic -# -# Do you like this library? Help support SparkFun. Buy a board! 
-# -#================================================================================== -# Copyright (c) 2019 SparkFun Electronics -# -# Permission is hereby granted, free of charge, to any person obtaining a copy -# of this software and associated documentation files (the "Software"), to deal -# in the Software without restriction, including without limitation the rights -# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -# copies of the Software, and to permit persons to whom the Software is -# furnished to do so, subject to the following conditions: -# -# The above copyright notice and this permission notice shall be included in all -# copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. -#================================================================================== -# Example 5 - -import qwiic_button -import time -import sys - -# If you've already changed the I2C address, change this to the current address! -currentAddress = qwiic_button._QWIIC_BUTTON_DEFAULT_ADDRESS - -def run_example(): - - print("\nSparkFun Qwiic Button Example 6") - my_button = qwiic_button.QwiicButton(currentAddress) - - if my_button.begin() == False: - print("\nThe Qwiic Button isn't connected to the system. Please check your connection", \ - file=sys.stderr) - return - - print("\nButton ready!") - - print("Enter a new I2C address for the Qwiic Button to use.") - print("Any address from 0x08 to 0x77 works.") - print("Don't use the 0x prefix. For instance, if you wanted to") - print("change the address to 0x5B, you would type 5B and hit enter.") - - new_address = input("New Address: ") - new_address = int(new_address, 16) - - # Check if the user entered a valid address - if new_address >= 0x08 and new_address <= 0x77: - print("Characters received and new address valid!") - print("Attempting to set Qwiic Button address...") - - my_button.set_I2C_address(new_address) - print("Address successfully changed!") - # Check that the Qwiic Button acknowledges on the new address - time.sleep(0.02) - if my_button.begin() == False: - print("The Qwiic Button isn't connected to the system. Please check your connection", \ - file=sys.stderr) - - else: - print("Button acknowledged on new address!") - - else: - print("Address entered not a valid I2C address") - -if __name__ == '__main__': - try: - run_example() - except (KeyboardInterrupt, SystemExit) as exErr: - print("\nEnding Example 6") - sys.exit(0) +#!/usr/bin/env python +#----------------------------------------------------------------------------- +# qwiic_button_ex5.py +# +# Simple Example for the Qwiic Button. Shows how to change the I2C address of +# the Qwiic Button +#------------------------------------------------------------------------ +# +# Written by Priyanka Makin @ SparkFun Electronics, January 2021 +# +# This python library supports the SparkFun Electroncis qwiic +# qwiic sensor/board ecosystem on a Raspberry Pi (and compatable) single +# board computers. +# +# More information on qwiic is at https://www.sparkfun.com/qwiic +# +# Do you like this library? 
Help support SparkFun. Buy a board! +# +#================================================================================== +# Copyright (c) 2019 SparkFun Electronics +# +# Permission is hereby granted, free of charge, to any person obtaining a copy +# of this software and associated documentation files (the "Software"), to deal +# in the Software without restriction, including without limitation the rights +# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +# copies of the Software, and to permit persons to whom the Software is +# furnished to do so, subject to the following conditions: +# +# The above copyright notice and this permission notice shall be included in all +# copies or substantial portions of the Software. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +#================================================================================== +# Example 5 + +import qwiic_button +import time +import sys + +# If you've already changed the I2C address, change this to the current address! +currentAddress = qwiic_button._QWIIC_BUTTON_DEFAULT_ADDRESS + +def run_example(): + + print("\nSparkFun Qwiic Button Example 6") + my_button = qwiic_button.QwiicButton(currentAddress) + + if my_button.begin() == False: + print("\nThe Qwiic Button isn't connected to the system. Please check your connection", \ + file=sys.stderr) + return + + print("\nButton ready!") + + print("Enter a new I2C address for the Qwiic Button to use.") + print("Any address from 0x08 to 0x77 works.") + print("Don't use the 0x prefix. For instance, if you wanted to") + print("change the address to 0x5B, you would type 5B and hit enter.") + + new_address = input("New Address: ") + new_address = int(new_address, 16) + + # Check if the user entered a valid address + if new_address >= 0x08 and new_address <= 0x77: + print("Characters received and new address valid!") + print("Attempting to set Qwiic Button address...") + + my_button.set_I2C_address(new_address) + print("Address successfully changed!") + # Check that the Qwiic Button acknowledges on the new address + time.sleep(0.02) + if my_button.begin() == False: + print("The Qwiic Button isn't connected to the system. 
Please check your connection", \ + file=sys.stderr) + + else: + print("Button acknowledged on new address!") + + else: + print("Address entered not a valid I2C address") + +if __name__ == '__main__': + try: + run_example() + except (KeyboardInterrupt, SystemExit) as exErr: + print("\nEnding Example 6") + sys.exit(0) + diff --git a/Lab 4/qwiic_button_led_demo.py b/Lab 4/qwiic_button_led_demo.py index c033ff45b1..6fa58a7af7 100644 --- a/Lab 4/qwiic_button_led_demo.py +++ b/Lab 4/qwiic_button_led_demo.py @@ -1,49 +1,49 @@ -import qwiic_button -import time -import sys - -# Example: Use two Qwiic buttons and their LEDs interactively -# - Pressing button 1 toggles its own LED -# - Pressing button 2 toggles both LEDs - -def run_example(): - print("\nQwiic Button + LED Demo: Two Buttons, Two LEDs") - my_button1 = qwiic_button.QwiicButton() - my_button2 = qwiic_button.QwiicButton(0x6E) - - if not my_button1.begin(): - print("\nThe Qwiic Button 1 isn't connected. Check your connection.", file=sys.stderr) - return - if not my_button2.begin(): - print("\nThe Qwiic Button 2 isn't connected. Check your connection.", file=sys.stderr) - return - print("\nButtons ready! Press to toggle LEDs.") - - led1_on = False - led2_on = False - while True: - # Button 1 toggles its own LED - if my_button1.is_button_pressed(): - led1_on = not led1_on - my_button1.LED_on(led1_on) - print(f"Button 1 pressed! LED 1 is now {'ON' if led1_on else 'OFF'}.") - # Wait for release to avoid rapid toggling - while my_button1.is_button_pressed(): - time.sleep(0.02) - # Button 2 toggles both LEDs - if my_button2.is_button_pressed(): - led1_on = not led1_on - led2_on = not led2_on - my_button1.LED_on(led1_on) - my_button2.LED_on(led2_on) - print(f"Button 2 pressed! LED 1: {'ON' if led1_on else 'OFF'}, LED 2: {'ON' if led2_on else 'OFF'}.") - while my_button2.is_button_pressed(): - time.sleep(0.02) - time.sleep(0.05) - -if __name__ == '__main__': - try: - run_example() - except (KeyboardInterrupt, SystemExit): - print("\nEnding Qwiic Button + LED Demo") - sys.exit(0) +import qwiic_button +import time +import sys + +# Example: Use two Qwiic buttons and their LEDs interactively +# - Pressing button 1 toggles its own LED +# - Pressing button 2 toggles both LEDs + +def run_example(): + print("\nQwiic Button + LED Demo: Two Buttons, Two LEDs") + my_button1 = qwiic_button.QwiicButton() + my_button2 = qwiic_button.QwiicButton(0x6E) + + if not my_button1.begin(): + print("\nThe Qwiic Button 1 isn't connected. Check your connection.", file=sys.stderr) + return + if not my_button2.begin(): + print("\nThe Qwiic Button 2 isn't connected. Check your connection.", file=sys.stderr) + return + print("\nButtons ready! Press to toggle LEDs.") + + led1_on = False + led2_on = False + while True: + # Button 1 toggles its own LED + if my_button1.is_button_pressed(): + led1_on = not led1_on + my_button1.LED_on(led1_on) + print(f"Button 1 pressed! LED 1 is now {'ON' if led1_on else 'OFF'}.") + # Wait for release to avoid rapid toggling + while my_button1.is_button_pressed(): + time.sleep(0.02) + # Button 2 toggles both LEDs + if my_button2.is_button_pressed(): + led1_on = not led1_on + led2_on = not led2_on + my_button1.LED_on(led1_on) + my_button2.LED_on(led2_on) + print(f"Button 2 pressed! 
LED 1: {'ON' if led1_on else 'OFF'}, LED 2: {'ON' if led2_on else 'OFF'}.") + while my_button2.is_button_pressed(): + time.sleep(0.02) + time.sleep(0.05) + +if __name__ == '__main__': + try: + run_example() + except (KeyboardInterrupt, SystemExit): + print("\nEnding Qwiic Button + LED Demo") + sys.exit(0) diff --git a/Lab 4/qwiic_distance.py b/Lab 4/qwiic_distance.py index 5d3f62e8f3..df590a7fbc 100644 --- a/Lab 4/qwiic_distance.py +++ b/Lab 4/qwiic_distance.py @@ -1,72 +1,73 @@ -#!/usr/bin/env python -#----------------------------------------------------------------------------- -# qwiic_proximity_ex1.py -# -# Simple Example for the Qwiic Proximity Device -#------------------------------------------------------------------------ -# -# Written by SparkFun Electronics, May 2019 -# -# This python library supports the SparkFun Electroncis qwiic -# qwiic sensor/board ecosystem on a Raspberry Pi (and compatable) single -# board computers. -# -# More information on qwiic is at https://www.sparkfun.com/qwiic -# -# Do you like this library? Help support SparkFun. Buy a board! -# -#================================================================================== -# Copyright (c) 2019 SparkFun Electronics -# -# Permission is hereby granted, free of charge, to any person obtaining a copy -# of this software and associated documentation files (the "Software"), to deal -# in the Software without restriction, including without limitation the rights -# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -# copies of the Software, and to permit persons to whom the Software is -# furnished to do so, subject to the following conditions: -# -# The above copyright notice and this permission notice shall be included in all -# copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. -#================================================================================== -# Example 1 -# -# - Setup the device -# - Output the proximity value - -from __future__ import print_function -import qwiic_proximity -import time -import sys - -def runExample(): - - print("\nSparkFun Proximity Sensor VCN4040 Example 1\n") - oProx = qwiic_proximity.QwiicProximity() - - if oProx.connected == False: - print("The Qwiic Proximity device isn't connected to the system. Please check your connection", \ - file=sys.stderr) - return - - oProx.begin() - - while True: - proxValue = oProx.get_proximity() - print("Proximity Value: %d" % proxValue) - time.sleep(.4) - - -if __name__ == '__main__': - try: - runExample() - except (KeyboardInterrupt, SystemExit) as exErr: - print("\nEnding Example 1") + +#!/usr/bin/env python +#----------------------------------------------------------------------------- +# qwiic_proximity_ex1.py +# +# Simple Example for the Qwiic Proximity Device +#------------------------------------------------------------------------ +# +# Written by SparkFun Electronics, May 2019 +# +# This python library supports the SparkFun Electroncis qwiic +# qwiic sensor/board ecosystem on a Raspberry Pi (and compatable) single +# board computers. 
+# +# More information on qwiic is at https://www.sparkfun.com/qwiic +# +# Do you like this library? Help support SparkFun. Buy a board! +# +#================================================================================== +# Copyright (c) 2019 SparkFun Electronics +# +# Permission is hereby granted, free of charge, to any person obtaining a copy +# of this software and associated documentation files (the "Software"), to deal +# in the Software without restriction, including without limitation the rights +# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +# copies of the Software, and to permit persons to whom the Software is +# furnished to do so, subject to the following conditions: +# +# The above copyright notice and this permission notice shall be included in all +# copies or substantial portions of the Software. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +#================================================================================== +# Example 1 +# +# - Setup the device +# - Output the proximity value + +from __future__ import print_function +import qwiic_proximity +import time +import sys + +def runExample(): + + print("\nSparkFun Proximity Sensor VCN4040 Example 1\n") + oProx = qwiic_proximity.QwiicProximity() + + if oProx.connected == False: + print("The Qwiic Proximity device isn't connected to the system. 
Please check your connection", \ + file=sys.stderr) + return + + oProx.begin() + + while True: + proxValue = oProx.get_proximity() + print("Proximity Value: %d" % proxValue) + time.sleep(.4) + + +if __name__ == '__main__': + try: + runExample() + except (KeyboardInterrupt, SystemExit) as exErr: + print("\nEnding Example 1") sys.exit(0) \ No newline at end of file diff --git a/Lab 4/requirements-freeze.txt b/Lab 4/requirements-freeze.txt index 2ad78f8316..1af8d1eeab 100644 --- a/Lab 4/requirements-freeze.txt +++ b/Lab 4/requirements-freeze.txt @@ -1,199 +1,200 @@ -absl-py==1.4.0 -Adafruit-Blinka==8.20.1 -adafruit-circuitpython-apds9960==3.1.9 -adafruit-circuitpython-busdevice==5.2.6 -adafruit-circuitpython-framebuf==1.6.4 -adafruit-circuitpython-motor==3.4.12 -adafruit-circuitpython-mpr121==2.1.19 -adafruit-circuitpython-mpu6050==1.2.3 -adafruit-circuitpython-pca9685==3.4.11 -adafruit-circuitpython-pixelbuf==2.0.3 -adafruit-circuitpython-register==1.9.17 -adafruit-circuitpython-requests==2.0.1 -adafruit-circuitpython-rgb-display==3.12.0 -adafruit-circuitpython-seesaw==1.15.1 -adafruit-circuitpython-servokit==1.3.16 -adafruit-circuitpython-ssd1306==2.12.15 -adafruit-circuitpython-typing==1.9.4 -Adafruit-GPIO==1.0.3 -Adafruit-PlatformDetect==3.49.0 -Adafruit-PureIO==1.1.11 -adafruit-python-shell==1.7.0 -Adafruit-SSD1306==1.6.2 -arandr==0.1.10 -args==0.1.0 -astroid==2.5.1 -asttokens==2.0.4 -attrs==23.1.0 -automationhat==0.2.0 -beautifulsoup4==4.9.3 -blinker==1.4 -blinkt==0.1.2 -buttonshim==0.0.2 -Cap1xxx==0.1.3 -certifi==2020.6.20 -cffi==1.15.1 -chardet==4.0.0 -click==7.1.2 -clint==0.5.1 -colorama==0.4.4 -coloredlogs==15.0.1 -colorzero==1.1 -contourpy==1.1.0 -cryptography==3.3.2 -cupshelpers==1.0 -cycler==0.11.0 -dbus-python==1.2.16 -distro==1.5.0 -docutils==0.16 -drumhat==0.1.0 -envirophat==1.0.0 -ExplorerHAT==0.4.2 -Flask==1.1.2 -flatbuffers==20181003210633 -fonttools==4.42.1 -fourletterphat==0.1.0 -gpiozero==1.6.2 -html5lib==1.1 -humanfriendly==10.0 -idna==2.10 -importlib-resources==6.0.1 -isort==5.6.4 -itsdangerous==1.1.0 -jedi==0.18.0 -Jinja2==2.11.3 -kiwisolver==1.4.5 -lazy-object-proxy==0.0.0 -logilab-common==1.8.1 -lxml==4.6.3 -MarkupSafe==1.1.1 -matplotlib==3.7.2 -mccabe==0.6.1 -mediapipe==0.10.3 -microdotphat==0.2.1 -mote==0.0.4 -motephat==0.0.3 -mpmath==1.3.0 -mypy==0.812 -mypy-extensions==0.4.3 -numpy==1.25.2 -oauthlib==3.1.0 -onnxruntime==1.15.1 -opencv-contrib-python==4.8.0.76 -packaging==23.1 -pantilthat==0.0.7 -parso==0.8.1 -pexpect==4.8.0 -pgzero==1.2 -phatbeat==0.1.1 -pianohat==0.1.0 -picamera2==0.3.12 -pidng==4.0.9 -piexif==1.1.3 -piglow==1.2.5 -pigpio==1.78 -Pillow==8.1.2 -piper-phonemize==1.1.0 -piper-tts==1.2.0 -protobuf==3.20.3 -psutil==5.8.0 -pycairo==1.16.2 -pycparser==2.21 -pycups==2.0.1 -pyftdi==0.55.0 -pygame==1.9.6 -Pygments==2.7.1 -PyGObject==3.38.0 -pyinotify==0.9.6 -PyJWT==1.7.1 -pylint==2.7.2 -pynmea2==1.19.0 -PyOpenGL==3.1.5 -pyOpenSSL==20.0.1 -pyparsing==3.0.9 -PyQt5==5.15.2 -PyQt5-sip==12.8.1 -pyserial==3.5b0 -pysmbc==1.0.23 -python-apt==2.2.1 -python-dateutil==2.8.2 -python-prctl==1.7 -pyusb==1.2.1 -rainbowhat==0.1.0 -reportlab==3.5.59 -requests==2.25.1 -requests-oauthlib==1.0.0 -responses==0.12.1 -roman==2.0.0 -rpi-ws281x==5.0.0 -RPi.GPIO==0.7.1 -RTIMULib==7.2.1 -scrollphat==0.0.7 -scrollphathd==1.2.1 -Send2Trash==1.6.0b1 -sense-hat==2.4.0 -simplejpeg==1.6.4 -simplejson==3.17.2 -six==1.16.0 -skywriter==0.0.7 -smbus2==0.4.3 -sn3218==1.2.7 -sounddevice==0.4.6 -soupsieve==2.2.1 -sparkfun-pi-servo-hat==0.9.0 -sparkfun-qwiic==1.1.6 
-sparkfun-qwiic-adxl313==0.0.7 -sparkfun-qwiic-alphanumeric==0.0.1 -sparkfun-qwiic-as6212==0.0.2 -sparkfun-qwiic-bme280==0.9.0 -sparkfun-qwiic-button==2.0.1 -sparkfun-qwiic-ccs811==0.9.4 -sparkfun-qwiic-dual-encoder-reader==0.0.2 -sparkfun-qwiic-eeprom==0.0.1 -sparkfun-qwiic-gpio==0.0.2 -sparkfun-qwiic-i2c==0.9.11 -sparkfun-qwiic-icm20948==0.0.1 -sparkfun-qwiic-joystick==0.9.0 -sparkfun-qwiic-keypad==0.9.0 -sparkfun-qwiic-kx13x==1.0.0 -sparkfun-qwiic-led-stick==0.0.1 -sparkfun-qwiic-max3010x==0.0.2 -sparkfun-qwiic-micro-oled==0.10.0 -sparkfun-qwiic-oled-base==0.0.2 -sparkfun-qwiic-oled-display==0.0.2 -sparkfun-qwiic-pca9685==0.9.1 -sparkfun-qwiic-pir==0.0.4 -sparkfun-qwiic-proximity==0.9.0 -sparkfun-qwiic-relay==0.0.2 -sparkfun-qwiic-rfid==2.0.0 -sparkfun-qwiic-scmd==0.9.1 -sparkfun-qwiic-serlcd==0.0.1 -sparkfun-qwiic-sgp40==0.0.4 -sparkfun-qwiic-soil-moisture-sensor==0.0.2 -sparkfun-qwiic-tca9548a==0.9.0 -sparkfun-qwiic-titan-gps==0.1.1 -sparkfun-qwiic-twist==0.9.0 -sparkfun-qwiic-vl53l1x==1.0.1 -sparkfun-top-phat-button==0.0.2 -sparkfun-ublox-gps==1.1.5 -spidev==3.6 -srt==3.5.3 -ssh-import-id==5.10 -sympy==1.12 -sysv-ipc==1.1.0 -thonny==4.0.1 -toml==0.10.1 -touchphat==0.0.1 -tqdm==4.66.1 -twython==3.8.2 -typed-ast==1.4.2 -typing-extensions==4.7.1 -unicornhathd==0.0.4 -urllib3==1.26.5 -v4l2-python3==0.3.2 -vosk==0.3.45 -webencodings==0.5.1 -websockets==11.0.3 -Werkzeug==1.0.1 -wrapt==1.12.1 -zipp==3.16.2 +absl-py==1.4.0 +Adafruit-Blinka==8.20.1 +adafruit-circuitpython-apds9960==3.1.9 +adafruit-circuitpython-busdevice==5.2.6 +adafruit-circuitpython-framebuf==1.6.4 +adafruit-circuitpython-motor==3.4.12 +adafruit-circuitpython-mpr121==2.1.19 +adafruit-circuitpython-mpu6050==1.2.3 +adafruit-circuitpython-pca9685==3.4.11 +adafruit-circuitpython-pixelbuf==2.0.3 +adafruit-circuitpython-register==1.9.17 +adafruit-circuitpython-requests==2.0.1 +adafruit-circuitpython-rgb-display==3.12.0 +adafruit-circuitpython-seesaw==1.15.1 +adafruit-circuitpython-servokit==1.3.16 +adafruit-circuitpython-ssd1306==2.12.15 +adafruit-circuitpython-typing==1.9.4 +Adafruit-GPIO==1.0.3 +Adafruit-PlatformDetect==3.49.0 +Adafruit-PureIO==1.1.11 +adafruit-python-shell==1.7.0 +Adafruit-SSD1306==1.6.2 +arandr==0.1.10 +args==0.1.0 +astroid==2.5.1 +asttokens==2.0.4 +attrs==23.1.0 +automationhat==0.2.0 +beautifulsoup4==4.9.3 +blinker==1.4 +blinkt==0.1.2 +buttonshim==0.0.2 +Cap1xxx==0.1.3 +certifi==2020.6.20 +cffi==1.15.1 +chardet==4.0.0 +click==7.1.2 +clint==0.5.1 +colorama==0.4.4 +coloredlogs==15.0.1 +colorzero==1.1 +contourpy==1.1.0 +cryptography==3.3.2 +cupshelpers==1.0 +cycler==0.11.0 +dbus-python==1.2.16 +distro==1.5.0 +docutils==0.16 +drumhat==0.1.0 +envirophat==1.0.0 +ExplorerHAT==0.4.2 +Flask==1.1.2 +flatbuffers==20181003210633 +fonttools==4.42.1 +fourletterphat==0.1.0 +gpiozero==1.6.2 +html5lib==1.1 +humanfriendly==10.0 +idna==2.10 +importlib-resources==6.0.1 +isort==5.6.4 +itsdangerous==1.1.0 +jedi==0.18.0 +Jinja2==2.11.3 +kiwisolver==1.4.5 +lazy-object-proxy==0.0.0 +logilab-common==1.8.1 +lxml==4.6.3 +MarkupSafe==1.1.1 +matplotlib==3.7.2 +mccabe==0.6.1 +mediapipe==0.10.3 +microdotphat==0.2.1 +mote==0.0.4 +motephat==0.0.3 +mpmath==1.3.0 +mypy==0.812 +mypy-extensions==0.4.3 +numpy==1.25.2 +oauthlib==3.1.0 +onnxruntime==1.15.1 +opencv-contrib-python==4.8.0.76 +packaging==23.1 +pantilthat==0.0.7 +parso==0.8.1 +pexpect==4.8.0 +pgzero==1.2 +phatbeat==0.1.1 +pianohat==0.1.0 +picamera2==0.3.12 +pidng==4.0.9 +piexif==1.1.3 +piglow==1.2.5 +pigpio==1.78 +Pillow==8.1.2 +piper-phonemize==1.1.0 +piper-tts==1.2.0 +protobuf==3.20.3 
+psutil==5.8.0 +pycairo==1.16.2 +pycparser==2.21 +pycups==2.0.1 +pyftdi==0.55.0 +pygame==1.9.6 +Pygments==2.7.1 +PyGObject==3.38.0 +pyinotify==0.9.6 +PyJWT==1.7.1 +pylint==2.7.2 +pynmea2==1.19.0 +PyOpenGL==3.1.5 +pyOpenSSL==20.0.1 +pyparsing==3.0.9 +PyQt5==5.15.2 +PyQt5-sip==12.8.1 +pyserial==3.5b0 +pysmbc==1.0.23 +python-apt==2.2.1 +python-dateutil==2.8.2 +python-prctl==1.7 +pyusb==1.2.1 +rainbowhat==0.1.0 +reportlab==3.5.59 +requests==2.25.1 +requests-oauthlib==1.0.0 +responses==0.12.1 +roman==2.0.0 +rpi-ws281x==5.0.0 +RPi.GPIO==0.7.1 +RTIMULib==7.2.1 +scrollphat==0.0.7 +scrollphathd==1.2.1 +Send2Trash==1.6.0b1 +sense-hat==2.4.0 +simplejpeg==1.6.4 +simplejson==3.17.2 +six==1.16.0 +skywriter==0.0.7 +smbus2==0.4.3 +sn3218==1.2.7 +sounddevice==0.4.6 +soupsieve==2.2.1 +sparkfun-pi-servo-hat==0.9.0 +sparkfun-qwiic==1.1.6 +sparkfun-qwiic-adxl313==0.0.7 +sparkfun-qwiic-alphanumeric==0.0.1 +sparkfun-qwiic-as6212==0.0.2 +sparkfun-qwiic-bme280==0.9.0 +sparkfun-qwiic-button==2.0.1 +sparkfun-qwiic-ccs811==0.9.4 +sparkfun-qwiic-dual-encoder-reader==0.0.2 +sparkfun-qwiic-eeprom==0.0.1 +sparkfun-qwiic-gpio==0.0.2 +sparkfun-qwiic-i2c==0.9.11 +sparkfun-qwiic-icm20948==0.0.1 +sparkfun-qwiic-joystick==0.9.0 +sparkfun-qwiic-keypad==0.9.0 +sparkfun-qwiic-kx13x==1.0.0 +sparkfun-qwiic-led-stick==0.0.1 +sparkfun-qwiic-max3010x==0.0.2 +sparkfun-qwiic-micro-oled==0.10.0 +sparkfun-qwiic-oled-base==0.0.2 +sparkfun-qwiic-oled-display==0.0.2 +sparkfun-qwiic-pca9685==0.9.1 +sparkfun-qwiic-pir==0.0.4 +sparkfun-qwiic-proximity==0.9.0 +sparkfun-qwiic-relay==0.0.2 +sparkfun-qwiic-rfid==2.0.0 +sparkfun-qwiic-scmd==0.9.1 +sparkfun-qwiic-serlcd==0.0.1 +sparkfun-qwiic-sgp40==0.0.4 +sparkfun-qwiic-soil-moisture-sensor==0.0.2 +sparkfun-qwiic-tca9548a==0.9.0 +sparkfun-qwiic-titan-gps==0.1.1 +sparkfun-qwiic-twist==0.9.0 +sparkfun-qwiic-vl53l1x==1.0.1 +sparkfun-top-phat-button==0.0.2 +sparkfun-ublox-gps==1.1.5 +spidev==3.6 +srt==3.5.3 +ssh-import-id==5.10 +sympy==1.12 +sysv-ipc==1.1.0 +thonny==4.0.1 +toml==0.10.1 +touchphat==0.0.1 +tqdm==4.66.1 +twython==3.8.2 +typed-ast==1.4.2 +typing-extensions==4.7.1 +unicornhathd==0.0.4 +urllib3==1.26.5 +v4l2-python3==0.3.2 +vosk==0.3.45 +webencodings==0.5.1 +websockets==11.0.3 +Werkzeug==1.0.1 +wrapt==1.12.1 +zipp==3.16.2 + diff --git a/Lab 4/requirements2023.txt b/Lab 4/requirements2023.txt index a59e7d02cc..ad150b7218 100644 --- a/Lab 4/requirements2023.txt +++ b/Lab 4/requirements2023.txt @@ -1,27 +1,27 @@ -Adafruit-Blinka -adafruit-circuitpython-busdevice -adafruit-circuitpython-framebuf -adafruit-circuitpython-mpr121 -adafruit-circuitpython-mpu6050 -adafruit-circuitpython-ssd1306 -adafruit-circuitpython-pca9685 -adafruit-circuitpython-servokit -adafruit-circuitpython-apds9960 -adafruit-circuitpython-seesaw -sparkfun-qwiic -sparkfun-qwiic-joystick -sparkfun-qwiic-vl53l1x -Adafruit-GPIO -Adafruit-PlatformDetect -Adafruit-PureIO -Adafruit-SSD1306 -pyftdi -pyserial -pyusb -rpi-ws281x -RPi.GPIO -spidev -sysv-ipc -sparkfun-qwiic-proximity - - + +Adafruit-Blinka +adafruit-circuitpython-busdevice +adafruit-circuitpython-framebuf +adafruit-circuitpython-mpr121 +adafruit-circuitpython-mpu6050 +adafruit-circuitpython-ssd1306 +adafruit-circuitpython-pca9685 +adafruit-circuitpython-servokit +adafruit-circuitpython-apds9960 +adafruit-circuitpython-seesaw +sparkfun-qwiic +sparkfun-qwiic-joystick +sparkfun-qwiic-vl53l1x +Adafruit-GPIO +Adafruit-PlatformDetect +Adafruit-PureIO +Adafruit-SSD1306 +pyftdi +pyserial +pyusb +rpi-ws281x +RPi.GPIO +spidev +sysv-ipc +sparkfun-qwiic-proximity + diff --git a/Lab 
4/requirements2025.txt b/Lab 4/requirements2025.txt index 58852be37d..e330dd3038 100644 --- a/Lab 4/requirements2025.txt +++ b/Lab 4/requirements2025.txt @@ -1,31 +1,32 @@ -Adafruit-Blinka -adafruit-circuitpython-busdevice -adafruit-circuitpython-framebuf -adafruit-circuitpython-mpr121 -adafruit-circuitpython-mpu6050 -adafruit-circuitpython-ssd1306 -adafruit-circuitpython-pca9685 -adafruit-circuitpython-servokit -adafruit-circuitpython-apds9960 -adafruit-circuitpython-seesaw -sparkfun-qwiic -sparkfun-qwiic-joystick -sparkfun-qwiic-vl53l1x -Adafruit-GPIO -Adafruit-PlatformDetect -Adafruit-PureIO -Adafruit-SSD1306 -pyftdi -pyserial -pyusb -rpi-ws281x -RPi.GPIO -spidev -sysv-ipc -sparkfun-qwiic-proximity -adafruit-circuitpython-busdevice -sparkfun-pi-servo-hat -adafruit-circuitpython-lsm6ds -sparkfun-qwiic-button -adafruit-circuitpython-pcf8574 +Adafruit-Blinka +adafruit-circuitpython-busdevice +adafruit-circuitpython-framebuf +adafruit-circuitpython-mpr121 +adafruit-circuitpython-mpu6050 +adafruit-circuitpython-ssd1306 +adafruit-circuitpython-pca9685 +adafruit-circuitpython-servokit +adafruit-circuitpython-apds9960 +adafruit-circuitpython-seesaw +sparkfun-qwiic +sparkfun-qwiic-joystick +sparkfun-qwiic-vl53l1x +Adafruit-GPIO +Adafruit-PlatformDetect +Adafruit-PureIO +Adafruit-SSD1306 +pyftdi +pyserial +pyusb +rpi-ws281x +RPi.GPIO +spidev +sysv-ipc +sparkfun-qwiic-proximity +adafruit-circuitpython-busdevice +sparkfun-pi-servo-hat +adafruit-circuitpython-lsm6ds +sparkfun-qwiic-button +adafruit-circuitpython-pcf8574 + sparkfun-qwiic-proximity \ No newline at end of file diff --git a/Lab 4/servo_test.py b/Lab 4/servo_test.py index cb8b094ccb..6051250667 100644 --- a/Lab 4/servo_test.py +++ b/Lab 4/servo_test.py @@ -1,28 +1,28 @@ -import time -from adafruit_servokit import ServoKit - -# Set channels to the number of servo channels on your kit. -# There are 16 channels on the PCA9685 chip. -kit = ServoKit(channels=16) - -# Name and set up the servo according to the channel you are using. -servo = kit.servo[2] - -# Set the pulse width range of your servo for PWM control of rotating 0-180 degree (min_pulse, max_pulse) -# Each servo might be different, you can normally find this information in the servo datasheet -servo.set_pulse_width_range(500, 2500) - -while True: - try: - # Set the servo to 180 degree position - servo.angle = 180 - time.sleep(2) - # Set the servo to 0 degree position - servo.angle = 0 - time.sleep(2) - - except KeyboardInterrupt: - # Once interrupted, set the servo back to 0 degree position - servo.angle = 0 - time.sleep(0.5) - break +import time +from adafruit_servokit import ServoKit + +# Set channels to the number of servo channels on your kit. +# There are 16 channels on the PCA9685 chip. +kit = ServoKit(channels=16) + +# Name and set up the servo according to the channel you are using. 
+servo = kit.servo[2] + +# Set the pulse width range of your servo for PWM control of rotating 0-180 degree (min_pulse, max_pulse) +# Each servo might be different, you can normally find this information in the servo datasheet +servo.set_pulse_width_range(500, 2500) + +while True: + try: + # Set the servo to 180 degree position + servo.angle = 180 + time.sleep(2) + # Set the servo to 0 degree position + servo.angle = 0 + time.sleep(2) + + except KeyboardInterrupt: + # Once interrupted, set the servo back to 0 degree position + servo.angle = 0 + time.sleep(0.5) + break diff --git a/Lab 5/README.md b/Lab 5/README.md index 73770087a4..43b11a2832 100644 --- a/Lab 5/README.md +++ b/Lab 5/README.md @@ -1,3 +1,118 @@ +Team Members: jc3828 Junxiong Chen, cc2952 Chiahsuan Chang +# Part C + +The user plays a word-guessing game (hangman) on the Raspberry Pi. +They show hand gestures to a camera, the system detects the letter via the sign-language classifier, and the guessed letter is shown on display. Small oled shows the hangman state. + + +Observed Behavior + +1. When does the system work as intended? + + Works well when the user holds their hand steady and toward the camera. + +Performs best in good lighting conditions. + +Frequently correct for letters with distinct shapes (e.g., A, L, V, Y). + +2. When does the system fail? + When the user moves too fast. + For letters with similar hand shapes (e.g., M, N, T). +4. Why does it fail? + The landmark detection becomes unstable when the hand is not in frame or moves. + Some letters share very similar finger positions → insufficient feature separation. + Small camera resolution and model trained on limited dataset. + +5. Possible additional failure scenarios + Multiple hands in the frame. + Very different hand sizes or orientations than training data. + + +User Experience Reflection +1. Are users aware of uncertainties? + + Only partially. They see misclassifications but may not know the cause (lighting, angle, etc.). + +2. How bad is a misclassification? + + Mild frustration during gameplay. + + Does not prevent using the system — users can re-attempt gestures. + +3. How to address this? + + Add a “confirm letter” gesture (e.g., hold the sign for 1.5 seconds). + Display a small indicator when the model is “confident.” + Improve training data or add personalized calibration. + +4. Sense-Making Optimizations + Smooth predictions over time (majority vote). + Track hand bounding box to normalize scale better. + Use a lightweight deep model trained on more representative data. + + +# Part D — Characterizing the Observant System + +System Being Characterized: + +Sign language camera recognition + word guessing interaction. + +What can the system be used for? + +Non-verbal input, accessibility tools, small games. +Good environment? + +Well-lit room, stable camera, single hand clearly visible. +Bad environment? + +Dim light, cluttered background, fast movement, multiple hands. +When will it break? + +When gestures look too similar or hand tracking fails. + +How will it break? + +It outputs the wrong letter or no detection at all. + +Other behaviors? + +Encourages slower, intentional hand movement. + +How does it feel to use? + +Playful, a bit slow, body-involved, requires patience. + + + + +User Feedback (Summary) + +Users found the concept fun and novel, especially seeing the guessed letters and hangman displayed on the small OLED. + +Users reported that accuracy could be improved, especially for similar letter forms. 
+ +Holding the hand steady for a correct detection required practice, which some users found slightly slow. + +Overall reaction: “It works and it’s cute, but could be smoother and more accurate.” + + + +# Implementation Details (code under `./word_guessing_with_sign/`) +1. Download the dataset from Kaggle with `get_dataset.py` +2. Extract landmark features with MediaPipe using `create_datasets.py` +3. Train the classifier with `train_classifier.py`, which dumps the model to `model.p` +4. Run the main application `main.py` + Game logic is in `main.py`; sign detection is in `sign_detector.py` + +References: +https://github.com/computervisioneng/sign-language-detector-python +Video Demos: +https://drive.google.com/file/d/1SW5_jm0X6H_66JZrcg_LkmduQ3dNAiSn/view?usp=sharing +https://drive.google.com/file/d/1n7s3EWm2OHqjuYbMLcpwIdFZQUA4my1I/view?usp=sharing +
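One of the sense-making optimizations noted above, smoothing predictions over time with a majority vote, could be prototyped roughly as in the sketch below. This is a minimal illustration rather than the code in `sign_detector.py`: `classify_frame()` is a hypothetical stand-in for the per-frame classifier output, and the window size and agreement threshold are assumed values, not tuned on this project.

```python
from collections import Counter, deque

# Minimal sketch: majority-vote smoothing over recent per-frame predictions.
# classify_frame(frame) is a hypothetical stand-in for the per-frame classifier;
# WINDOW and MIN_AGREEMENT are assumed values, not taken from the project.
WINDOW = 15          # how many recent frames to vote over
MIN_AGREEMENT = 10   # votes needed before a letter is accepted

def smoothed_letter(recent, prediction):
    """Record the newest per-frame prediction and return a letter only once
    the sliding window agrees strongly enough; otherwise return None."""
    if prediction is not None:
        recent.append(prediction)
    if len(recent) < WINDOW:
        return None
    letter, votes = Counter(recent).most_common(1)[0]
    return letter if votes >= MIN_AGREEMENT else None

recent = deque(maxlen=WINDOW)
# Inside the camera loop one might call:
#   letter = smoothed_letter(recent, classify_frame(frame))
# and treat a non-None letter as the confirmed hangman guess.
```

Compared to accepting whichever letter the most recent frame produced, this trades a little latency for fewer spurious guesses while the hand is moving between signs.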
+ Instructions for Students (Click to Expand) + # Observant Systems **NAMES OF COLLABORATORS HERE** @@ -198,3 +313,4 @@ During the lecture, we mentioned questions to help characterize a material: Following exploration and reflection from Part 1, finish building your interactive system, and demonstrate it in use with a video. **\*\*\*Include a short video demonstrating the finished result.\*\*\*** +
diff --git a/Lab 5/word_guessing_with_sign/create_datasets.py b/Lab 5/word_guessing_with_sign/create_datasets.py new file mode 100644 index 0000000000..04c5492177 --- /dev/null +++ b/Lab 5/word_guessing_with_sign/create_datasets.py @@ -0,0 +1,50 @@ +import os +import pickle + +import mediapipe as mp +import cv2 + + +mp_hands = mp.solutions.hands +mp_drawing = mp.solutions.drawing_utils +mp_drawing_styles = mp.solutions.drawing_styles + +hands = mp_hands.Hands(static_image_mode=True, min_detection_confidence=0.3) + +DATA_DIR = './asl_dataset' + +data = [] +labels = [] +for dir_ in os.listdir(DATA_DIR): + for img_path in os.listdir(os.path.join(DATA_DIR, dir_)): + data_aux = [] + + x_ = [] + y_ = [] + + img = cv2.imread(os.path.join(DATA_DIR, dir_, img_path)) + img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) + + results = hands.process(img_rgb) + if results.multi_hand_landmarks: + for hand_landmarks in results.multi_hand_landmarks: + for i in range(len(hand_landmarks.landmark)): + x = hand_landmarks.landmark[i].x + y = hand_landmarks.landmark[i].y + + x_.append(x) + y_.append(y) + + for i in range(len(hand_landmarks.landmark)): + x = hand_landmarks.landmark[i].x + y = hand_landmarks.landmark[i].y + data_aux.append(x - min(x_)) + data_aux.append(y - min(y_)) + + print("trained", dir_) + data.append(data_aux) + labels.append(dir_) + +f = open('data.pickle', 'wb') +pickle.dump({'data': data, 'labels': labels}, f) +f.close() diff --git a/Lab 5/word_guessing_with_sign/data.pickle b/Lab 5/word_guessing_with_sign/data.pickle new file mode 100644 index 0000000000..a53ff53611 Binary files /dev/null and b/Lab 5/word_guessing_with_sign/data.pickle differ diff --git a/Lab 5/word_guessing_with_sign/get_dataset.py b/Lab 5/word_guessing_with_sign/get_dataset.py new file mode 100644 index 0000000000..4995d422c1 --- /dev/null +++ b/Lab 5/word_guessing_with_sign/get_dataset.py @@ -0,0 +1,6 @@ +import kagglehub + +# Download latest version +path = kagglehub.dataset_download("ayuraj/asl-dataset") + +print("Path to dataset files:", path) diff --git a/Lab 5/word_guessing_with_sign/main.py b/Lab 5/word_guessing_with_sign/main.py new file mode 100644 index 0000000000..eef7d7f44e --- /dev/null +++ b/Lab 5/word_guessing_with_sign/main.py @@ -0,0 +1,251 @@ +import csv +import random +import time +import subprocess +import digitalio +import board +from PIL import Image, ImageDraw, ImageFont +import adafruit_rgb_display.st7789 as st7789 +import board +import busio +import adafruit_ssd1306 +from sign_detector import camera + +# Configuration for CS and DC pins (these are FeatherWing defaults on M0/M4): +cs_pin = digitalio.DigitalInOut(board.D5) +dc_pin = digitalio.DigitalInOut(board.D25) +reset_pin = None + +# Config for display baudrate (default max is 24mhz): +BAUDRATE = 64000000 + +# Setup SPI bus using hardware SPI: +spi = board.SPI() + +# Create the ST7789 display: +disp = st7789.ST7789( + spi, + cs=cs_pin, + dc=dc_pin, + rst=reset_pin, + baudrate=BAUDRATE, + width=135, + height=240, + x_offset=53, + y_offset=40, +) + +buttonA = digitalio.DigitalInOut(board.D23) # GPI023 (PIN 16) +buttonB = digitalio.DigitalInOut(board.D24) # GPI024 (PIN 18) +buttonA.switch_to_input(pull=digitalio.Pull.UP) +buttonB.switch_to_input(pull=digitalio.Pull.UP) + +# Create the I2C interface. +i2c = busio.I2C(board.SCL, board.SDA) + +# Create the SSD1306 OLED class. +# The first two parameters are the pixel width and pixel height. Change these +# to the right size for your display! 
+oled = adafruit_ssd1306.SSD1306_I2C(128, 32, i2c) + +# start with a blank screen +oled.fill(0) +# we just blanked the framebuffer. to push the framebuffer onto the display, we call show() +oled.show() + +# Configuration for CS and DC pins (these are FeatherWing defaults on M0/M4): +cs_pin = digitalio.DigitalInOut(board.D5) +dc_pin = digitalio.DigitalInOut(board.D25) +reset_pin = None + +# Config for display baudrate (default max is 24mhz): +BAUDRATE = 64000000 + +# Setup SPI bus using hardware SPI: +spi = board.SPI() + +# Create the ST7789 display: +disp = st7789.ST7789( + spi, + cs=cs_pin, + dc=dc_pin, + rst=reset_pin, + baudrate=BAUDRATE, + width=135, + height=240, + x_offset=53, + y_offset=40, +) + +# Create blank image for drawing. +# Make sure to create image with mode 'RGB' for full color. +height = disp.width # we swap height/width to rotate it to landscape! +width = disp.height +image = Image.new("RGB", (width, height)) +rotation = 90 + +# Get drawing object to draw on image. +draw = ImageDraw.Draw(image) + +# Draw a black filled box to clear the image. +draw.rectangle((0, 0, width, height), outline=0, fill=(0, 0, 0)) +disp.image(image, rotation) +# Draw some shapes. +# First define some constants to allow easy resizing of shapes. +padding = -2 +top = padding +bottom = height - padding +# Move left to right keeping track of the current x position for drawing shapes. +x = 0 + +# Alternatively load a TTF font. Make sure the .ttf font file is in the +# same directory as the python script! +# Some other nice fonts to try: http://www.dafont.com/bitmap.php +font = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", 32) +font2 = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", 18) + +# Turn on the backlight +backlight = digitalio.DigitalInOut(board.D22) +backlight.switch_to_output() +backlight.value = True + + +# ↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓ +# Simulated camera() function (replace with your actual camera input) +#def camera(): +# return input("Enter a letter: ").strip().lower() + + +# ↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑ + +# Read a random word from CSV +def get_random_word(filename="words.csv"): + with open(filename, "r") as f: + reader = csv.DictReader(f) + words = [row["word"].strip().lower() for row in reader] + return random.choice(words) + + +def retry_window(): + draw.rectangle((0, 0, width, height), outline=0, fill=0) + text = "Retry?" + alabel = "No" + blabel = "Yes" + + text_bbox = draw.textbbox((0, 0), text, font=font) + text_width = text_bbox[2] - text_bbox[0] + text_height = text_bbox[3] - text_bbox[0] + + alabel_bbox = draw.textbbox((0, 0), alabel, font=font2) + alabel_width = alabel_bbox[2] - alabel_bbox[0] + alabel_height = alabel_bbox[3] - alabel_bbox[0] + + blabel_bbox = draw.textbbox((0, 0), blabel, font=font2) + blabel_width = blabel_bbox[2] - blabel_bbox[0] + blabel_height = blabel_bbox[3] - blabel_bbox[0] + + draw.text((width // 2 - text_width // 2, height // 2 - text_height // 2), text, font=font, fill="#FFFFFF") + draw.text((0, height - alabel_height), alabel, font=font2, fill="#FFFFFF") + draw.text((0, 0), blabel, font=font2, fill="#FFFFFF") + disp.image(image, rotation) + time.sleep(0.1) + + +# Simulated Pi screen display (replace with your actual display function) +def piscreen_display(text): + # Draw a black filled box to clear the image. 
+ draw.rectangle((0, 0, width, height), outline=0, fill=0) + + text_bbox = draw.textbbox((0, 0), text, font=font) + text_width = text_bbox[2] - text_bbox[0] + text_height = text_bbox[3] - text_bbox[0] + + draw.text((width // 2 - text_width // 2, height // 2 - text_height // 2), text, font=font, fill="#FFFFFF") + + disp.image(image, rotation) + time.sleep(0.1) + + +count = 0 + + +def hangman(): + global count + oled_display(count) + word = get_random_word() + guessed = ["_"] * len(word) + max_wrong = 6 + + piscreen_display(" ".join(guessed)) + + while count < max_wrong and "_" in guessed: + guess = camera() + if not guess or len(guess) != 1 or not guess.isalpha(): + piscreen_display("Invalid input.\nEnter a\nsingle letter.") + continue + + print(guess) + if guess in word: + for i, ch in enumerate(word): + if ch == guess: + guessed[i] = guess + piscreen_display("Correct!") + else: + count += 1 + piscreen_display(f"Wrong!") + oled_display(count) + + piscreen_display(" ".join(guessed)) + + if "_" not in guessed: + piscreen_display(f"You win!") + else: + piscreen_display(f"Game over\nWord: {word}") + time.sleep(5) + + +def oled_display(count): + image = Image.new("1", (oled.width, oled.height)) + draw = ImageDraw.Draw(image) + + draw.rectangle((0, 0, oled.width, oled.height), outline=0, fill=0) + + # Draw different parts depending on count + if count >= 0: + draw.line((90, 45, 90, 5), fill=255) # base + draw.line((90, 5, 20, 5), fill=255) # pole + draw.line((20, 5, 20, 40), fill=255) # top bar + draw.line((20, 20, 30, 20), fill=255) # rope + if count >= 1: + draw.ellipse((30, 15, 40, 25), outline=255) # head + if count >= 2: + draw.line((40, 20, 60, 20), fill=255) # body + if count >= 3: + draw.line((45, 20, 50, 12), fill=255) # left arm + if count >= 4: + draw.line((45, 20, 50, 28), fill=255) # right arm + if count >= 5: + draw.line((60, 20, 70, 12), fill=255) # left leg + if count >= 6: + draw.line((60, 20, 70, 28), fill=255) # right leg + + oled.image(image) + oled.show() + + +running = True +while running: + hangman() + print("retry") + while True: + a_pressed = not buttonA.value + b_pressed = not buttonB.value + retry_window() + if a_pressed: + print("pressed") + count = 0 + break # restart by breaking out to re-run hangman() + elif b_pressed: + running = False + break + time.sleep(0.1) diff --git a/Lab 5/word_guessing_with_sign/model.p b/Lab 5/word_guessing_with_sign/model.p new file mode 100644 index 0000000000..a347b9f412 Binary files /dev/null and b/Lab 5/word_guessing_with_sign/model.p differ diff --git a/Lab 5/word_guessing_with_sign/sign_detector.py b/Lab 5/word_guessing_with_sign/sign_detector.py new file mode 100644 index 0000000000..b23ba40c37 --- /dev/null +++ b/Lab 5/word_guessing_with_sign/sign_detector.py @@ -0,0 +1,117 @@ +import pickle + +import cv2 +import mediapipe as mp +import numpy as np +import time + +model_dict = pickle.load(open('./model.p', 'rb')) +model = model_dict['model'] + +window_name = "frame" +cv2.namedWindow(window_name) + +# Run the camera and detect signs +# return the predicted sign for detected hand +def camera(duration=5): + letter_counts = {} + start_time = time.time() + last_letter = None + stable_start = None + + print(f"Show a sign to the camera... 
(recording for {duration}s)") + + cap = cv2.VideoCapture(0) + + mp_hands = mp.solutions.hands + mp_drawing = mp.solutions.drawing_utils + mp_drawing_styles = mp.solutions.drawing_styles + + hands = mp_hands.Hands(static_image_mode=True, min_detection_confidence=0.3) + while True: + + data_aux = [] + x_ = [] + y_ = [] + + ret, frame = cap.read() + if not ret: + continue + H, W, _ = frame.shape + + frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) + + letter = None + results = hands.process(frame_rgb) + if results.multi_hand_landmarks: + for hand_landmarks in results.multi_hand_landmarks: + mp_drawing.draw_landmarks( + frame, # image to draw + hand_landmarks, # model output + mp_hands.HAND_CONNECTIONS, # hand connections + mp_drawing_styles.get_default_hand_landmarks_style(), + mp_drawing_styles.get_default_hand_connections_style()) + + for hand_landmarks in results.multi_hand_landmarks: + for i in range(len(hand_landmarks.landmark)): + x = hand_landmarks.landmark[i].x + y = hand_landmarks.landmark[i].y + + x_.append(x) + y_.append(y) + + for i in range(len(hand_landmarks.landmark)): + x = hand_landmarks.landmark[i].x + y = hand_landmarks.landmark[i].y + data_aux.append(x - min(x_)) + data_aux.append(y - min(y_)) + + x1 = int(min(x_) * W) - 10 + y1 = int(min(y_) * H) - 10 + + x2 = int(max(x_) * W) - 10 + y2 = int(max(y_) * H) - 10 + + prediction = model.predict([np.asarray(data_aux)[:42]]) + letter = str(prediction[0]) + + cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 0, 0), 4) + cv2.putText(frame,letter, (x1, y1 - 10), cv2.FONT_HERSHEY_SIMPLEX, 1.3, (0, 0, 0), 3, + cv2.LINE_AA) + + #print(letter) + if letter and letter == last_letter: + # continuing the same sign + if stable_start is None: + stable_start = time.time() + elapsed = time.time() - stable_start + if elapsed >= 0.3: # consider stable for 0.3s + letter_counts[letter] = letter_counts.get(letter, 0) + 1 + else: + stable_start = None + + #print(letter_counts) + last_letter = letter + cv2.imshow('frame', frame) + + # Time or key exit + if (time.time() - start_time) > duration: + break + if cv2.waitKey(1) & 0xFF == ord('q'): + break + + + cap.release() + + if not letter_counts: + return None + + # return the letter shown most consistently + best_letter = max(letter_counts, key=letter_counts.get) + #print(f"Detected letter: {best_letter}") + return best_letter + + +if __name__ == "__main__": + while True: + camera() diff --git a/Lab 5/word_guessing_with_sign/train_classifier.py b/Lab 5/word_guessing_with_sign/train_classifier.py new file mode 100644 index 0000000000..30be3d112d --- /dev/null +++ b/Lab 5/word_guessing_with_sign/train_classifier.py @@ -0,0 +1,56 @@ +import pickle + +from sklearn.ensemble import RandomForestClassifier +from sklearn.model_selection import train_test_split +from sklearn.metrics import accuracy_score +import numpy as np + + +dataset = pickle.load(open('./data.pickle', 'rb')) + +X_raw = dataset['data'] +y_raw = dataset['labels'] + +X_fixed = [] +y_fixed = [] + +for x, y in zip(X_raw, y_raw): + if len(x) == 42: + X_fixed.append(x) + y_fixed.append(y) + elif len(x) == 84: + X_fixed.append(x[:42]) # take first hand only + y_fixed.append(y) + else: + # skip corrupted/malformed data + continue + +X = np.array(X_fixed) +y = np.array(y_fixed) + +print("Fixed shapes:", X.shape, y.shape) + + + + +data = np.asarray(X_fixed) +labels = np.asarray(y_fixed) + +#print(data) +#print(labels) + +x_train, x_test, y_train, y_test = train_test_split(data, labels, test_size=0.2, shuffle=True, stratify=labels) + +model = 
RandomForestClassifier() + +model.fit(x_train, y_train) + +y_predict = model.predict(x_test) + +score = accuracy_score(y_predict, y_test) + +print('{}% of samples were classified correctly !'.format(score * 100)) + +f = open('model.p', 'wb') +pickle.dump({'model': model}, f) +f.close() diff --git a/Lab 5/word_guessing_with_sign/words.csv b/Lab 5/word_guessing_with_sign/words.csv new file mode 100644 index 0000000000..aabcca397b --- /dev/null +++ b/Lab 5/word_guessing_with_sign/words.csv @@ -0,0 +1,3 @@ +word +broken +robust diff --git a/Lab 6/README.md b/Lab 6/README.md index c23ff6153b..3a00b27493 100644 --- a/Lab 6/README.md +++ b/Lab 6/README.md @@ -1,3 +1,115 @@ +Team Members: jc3828 Junxiong Chen, cc2952 Chiahsuan Chang + +# Project Documentation + +Our project connects two Raspberry Pis to play a simple battleship game using MQTT for communication. Each Pi acts as a player — one host and one client. Players take turns selecting grid cells, and messages like “hit,” “miss,” or “end” are sent between the Pis through the MQTT broker. We used small image displays and touch input to make the game interactive. The main focus was on designing reliable communication and keeping both devices synchronized during gameplay. + + +When it’s the opponent’s turn, the game blocks and waits on an event or message from MQTT, effectively pausing the local player until a “turn” or “action” message is received. This ensures that each player can only make a move when it’s their turn, keeping both Pis synchronized without busy-waiting. + +## How to play the game + + +``` +Battleship Game Flow (Compact 3×4 Grid) + +Turn Player Action Grid State +0 Initial state + + 1 2 3 4 + ┌───┬───┬───┬───┐ + A │ │ │ │ │ + ├───┼───┼───┼───┤ + B │ │ │ │ │ + ├───┼───┼───┼───┤ + C │ │ │ │ │ + └───┴───┴───┴───┘ + +| 1 | Player 1 hits A1 (Battleship) | Map of Player 2 + + 1 2 3 4 + ┌───┬───┬───┬───┐ + A │ B │ │ │ │ + ├───┼───┼───┼───┤ + B │ │ │ │ │ + ├───┼───┼───┼───┤ + C │ │ │ │ │ + └───┴───┴───┴───┘ + +| 2 | Player 2 misses C2 | Map of Player 1 + + 1 2 3 4 + ┌───┬───┬───┬───┐ + A │ │ │ │ │ + ├───┼───┼───┼───┤ + B │ │ │ │ │ + ├───┼───┼───┼───┤ + C │ │ X │ │ │ + └───┴───┴───┴───┘ + +| 3 | Player 1 hits B3 (Destroyer) | Map of Player 2 + + 1 2 3 4 + ┌───┬───┬───┬───┐ + A │ B │ │ │ │ + ├───┼───┼───┼───┤ + B │ │ │ D │ │ + ├───┼───┼───┼───┤ + C │ │ │ │ │ + └───┴───┴───┴───┘ + +| 4 | Player 2 hits C4 (Submarine) | Map of Player 1 + + 1 2 3 4 + ┌───┬───┬───┬───┐ + A │ │ │ │ │ + ├───┼───┼───┼───┤ + B │ │ │ │ │ + ├───┼───┼───┼───┤ + C │ │ X │ │ S │ + └───┴───┴───┴───┘ + +| 5 | Player 1 finishes all ships → game ends | Map of Player 2 + + 1 2 3 4 + ┌───┬───┬───┬───┐ + A │ B │ B │ │ │ + ├───┼───┼───┼───┤ + B │ │ │ D │ │ + ├───┼───┼───┼───┤ + C │ │ │ │ │ + └───┴───┴───┴───┘ + +Legend: + • B → Battleship hit + • D → Destroyer hit + • S → Submarine hit + • X → Miss +``` + +## Videos +[Waiting for event](https://drive.google.com/file/d/1caxutPKolDBzbmR787EIQ7XXxEGbfHXk/view?usp=sharing) + +[Victory, all ships taken down](https://drive.google.com/file/d/1caxutPKolDBzbmR787EIQ7XXxEGbfHXk/view?usp=sharing) + +Grid Screen + + +# Reflection on Learnings + +Through this project, we got hands-on experience building a multiplayer game across two Raspberry Pis using MQTT. We learned how messages are sent and received, how to keep both devices in sync, and how even small delays or missed messages can confuse the players. 
We also realized how important it is to give clear feedback on the display, so players know what’s happening.
+
+
+One thing to note is that because this is a two-player game, the setup cannot be completely symmetric. If both Pis try to start as “host” or “client” at the same time, the game will hang or fail to start. To handle this, we use an environment variable (or a simple configuration flag) to decide which Pi is the host and takes the first turn (a minimal sketch of this is included after the user feedback below). This ensures the game always starts correctly and both devices stay in sync.
+
+# User Feedback
+We did not test the game with people outside our team, so we only have our own impressions. Playing it ourselves, we found that the turn synchronization mostly worked, but the display updates were sometimes slow or confusing. We also noticed that guessing an empty cell (a miss) was a bit hard to read without better visual feedback. Overall, it was a useful experience in managing two devices, using MQTT, and designing a playable interface, but external testing would help identify real usability issues.
+
+
+
+
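+# Host/Client Role Sketch
+
+This is only a sketch of the role flag mentioned in the reflection above, not the code we actually ran: `main.py` in this diff hard-codes `IS_HOST = True`, and the environment variable name `BATTLESHIP_ROLE` is our own placeholder. The `start_mqtt()`, `send_message()`, and `wait_for_message()` calls are the helpers from `mqtt.py` later in this diff, and the `"start"` payload matches the one `main.py` publishes.
+
+```python
+# Sketch only (assumption): pick the role from an environment variable instead
+# of the hard-coded IS_HOST flag in main.py. BATTLESHIP_ROLE is a hypothetical
+# name, not something the lab starter code defines.
+import os
+import mqtt  # the wrapper module in Lab 6/battle_ship/mqtt.py
+
+IS_HOST = os.environ.get("BATTLESHIP_ROLE", "client").lower() == "host"
+PLAYER_ID = "host" if IS_HOST else "client"
+
+mqtt.start_mqtt()
+if IS_HOST:
+    # The host announces the game and takes the first turn.
+    mqtt.send_message({"action": "start", "player": PLAYER_ID})
+else:
+    # The client blocks here until the host's "start" message arrives,
+    # the same wait_for_message() pattern the game loop uses each turn.
+    msg = mqtt.wait_for_message()
+    print("Game started:", msg)
+```
+
+With this setup one Pi would run `BATTLESHIP_ROLE=host python main.py` and the other `BATTLESHIP_ROLE=client python main.py`; since both Pis share a single MQTT topic, only one of them should claim the host role.
+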
+ Instructions (Click to Expand) + # Distributed Interaction **NAMES OF COLLABORATORS HERE** @@ -241,3 +353,5 @@ Before submitting: --- Resources: [MQTT Guide](https://www.hivemq.com/mqtt-essentials/) | [Paho Python](https://www.eclipse.org/paho/index.php?page=clients/python/docs/index.php) | [Flask-SocketIO](https://flask-socketio.readthedocs.io/) +
+ diff --git a/Lab 6/battle_ship/defeat.png b/Lab 6/battle_ship/defeat.png new file mode 100644 index 0000000000..8f07a7efae Binary files /dev/null and b/Lab 6/battle_ship/defeat.png differ diff --git a/Lab 6/battle_ship/main.py b/Lab 6/battle_ship/main.py new file mode 100644 index 0000000000..7761974ef2 --- /dev/null +++ b/Lab 6/battle_ship/main.py @@ -0,0 +1,297 @@ +import pandas as pd +import time +import subprocess +import digitalio +import board +import mqtt +from PIL import Image, ImageDraw, ImageFont +import adafruit_rgb_display.st7789 as st7789 +import busio +import adafruit_rgb_display.ili9341 as ili9341 +import adafruit_rgb_display.st7789 as st7789 # pylint: disable=unused-import +import adafruit_rgb_display.hx8357 as hx8357 # pylint: disable=unused-import +import adafruit_rgb_display.st7735 as st7735 # pylint: disable=unused-import +import adafruit_rgb_display.ssd1351 as ssd1351 # pylint: disable=unused-import +import adafruit_rgb_display.ssd1331 as ssd1331 # pylint: disable=unused-import + +import adafruit_mpr121 + +i2c = busio.I2C(board.SCL, board.SDA) + +mpr121 = adafruit_mpr121.MPR121(i2c) + +# Configuration for CS and DC pins (these are FeatherWing defaults on M0/M4): +cs_pin = digitalio.DigitalInOut(board.D5) +dc_pin = digitalio.DigitalInOut(board.D25) +reset_pin = None + +# Config for display baudrate (default max is 24mhz): +BAUDRATE = 64000000 + +# Setup SPI bus using hardware SPI: +spi = board.SPI() + +# Create the ST7789 display: +disp = st7789.ST7789( + spi, + cs=cs_pin, + dc=dc_pin, + rst=reset_pin, + baudrate=BAUDRATE, + width=135, + height=240, + x_offset=53, + y_offset=40, +) + +# Create blank image for drawing. +# Make sure to create image with mode 'RGB' for full color. +height = disp.width # we swap height/width to rotate it to landscape! +width = disp.height +image = Image.new("RGB", (width, height)) +rotation = 90 + +# Get drawing object to draw on image. +draw = ImageDraw.Draw(image) + +font = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", 22) + +IS_HOST = True +def wait_for_touch(): + print("Touch a pad to choose ROW...") + row_chosen = False + while not row_chosen: + for i in range(3): + if mpr121[i].value: + if i == 0: + cell = "A" + elif i == 1: + cell = "B" + elif i == 2: + cell = "C" + print(f"Row selected: {cell}") + row_chosen = True + while mpr121[i].value: + time.sleep(0.2) + break + time.sleep(0.05) + + print("Touch a pad to choose COLUMN...") + col_chosen = False + while not col_chosen: + for i in range(4): + if mpr121[i].value: + cell += str(i + 1) + print(f"Column selected: {i + 1}") + col_chosen = True + while mpr121[i].value: + time.sleep(0.2) + break + time.sleep(0.05) + + print(f"? 
Selected cell: {cell}") + return cell + + +def draw_square(C, side): + cx = point_dict[C][0] + cy = point_dict[C][1] + x0 = cx - side / 2 + y0 = cy - side / 2 + x1 = cx + side / 2 + y1 = cy + side / 2 + draw.rectangle((x0, y0, x1, y1), outline="white", width=3) + disp.image(image, rotation=90) + + +def draw_cross(C, side): + cx = point_dict[C][0] + cy = point_dict[C][1] + x0 = cx - side / 2 + y0 = cy - side / 2 + x1 = cx + side / 2 + y1 = cy + side / 2 + draw.line((x0, y0, x1, y1), fill="white", width=3) + draw.line((x0, y1, x1, y0), fill="white", width=3) + disp.image(image, rotation=90) + + +def draw_word(ship, C): + cx = point_dict[C][0] + cy = point_dict[C][1] + if ship == "battleship": + s = "B" + if ship == "destroyer": + s = "D" + if ship == "submarine": + s = "S" + s_bbox = draw.textbbox((0, 0), s, font=font) + s_width = s_bbox[2] - s_bbox[0] + s_height = s_bbox[3] - s_bbox[0] + draw.text((cx - s_width // 2, cy - s_height // 2), s, font=font, fill="white") + disp.image(image, rotation=90) + + +def load_ships(csv_path="ships.csv"): + df = pd.read_csv(csv_path) + ships = {} + for _, row in df.iterrows(): + ship_type = row["type"] + cells = [c.strip().upper() for c in row["cells"].split(",")] + ships[ship_type] = cells + return ships + + +def check_hit(cell, ships): + for ship, cells in ships.items(): + if cell in cells: + cells.remove(cell) + print(f"Hit {ship}!") + if len(cells) == 0: + print(f"Sink {ship}!") + return True, ship + print("Miss!") + return False, None + + +def display(image_name): + image = Image.open(image_name) + image = image.rotate(270, expand=True) + backlight = digitalio.DigitalInOut(board.D22) + backlight.switch_to_output() + backlight.value = True + + # Scale the image to the smaller screen dimension + image_ratio = image.width / image.height + screen_ratio = width / height + if screen_ratio < image_ratio: + scaled_width = image.width * height // image.height + scaled_height = height + else: + scaled_width = width + scaled_height = image.height * width // image.width + image = image.resize((scaled_width, scaled_height), Image.BICUBIC) + + # Crop and center the image + x = scaled_width // 2 - width // 2 + y = scaled_height // 2 - height // 2 + image = image.crop((x, y, x + width, y + height)) + + # Display image. + disp.image(image, rotation=90) + + +# main loop +ships = load_ships("ships.csv") # !!! to build a map for now, comment it when server is implemented + +''' +myships = load_ships("ships.csv") +send_map(myships) +ships = read_map() # opponent map +''' + +print("Ship map loaded") +for k, v in ships.items(): + print(f" {k}: {v}") + +turn = True +all_cells = sum(len(v) for v in ships.values()) + +point_dict = {'A1': [60, 25], + 'A2': [100, 25], + 'A3': [140, 25], + 'A4': [180, 25], + 'B1': [60, 65], + 'B2': [100, 65], + 'B3': [140, 65], + 'B4': [180, 65], + 'C1': [60, 105], + 'C2': [100, 105], + 'C3': [140, 105], + 'C4': [180, 105]} + +draw_square('A1', 40) +draw_square('A2', 40) +draw_square('A3', 40) +draw_square('A4', 40) +draw_square('B1', 40) +draw_square('B2', 40) +draw_square('B3', 40) +draw_square('B4', 40) +draw_square('C1', 40) +draw_square('C2', 40) +draw_square('C3', 40) +draw_square('C4', 40) + +mqtt.start_mqtt() +PLAYER_ID = "host" if IS_HOST else "client" +if IS_HOST: + print("You are the host. Starting the game...") + mqtt.send_message({"action": "start", "player": PLAYER_ID}) +else: + print("You are the client. 
Waiting for the host to start...") + +gameplay = True +turn = IS_HOST # host starts first + +# memorize guess to avoid repeated guesses +guessed_cells = set() +while gameplay: + if not turn: + print("Waiting for opponent...") + msg = mqtt.wait_for_message() + if not msg: + continue + if "sender" in msg and msg["sender"] == PLAYER_ID: + continue # ignore own messages + + action = msg.get("action") + + if action == "start": + print("Game started! Your turn.") + turn = True + + elif action == "hit": + print(f"Opponent hit {msg.get('cell')} ({msg.get('ship', '')})!") + turn = True # your turn now + + elif action == "miss": + print(f"Opponent missed at {msg.get('cell')}.") + turn = True # your turn now + + elif action == "end": + print("Game over — you lost!") + display("defeat.png") + break + + continue # go back to loop + + # --- your turn --- + print("Your turn!") + guess = wait_for_touch() + while guess in guessed_cells: + print("You already guessed that cell. Choose another one.") + guess = wait_for_touch() + guessed_cells.add(guess) + hit, ship = check_hit(guess, ships) + + if hit: + draw_word(ship, guess) + mqtt.send_message({"action": "hit", "cell": guess, "ship": ship, "sender": PLAYER_ID}) + else: + draw_cross(guess, 40) + mqtt.send_message({"action": "miss", "cell": guess, "sender": PLAYER_ID}) + + # --- check for victory --- + all_cells = sum(len(v) for v in ships.values()) + if all_cells == 0: + print("You win!") + display("victory.png") + mqtt.send_message({"action": "end", "sender": PLAYER_ID}) + gameplay = False + break + + # end of your turn, now opponent’s turn + turn = False + +mqtt.stop_mqtt() diff --git a/Lab 6/battle_ship/mqtt.py b/Lab 6/battle_ship/mqtt.py new file mode 100644 index 0000000000..712512aa60 --- /dev/null +++ b/Lab 6/battle_ship/mqtt.py @@ -0,0 +1,94 @@ +import paho.mqtt.client as mqtt +import ssl +import json +import uuid +import threading +import queue + +MQTT_BROKER = 'farlab.infosci.cornell.edu' +MQTT_PORT = 1883 +MQTT_TOPIC = 'IDD/battleship/game' +MQTT_USERNAME = 'idd' +MQTT_PASSWORD = 'device@theFarm' + +mqtt_client = None +game_event = threading.Event() +latest_message = None +event_queue = queue.Queue() + +def on_connect(client, userdata, flags, rc): + if rc == 0: + print(f'Connected to {MQTT_BROKER}:{MQTT_PORT}') + client.subscribe(MQTT_TOPIC) + print(f'Subscribed to topic: {MQTT_TOPIC}') + else: + print(f'MQTT connection failed: {rc}') + +def on_message(client, userdata, msg): + global latest_message + try: + payload = msg.payload.decode('utf-8') + data = {} + try: + data = json.loads(payload) + except Exception: + data = {'raw': payload} + latest_message = data + game_event.set() + try: + event_queue.put_nowait(data) + except queue.Full: + pass + print(f'Received: {data}') + except Exception as e: + print(f'Error in on_message: {e}') + +def wait_for_message(timeout=None): + """Block until any MQTT message is received.""" + global latest_message + if game_event.wait(timeout): + game_event.clear() + msg = latest_message + latest_message = None + return msg + return None + +def start_mqtt(): + global mqtt_client + try: + mqtt_client = mqtt.Client(str(uuid.uuid1())) + if MQTT_PORT == 8883: + mqtt_client.tls_set(cert_reqs=ssl.CERT_NONE) + mqtt_client.username_pw_set(MQTT_USERNAME, MQTT_PASSWORD) + mqtt_client.on_connect = on_connect + mqtt_client.on_message = on_message + mqtt_client.connect(MQTT_BROKER, port=MQTT_PORT, keepalive=60) + mqtt_client.loop_start() + print('MQTT bridge started.') + return True + except Exception as e: + print(f'MQTT failed 
to start: {e}') + return False + +def stop_mqtt(): + global mqtt_client + if mqtt_client: + mqtt_client.loop_stop() + mqtt_client.disconnect() + mqtt_client = None + print('MQTT stopped.') + +def send_message(payload): + """Send any message to the unified topic.""" + global mqtt_client + if not mqtt_client: + print('MQTT not running') + return False + try: + mqtt_client.publish(MQTT_TOPIC, json.dumps(payload), qos=1) + print(f'→ Published: {payload}') + return True + except Exception as e: + print(f'Error publishing: {e}') + return False + diff --git a/Lab 6/battle_ship/ships.csv b/Lab 6/battle_ship/ships.csv new file mode 100644 index 0000000000..3db1cf53c0 --- /dev/null +++ b/Lab 6/battle_ship/ships.csv @@ -0,0 +1,4 @@ +type,cells +battleship,"A1,A2,A3,A4" +destroyer,"B1,B2,B3" +submarine,"C2,C3" \ No newline at end of file diff --git a/Lab 6/battle_ship/victory.png b/Lab 6/battle_ship/victory.png new file mode 100644 index 0000000000..c14638a970 Binary files /dev/null and b/Lab 6/battle_ship/victory.png differ diff --git a/Lab 6/grid.jpg b/Lab 6/grid.jpg new file mode 100644 index 0000000000..cc5280f85e Binary files /dev/null and b/Lab 6/grid.jpg differ diff --git a/README.md b/README.md index 086eafada8..79cbfea54d 100644 --- a/README.md +++ b/README.md @@ -1,4 +1,4 @@ -# [Your name here]'s-Lab-Hub +# Junxiong's-Lab-Hub for [Interactive Device Design](https://github.com/FAR-Lab/Developing-and-Designing-Interactive-Devices/) Please place links here to the README.md's for each of your labs here: