-Write out what you imagine the dialogue to be. Use cards, post-its, or whatever method helps you develop alternatives or group responses.
-\*\***Please describe and document your process.**\*\*
+### 💡 Idea
+Modern life often leaves people emotionally disconnected, even when surrounded by others.
+**Fairy Mate** is a friendly, speech-enabled companion that helps users express and reflect on their feelings.
-### Acting out the dialogue
+### 🌈 Scenario
+1. Fairy Mate greets the user: *“How was your day?”*
+2. The user shares their thoughts and emotions.
+3. Fairy Mate offers empathetic responses or practical advice.
+4. The device automatically creates a **journal entry** summarizing the conversation.
-Find a partner, and *without sharing the script with your partner* try out the dialogue you've designed, where you (as the device designer) act as the device you are designing. Please record this interaction (for example, using Zoom's record feature).
+### ⚙️ Key Functions
+- **Emotion Detection:** Analyzes environmental and biological signals to gauge mood.
+- **Emotional Support:** Provides comfort and encouragement.
+- **Personal Journaling:** Logs conversations and emotions for long-term reflection.
-\*\***Describe if the dialogue seemed different than what you imagined when it was acted out, and how.**\*\*
+---
-### Wizarding with the Pi (optional)
-In the [demo directory](./demo), you will find an example Wizard of Oz project. In that project, you can see how audio and sensor data is streamed from the Pi to a wizard controller that runs in the browser. You may use this demo code as a template. By running the `app.py` script, you can see how audio and sensor data (Adafruit MPU-6050 6-DoF Accel and Gyro Sensor) is streamed from the Pi to a wizard controller that runs in the browser `http://
-### What lessons can you take away from the WoZ interactions for designing a more autonomous version of the system?
+---
+
+## 🎬 Part 2-B. Demo
+
+https://github.com/user-attachments/assets/e1faef80-e93d-4fc6-898c-d23e209f9f9c
+
+
+---
+
+## 🧪 Part 2-C. System Testing & Reflections
+
+### ✅ What Worked Well
-\*\**your answer here*\*\*
+> 💬 What worked well about the system?
+1. The touch sensor controller provided a simple and intuitive way for users to interact with Fairy Mate.
+2. The color-coded modes (blue for Emotional Support and green for Solution Support) clearly indicated which mode was active, making it easy to recognize the device’s state and helping users feel confident that their input was recognized.
-### How could you use your system to create a dataset of interaction? What other sensing modalities would make sense to capture?
-\*\**your answer here*\*\*
+### ⚠️ What Could Improve (advice from other test users)
+> 💬 What didn't work well about the system?
+1. The sensitivity of the touch sensor sometimes caused accidental activations or missed touches, especially if the user’s finger didn’t make full contact with the pad.
+2. Because the pads were numbered rather than labeled with words or icons, some users had to remember which numbers corresponded to each mode.
+3. The device could provide more personalized feedback based on tone or emotional cues, rather than relying only on preset responses.
+4. Adding clearer visual or sound cues for when the device is “listening” versus “processing” would also make the interaction feel more intuitive.
+### 🧩 Lessons from WoZ Interactions
+
+> 💬 What lessons can you take away from the WoZ interactions for designing a more autonomous version of the system?
+
+To make Fairy Mate more autonomous:
+- Collect **anonymized interaction data** (voice, touch, response timing, user sentiment).
+- Label data by **emotional state** and **support type**.
+- Train the system to adapt its tone and timing dynamically (a rough logging sketch follows below).
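+
+As a rough illustration of those points, each exchange could be appended to a small JSON-lines log at the moment the main loop already knows the active mode and detected mood. This is only a sketch, not part of the current `fairymate.py`; the file path and field names are assumptions.
+
+```python
+import json
+import time
+
+LOG_PATH = "interaction_log.jsonl"  # assumed location for the dataset
+
+def log_interaction(mode, mood, user_text, response_text, started_at):
+    """Append one anonymized, labeled exchange to a JSON-lines file."""
+    record = {
+        "timestamp": time.time(),
+        "support_type": mode,            # "emotional" or "solution"
+        "emotional_state": mood,         # "positive" / "neutral" / "negative"
+        "user_text": user_text,          # could be hashed or dropped for anonymity
+        "response_text": response_text,
+        "response_delay_s": round(time.time() - started_at, 2),
+    }
+    with open(LOG_PATH, "a") as f:
+        f.write(json.dumps(record) + "\n")
+
+# Example usage inside the existing main loop, right after a reply is spoken:
+# started = time.time()
+# user_text = listen()
+# mood = analyze_mood(user_text)
+# reply = emotional_response(mood) if mode == "emotional" else solution_response(mood)
+# speak(reply)
+# log_interaction(mode, mood, user_text, reply, started)
+```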
+
+### 📉 Building an Interaction Dataset
+
+> 💬 How could you use your system to create a dataset of interaction? What other sensing modalities would make sense to capture?
+
+| Sensor | Purpose |
+|--------|----------|
+| 🎤 **Microphone** | Detect tone, stress, or hesitation. |
+| 📷 **Facial Recognition** | Identify expressions (smile, frown, eye contact). |
+| 🩺 **Motion Sensors (IMU)** | Track restlessness or relaxation. |
+| 🌡️ **Environmental Sensors** | Adjust feedback based on context (e.g., quiet/dark room → bedtime suggestion). |
+| ✋ **Touch Pressure** | Infer emotional intensity from contact force or duration. |
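+
+If those extra modalities were captured, each snapshot could be stored in a simple record alongside the conversation log. A minimal sketch is below; every field name is hypothetical and would depend on the actual sensors chosen.
+
+```python
+from dataclasses import dataclass, asdict
+from typing import Optional
+
+@dataclass
+class MultimodalSample:
+    """One hypothetical snapshot combining the modalities listed above."""
+    timestamp: float
+    voice_pitch_hz: Optional[float] = None    # from microphone analysis
+    facial_expression: Optional[str] = None   # e.g. "smile", "frown"
+    motion_rms: Optional[float] = None        # restlessness estimate from an IMU
+    ambient_lux: Optional[float] = None       # environmental light level
+    touch_duration_s: Optional[float] = None  # proxy for emotional intensity
+    label_mood: Optional[str] = None          # self-reported or annotated label
+
+# A labeled sample could then be serialized next to the conversation log:
+sample = MultimodalSample(timestamp=0.0, facial_expression="smile", label_mood="positive")
+print(asdict(sample))
+```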
+
+---
+### 🌻 Summary
+Fairy Mate blends **speech, emotion sensing, and multi-modal feedback** to create a compassionate, responsive digital companion.
+Through iterative design, testing, and user reflection, we refined it from a concept into a system that **listens with empathy and responds with understanding**.
diff --git a/Lab 3/assets/FairyMate.png b/Lab 3/assets/FairyMate.png
new file mode 100644
index 0000000000..67add7cf7d
Binary files /dev/null and b/Lab 3/assets/FairyMate.png differ
diff --git a/Lab 3/assets/FairyMate2.png b/Lab 3/assets/FairyMate2.png
new file mode 100644
index 0000000000..72e0b7ad12
Binary files /dev/null and b/Lab 3/assets/FairyMate2.png differ
diff --git a/Lab 3/fairymate.py b/Lab 3/fairymate.py
new file mode 100644
index 0000000000..5185b32f85
--- /dev/null
+++ b/Lab 3/fairymate.py
@@ -0,0 +1,153 @@
+import time
+import board
+import busio
+import adafruit_mpr121
+import speech_recognition as sr
+import subprocess
+
+# --- Setup hardware ---
+i2c = busio.I2C(board.SCL, board.SDA)
+mpr121 = adafruit_mpr121.MPR121(i2c)
+
+import digitalio
+from PIL import Image, ImageDraw, ImageFont
+import adafruit_rgb_display.st7789 as st7789
+
+# --- ST7789 setup (adjust pins if needed) ---
+cs_pin = digitalio.DigitalInOut(board.D5)
+dc_pin = digitalio.DigitalInOut(board.D25)
+reset_pin = None
+
+BAUDRATE = 64000000
+
+spi = board.SPI()
+
+DISPLAY_WIDTH = 135
+DISPLAY_HEIGHT = 240
+disp = st7789.ST7789(
+ spi,
+ cs=cs_pin,
+ dc=dc_pin,
+ rst=reset_pin,
+ baudrate=BAUDRATE,
+ width=DISPLAY_WIDTH,
+ height=DISPLAY_HEIGHT,
+ x_offset=53,
+ y_offset=40, # adjust depending on your display orientation
+)
+
+# --- Create image buffer ---
+image = Image.new("RGB", (disp.width, disp.height))
+draw = ImageDraw.Draw(image)
+font = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf", 20)
+
+def update_mode_display(mode):
+ # Clear the screen
+ draw.rectangle((0, 0, disp.width, disp.height), outline=0, fill=(0, 0, 0))
+
+ # Choose text and color
+ if mode == "emotional":
+ text = "Emotional\nSupport <3"
+ color = (255, 100, 150)
+ else:
+ text = "Solution\nSupport :)"
+ color = (100, 180, 255)
+
+ # Measure text size (Pillow ≥10 uses textbbox)
+ try:
+ bbox = draw.textbbox((0, 0), text, font=font)
+ w, h = bbox[2] - bbox[0], bbox[3] - bbox[1]
+ except AttributeError:
+ w, h = font.getsize(text)
+
+ # Draw centered text
+ draw.text(
+ ((disp.width - w) // 2, (disp.height - h) // 2),
+ text,
+ font=font,
+ fill=color,
+ align="center"
+ )
+
+ disp.image(image)
+
+
+# --- Speech recognition ---
+recognizer = sr.Recognizer()
+
+# --- System state ---
+mode = "solution" # default mode
+print("System ready! Default mode: Solution Support.")
+
+# --- Speak using espeak ---
+def speak(text):
+ print(f"Pi says: {text}")
+ subprocess.run(["espeak", "-s", "165", text])
+
+# --- Listen for voice input ---
+def listen():
+ with sr.Microphone() as source:
+ recognizer.adjust_for_ambient_noise(source, duration=0.5)
+ print("🎤 Listening...")
+ audio = recognizer.listen(source)
+ try:
+ text = recognizer.recognize_google(audio, language="en-US")
+ print(f"User said: {text}")
+ return text
+ except Exception:
+ print("Could not understand speech.")
+ return ""
+
+# --- Simple keyword-based mood detection ---
+def analyze_mood(user_text):
+ if not user_text:
+ return "neutral"
+ text = user_text.lower()
+ if any(w in text for w in ["good", "great", "happy", "awesome", "amazing"]):
+ return "positive"
+ elif any(w in text for w in ["bad", "sad", "tired", "angry", "terrible"]):
+ return "negative"
+ else:
+ return "neutral"
+
+def emotional_response(mood):
+ responses = {
+ "positive": "That’s wonderful to hear! I’m so glad you’re feeling good today.",
+ "neutral": "I see. It sounds like a calm day. I’m here if you’d like to share more.",
+ "negative": "I’m really sorry it’s been a tough day. Remember, it’s okay to rest and take things slow."
+ }
+ return responses[mood]
+
+def solution_response(mood):
+ responses = {
+ "positive": "Awesome! Maybe keep that momentum going with something you enjoy.",
+ "neutral": "Alright. Maybe you could set a small goal to make the day more productive.",
+ "negative": "Sounds like you’ve had a rough day. Maybe try writing down one thing you can solve step by step."
+ }
+ return responses[mood]
+
+# --- Main loop ---
+while True:
+ for i in range(12):
+ if mpr121[i].value:
+ if i < 6:
+ mode = "emotional"
+ speak("Switched to emotional support mode.")
+ else:
+ mode = "solution"
+ speak("Switched to solution support mode.")
+
+ update_mode_display(mode)
+ time.sleep(1)
+ speak("How’s your day?")
+ user_text = listen()
+ mood = analyze_mood(user_text)
+
+ if mode == "emotional":
+ speak(emotional_response(mood))
+ else:
+ speak(solution_response(mood))
+
+ print(f"Mode: {mode}, Mood: {mood}\n")
+ time.sleep(3) # debounce
+ time.sleep(0.1)
diff --git a/Lab 3/speech-scripts/ask_number.sh b/Lab 3/speech-scripts/ask_number.sh
new file mode 100755
index 0000000000..6795ce0873
--- /dev/null
+++ b/Lab 3/speech-scripts/ask_number.sh
@@ -0,0 +1,15 @@
+#!/bin/bash
+# ask_number.sh - Verbal prompt and record spoken answer
+
+OUTPUT_FILE="response.wav"
+
+# Step 1: Speak the question
+espeak "Please say your zip code."
+
+# Step 2: Record audio for 5 seconds
+echo "Recording... please speak now."
+arecord -f cd -t wav -d 5 -r 16000 -c 1 $OUTPUT_FILE
+echo "Recording saved to $OUTPUT_FILE"
+
+# Step 3: Confirm
+espeak "Thank you. Your response has been recorded."
diff --git a/Lab 3/speech-scripts/faster_whisper_try.py b/Lab 3/speech-scripts/faster_whisper_try.py
old mode 100644
new mode 100755
diff --git a/Lab 3/speech-scripts/greet.sh b/Lab 3/speech-scripts/greet.sh
new file mode 100755
index 0000000000..8e7d62b26d
--- /dev/null
+++ b/Lab 3/speech-scripts/greet.sh
@@ -0,0 +1,5 @@
+#!/bin/bash
+# greet.sh - Simple TTS greeting script using espeak
+
+NAME="Jessica Hsiao"
+espeak "Hello $NAME, welcome back to your Raspberry Pi!"
diff --git a/Lab 3/speech-scripts/response.wav b/Lab 3/speech-scripts/response.wav
new file mode 100644
index 0000000000..29add1fda7
Binary files /dev/null and b/Lab 3/speech-scripts/response.wav differ
diff --git a/Lab 3/speech-scripts/test.txt b/Lab 3/speech-scripts/test.txt
new file mode 100644
index 0000000000..a385b8f2bc
--- /dev/null
+++ b/Lab 3/speech-scripts/test.txt
@@ -0,0 +1 @@
+zero one two three four
diff --git a/Lab 3/speech-scripts/voice_chat.py b/Lab 3/speech-scripts/voice_chat.py
new file mode 100644
index 0000000000..8a8945a5ff
--- /dev/null
+++ b/Lab 3/speech-scripts/voice_chat.py
@@ -0,0 +1,50 @@
+import subprocess
+import vosk
+import sys
+import sounddevice as sd
+import ollama
+import pyttsx3
+import queue
+import json
+
+# --------------------
+# 1. Speech recognition setup
+# --------------------
+model = vosk.Model("model") # path to your vosk model (downloaded separately)
+samplerate = 16000
+q = queue.Queue()
+
+def callback(indata, frames, time, status):
+ q.put(bytes(indata))
+
+# --------------------
+# 2. Start mic stream
+# --------------------
+rec = vosk.KaldiRecognizer(model, samplerate)
+
+print("🎤 Say something... (Ctrl+C to stop)")
+with sd.RawInputStream(samplerate=samplerate, blocksize=8000, dtype='int16',
+ channels=1, callback=callback):
+ while True:
+ data = q.get()
+ if rec.AcceptWaveform(data):
+ result = rec.Result()
+ text = json.loads(result).get("text", "")
+ if text:
+ print(f"You said: {text}")
+
+ # --------------------
+ # 3. Send to Ollama
+ # --------------------
+ response = ollama.chat(model="llama3", messages=[
+ {"role": "user", "content": text}
+ ])
+ reply = response['message']['content']
+ print(f"Ollama: {reply}")
+
+ # --------------------
+ # 4. Text-to-speech
+ # --------------------
+ engine = pyttsx3.init()
+ engine.say(reply)
+ engine.runAndWait()
diff --git a/Lab 3/speech-scripts/whisper_try.py b/Lab 3/speech-scripts/whisper_try.py
old mode 100644
new mode 100755
diff --git a/Lab 4/README.md b/Lab 4/README.md
index afbb46ed98..bb28a062be 100644
--- a/Lab 4/README.md
+++ b/Lab 4/README.md
@@ -13,9 +13,9 @@
This helps ensure your README.md is clear, professional, and uniquely yours!
----
-
-## Lab 4 Deliverables
+
+
+Application #2: Natural Lighting Room
+
+
+
+Application #3: Exhibition Visitor Counter
+
+
+
+Application #4: Eye Wellness Assistant
+
+
+
+Application #5: Movie Controller
+
+
+
+
**\*\*\*What are some things these sketches raise as questions? What do you need to physically prototype to understand how to anwer those questions?\*\*\***
+**Application #1: Plant Health Monitor**
+- Questions raised:
+ 1. Connection and communication: The sketch doesn’t specify how the sensor and the display communicate — whether they are connected by wire, use wireless transmission, or operate through an intermediary microcontroller such as a Raspberry Pi.
+ 2. Sensor placement: It’s unclear how the sensor should be positioned or angled relative to the plant’s leaves to ensure consistent and accurate color readings.
+ 3. Environmental lighting effects: Different ambient lighting conditions (sunny vs. cloudy, indoor vs. outdoor) could alter the perceived color of the plant, even when its actual health remains the same.
+
+- To address these questions, we can first build a prototype that tests various sensor positions and angles to determine which setup yields the most stable readings. Next, running lighting variation tests — capturing RGB data under multiple environmental conditions — would help calibrate thresholds or develop compensation algorithms. Finally, prototyping the communication pathway between the sensor and display (wired vs. wireless) will clarify how to make the system responsive and portable for real-world use.
+
+**Application #2: Natural Lighting Room**
+
+- Questions raised:
+ 1. Light differentiation: The sketch doesn’t explain how the sensor distinguishes between natural and artificial light — will it rely solely on overall brightness, or also consider color temperature and spectral balance?
+ 2. Sensitivity and stability: It’s unclear how sensitive the system should be before adjusting lighting intensity. Minor fluctuations in sunlight (e.g., when clouds pass) could cause unstable dimming or flickering.
+ 3. Sensor orientation: The sensor’s placement and angle might significantly affect readings — pointing it directly at a window versus facing the interior ceiling could produce very different results.
+ 4. Communication and control: The sketch doesn’t specify how the sensor interacts with the lighting system — through direct wiring, a microcontroller, or a wireless connection.
+
+- To address these questions, we can start by testing the sensor at different positions and orientations within a room to measure how light readings vary across the day. Next, a calibration experiment can help determine appropriate brightness thresholds and smoothing algorithms to avoid frequent or erratic lighting changes. Finally, building a prototype connection between the sensor and a light controller (e.g., Raspberry Pi + LED dimmer) will allow us to evaluate real-time response and communication reliability.
+
+**Application #3: Exhibition Visitor Counter**
+
+- Questions raised:
+ 1. Detection accuracy: The sketch doesn’t specify how the sensor distinguishes between individual visitors — for example, how it avoids counting the same person twice or missing people walking closely together.
+ 2. Range and directionality: It’s unclear how far the APDS-9960 can reliably detect gestures or movement and whether it can sense entry versus exit directions.
+ 3. Environmental interference: Exhibition lighting, reflections, or nearby displays might interfere with gesture detection or falsely trigger the counter.
+ 4. Physical placement: The optimal height and orientation of the sensor relative to the doorway are uncertain — it may need to be tested at various positions to ensure consistent detection.
+
+- By testing different sensor heights and distances near an actual doorway, we can measure detection accuracy for varying visitor speeds and group sizes. Additionally, running environmental tests under different lighting conditions will help determine if shielding or calibration is needed to reduce false triggers. Finally, prototyping a count display system will help evaluate real-time feedback and timing consistency during high-traffic scenarios.
+
+**Application #4: Eye Wellness Assistant**
+
+- Questions raised:
+ 1. Unclear sense of distance: The sketch doesn’t convey how far the user is from the screen, making it difficult to judge whether the sensor can accurately measure a comfortable viewing range.
+ 2. Screen tilt impact: The monitor’s tilt angle could affect proximity readings, since the sensor’s detection depends heavily on its facing direction.
+ 3. Sensor form and placement: The drawing doesn’t indicate the sensor’s actual size or visibility, leaving uncertainty about whether it would distract the user or blend seamlessly into the screen design.
+
+- To answer the first question, we need to design a way to display the distance information to the user, for instance, putting a tiny monitor next to the screen to show the number. For the second and the third questions, they can be approached by conducting user studies with the physical prototype to experiment in a real-world settings. We can run user trials with different sensor placements and screen angles to evaluate accuracy, comfort, and intrusiveness in daily use.
+
+**Application #5: Movie Controller**
+
+- Questions raised:
+ 1. Gesture recognition accuracy: The sketch doesn’t clarify how reliably the sensor can differentiate between gestures such as left, right, or up swipes, especially when users vary in hand speed or distance.
+ 2. Detection range and angle: It’s unclear how far the user can sit from the screen while still being detected, or how wide the sensor’s field of view needs to be to capture gestures effectively.
+ 3. Interference and usability: Ambient lighting, reflections, or nearby movement (like someone walking past) could accidentally trigger commands. The sketch doesn’t show how the system might prevent or handle such false positives.
+
+- To address these questions, we can build a functional prototype that connects the gesture sensor to a simple media controller. This would allow testing of gesture detection accuracy across multiple users, distances, and lighting conditions. We also need to experiment with sensor placement and viewing angles — for example, mounting it on top of or below the TV — to find the most reliable setup. Lastly, collecting real interaction data can help define gesture thresholds and filtering strategies to minimize unintended activations.
+
+
**\*\*\*Pick one of these designs to prototype.\*\*\***
+We picked the fourth one to prototype: the Eye Wellness Assistant.
+
### Part D
### Physical considerations for displaying information and housing parts
-
-
-Here is a Pi with a paper faceplate on it to turn it into a display interface:
+
@@ -298,201 +399,332 @@ Fold the first flap of the strip so that it sits flush against the back of the f
Here is an example:
+
+
+**Design #2**: The physical prototype includes an adjustable curved phone stand extending from the computer, with a small light placed beside it to indicate the proximity detection status. It uses a color light (green = good, yellow = a bit close, red = too close) to present the status.
+
+
+
+**Design #3**: The physical prototype includes an adjustable curved phone stand extending from the computer, with a small board placed next to it for sensor integration. The actual distance between the screen and the user’s face will be displayed on the board.
+
+
+
+**Design #4**: Place the sensor on top of the computer with the sensor facing downward. When the user gets too close to the computer (meaning their head is below the sensor), the sensor detects the proximity and a beeping sound alerts the user.
+
+
+
+**Design #5**: Place the sensor on top of the computer with the sensor facing downward. When the user gets too close to the computer (meaning their head is below the sensor), a voice message is played to remind the user of the distance, such as “You are too close to your screen, please move back a little.”
+
+
+
+
**\*\*\*What are some things these sketches raise as questions? What do you need to physically prototype to understand how to anwer those questions?\*\*\***
+#### Design #1
+
+- Questions raised:
+ 1. How sensitive should the proximity threshold be to accurately detect when the user is “too close” without triggering false alerts?
+ 2. Will the flashing red light be noticeable yet comfortable for the user, or could it become distracting during long use?
+ 3. How much does the adjustable sensor angle affect detection accuracy across different screen tilt positions and user heights?
+- What to prototype:
+ 1. Test various distance thresholds to determine the optimal trigger range for comfort and accuracy.
+ 2. Experiment with different light intensities, colors, and flashing rates to find a balance between visibility and user comfort.
+ 3. Build and evaluate the adjustable holder to measure how sensor angle and placement influence proximity detection reliability.
+
+
+#### Design #2
+
+- Questions raised:
+ 1. What are the optimal distance thresholds for each color indicator (green, yellow, red) to provide meaningful and intuitive feedback to users?
+ 2. Will users easily notice and interpret the color changes, or would additional feedback (e.g., brightness or flashing) improve clarity?
+ 3. How does the curved stand’s angle and height affect the accuracy of proximity detection for users of different sitting positions?
+- What to prototype:
+ 1. Calibrate and test various distance ranges to define clear and consistent thresholds for each light color.
+ 2. Conduct short user tests to evaluate whether color feedback alone is sufficient for awareness and comfort.
+ 3. Build the curved stand prototype and measure detection performance at multiple sensor angles and user heights to ensure reliability.
+
+
+#### Design #3
+
+- Questions raised:
+ 1. How accurate and responsive will the displayed distance values be when the user moves slightly or changes posture?
+ 2. What is the most effective way to display the distance information so that it’s noticeable without being distracting?
+ 3. How does the sensor’s placement or tilt angle influence measurement consistency across different screen setups and user positions?
+- What to prototype:
+ 1. Test the sensor’s real-time distance measurement accuracy and response speed under typical usage movements.
+ 2. Experiment with different display formats (numerical, graphical, or color-coded) to assess readability and user comfort.
+ 3. Build the adjustable stand prototype and measure how various angles and distances affect data stability and reliability.
+
+#### Design #4
+
+- Questions raised:
+ 1. How accurately can the downward-facing sensor detect when the user’s head crosses the threshold distance without being affected by lighting or hair color?
+ 2. What should the distance threshold be to ensure the alert triggers at a comfortable and safe viewing distance?
+ 3. Will the beeping alert remain effective over time, or might it become annoying or easy to ignore during long computer use?
+- What to prototype:
+ 1. Test detection reliability across users with different heights, hairstyles, and seating positions.
+ 2. Experiment with several threshold distances to determine an optimal range that balances comfort and alert sensitivity.
+ 3. Prototype different audio feedback patterns (e.g., single beep vs. continuous tone) to evaluate which is most noticeable yet least disruptive.
+
+
+#### Design #5
+
+- Questions raised:
+ 1. How should the timing and frequency of the voice alert be set so it effectively reminds users without feeling intrusive?
+ 2. Will the downward-facing sensor maintain consistent accuracy across users with different heights and seating distances?
+ 3. Do users perceive the voice message as helpful feedback or as a distraction during focused work?
+- What to prototype:
+ 1. Experiment with various voice alert intervals, tones, and volumes to find a balance between clarity and comfort.
+ 2. Test the sensor’s performance across diverse user positions and lighting environments to ensure reliable detection.
+ 3. Conduct short user evaluations comparing voice feedback with other alert methods (e.g., light or sound) to determine the most preferred design.
+
+
+
**\*\*\*Pick one of these display designs to integrate into your prototype.\*\*\***
+We picked the fourth one to integrate into our prototype.
+
**\*\*\*Explain the rationale for the design.\*\*\*** (e.g. Does it need to be a certain size or form or need to be able to be seen from a certain distance?)
-Build a cardboard prototype of your design.
+#### Why the device is elevated
+- The sensor needs to face downward to accurately detect the user’s head position. However, because the sensor strip is relatively short, we had to raise the Raspberry Pi to achieve the proper downward angle. Elevating the device also ensures that the user’s head does not accidentally block or strike the sensor during normal use.
+- During testing, we found that the sensor can reliably detect objects within a range of about 15 cm. Beyond this distance, detection becomes unreliable. Therefore, raising the device ensures that the user’s head remains within this effective detection range for consistent performance.
+
+#### Why the device has several holes
+- The holes serve multiple functional purposes. One is for the Bluetooth module, which broadcasts audio alerts to notify the user when they are sitting too close to the screen. Another set of holes supports display visibility and ventilation. We integrated a small screen on the Raspberry Pi to show system details and user feedback. To make this display clear and user-friendly, we positioned it near the Bluetooth audio module and ensured there are openings for both visibility and heat dissipation.
+#### Why handles were added to the device
+- We added two handles on the right side of the device to make maintenance easier. These handles allow users or developers to conveniently open the casing for adjustments, sensor calibration, or component replacement without damaging the housing.
+
+
+Build a cardboard prototype of your design.
**\*\*\*Document your rough prototype.\*\*\***
+
# LAB PART 2
-### Part 2
-
Following exploration and reflection from Part 1, complete the "looks like," "works like" and "acts like" prototypes for your design, reiterated below.
+---
-### Part E
+### Part C: Chaining Devices and Exploring Interaction Effects
-#### Chaining Devices and Exploring Interaction Effects
+For Part 2, we designed and built a fun interactive prototype using multiple inputs and outputs. We used two inputs and two outputs to give users hints about the distance between their head and the computer.
-For Part 2, you will design and build a fun interactive prototype using multiple inputs and outputs. This means chaining Qwiic and STEMMA QT devices (e.g., buttons, encoders, sensors, servos, displays) and/or combining with traditional breadboard prototyping (e.g., LEDs, buzzers, etc.).
+#### Detailed Functions
-**Your prototype should:**
-- Combine at least two different types of input and output devices, inspired by your physical considerations from Part 1.
-- Be playful, creative, and demonstrate multi-input/multi-output interaction.
+- Proximity Detection
+ - The sensor continuously measures the distance between the user’s head and the computer area.
+ - When the user gets too close, the system switches from the “normal” state to the “warning” state.
-**Document your system with:**
-- Code for your multi-device demo
-- Photos and/or video of the working prototype in action
-- A simple interaction diagram or sketch showing how inputs and outputs are connected and interact
-- Written reflection: What did you learn about multi-input/multi-output interaction? What was fun, surprising, or challenging?
-**Questions to consider:**
-- What new types of interaction become possible when you combine two or more sensors or actuators?
-- How does the physical arrangement of devices (e.g., where the encoder or sensor is placed) change the user experience?
-- What happens if you use one device to control or modulate another (e.g., encoder sets a threshold, sensor triggers an action)?
-- How does the system feel if you swap which device is "primary" and which is "secondary"?
+- Visual Feedback (Screen)
+ - In the normal state, the screen displays a motivational or positive message to encourage healthy posture.
+ - In the warning state, the screen turns red and shows a clear alert message (“You are too close to the screen”) to draw attention.
-Try chaining different combinations and document what you discover!
-See encoder_accel_servo_dashboard.py in the Lab 4 folder for an example of chaining together three devices.
+- Audio Feedback (Speaker)
+ - When the warning is triggered, the speaker plays a sound alert.
+ - This adds an additional layer of feedback that is harder to ignore than visuals alone.
-**`Lab 4/encoder_accel_servo_dashboard.py`**
+- User Control (Buttons)
+ - One button allows the user to start or activate the monitoring system.
+ - The second button lets the user silence the audio alert or stop the notification, giving the user agency.
-#### Using Multiple Qwiic Buttons: Changing I2C Address (Physically & Digitally)
+- Interaction Flow
+ - The sensor detects → screen and speaker react → user acknowledges with a button press → system resets.
+ - This creates a loop of detection, feedback, and user response (see the code sketch below).
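+
+A stripped-down sketch of that loop is shown below. The sensor, screen, speaker, and button calls are placeholders (the real implementation is in `demo.py`), and the threshold value is an assumption.
+
+```python
+import time
+
+TOO_CLOSE = 50        # assumed proximity threshold (0-255, larger = closer)
+state = "NORMAL"
+
+def read_proximity():
+    """Placeholder for the proximity reading used in demo.py."""
+    return 0
+
+def show_message(text, background):
+    """Placeholder for the TFT screen update."""
+    print(f"[{background}] {text}")
+
+def play_alert():
+    """Placeholder for the speaker / espeak call."""
+    print("BEEP")
+
+def silence_pressed():
+    """Placeholder for the 'stop notifications' button check."""
+    return False
+
+while True:
+    proximity = read_proximity()
+
+    if state == "NORMAL":
+        show_message("Great posture, keep it up!", "black")
+        if proximity > TOO_CLOSE:               # user moved too close
+            state = "WARNING"
+
+    elif state == "WARNING":
+        show_message("You are too close to the screen", "red")
+        play_alert()
+        if silence_pressed() or proximity <= TOO_CLOSE:
+            state = "NORMAL"                    # acknowledged or backed away
+
+    time.sleep(0.2)
+```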
-If you want to use more than one Qwiic Button in your project, you must give each button a unique I2C address. There are two ways to do this:
-##### 1. Physically: Soldering Address Jumpers
+##### Input:
-On the back of the Qwiic Button, you'll find four solder jumpers labeled A0, A1, A2, and A3. By bridging these with solder, you change the I2C address. Only one button on the chain can use the default address (0x6F).
+1. Proximity Sensor: The sensor is placed next to the computer (typically around 40 cm away). When it detects an object below it—usually the user’s head—it triggers the application to remind the user to maintain a proper distance from the screen.
-**Address Table:**
+2. Two LED Buttons: Users can press one button to start the application, while the second button stops notifications. For example, if the application plays a sound to warn the user that they are too close to the screen, pressing the second button silences the alert.
-| A3 | A2 | A1 | A0 | Address (hex) |
-|----|----|----|----|---------------|
-| 0 | 0 | 0 | 0 | 0x6F |
-| 0 | 0 | 0 | 1 | 0x6E |
-| 0 | 0 | 1 | 0 | 0x6D |
-| 0 | 0 | 1 | 1 | 0x6C |
-| 0 | 1 | 0 | 0 | 0x6B |
-| 0 | 1 | 0 | 1 | 0x6A |
-| 0 | 1 | 1 | 0 | 0x69 |
-| 0 | 1 | 1 | 1 | 0x68 |
-| 1 | 0 | 0 | 0 | 0x67 |
-| ...| ...| ...| ... | ... |
+##### Output:
+1. Screen: When the application is in a normal state (no warning is needed), the screen displays a motivational message to encourage the user to maintain good posture and continue working. When a warning is triggered, the screen switches to a red background and displays the message “You are too close to the screen” to alert the user.
+2. Speaker: When a warning is triggered, which indicates that the user is too close to the computer, the speaker plays an alert sound.
-For example, if you solder A0 closed (leave A1, A2, A3 open), the address becomes 0x6E.
-**Soldering Tips:**
-- Use a small amount of solder to bridge the pads for the jumper you want to close.
-- Only one jumper needs to be closed for each address change (see table above).
-- Power cycle the button after changing the jumper.
+**Hardware Block Diagram**
-##### 2. Digitally: Using Software to Change Address
+```text
+ ┌─────────────────────┐
+ │ Proximity Sensor │
+ └─────────┬───────────┘
+ │ (distance data)
+ ▼
+ ┌─────────────────────┐
+ │ Raspberry Pi │
+ └───────┬───────┬─────┘
+ │ │ │
+ (visual) (audio) (user input)
+ ▼ ▼ ▼
+ ┌──────┐ ┌───────┐ ┌───────┐
+ │Screen│ │Speaker│ │Buttons│
+ └──────┘ └───────┘ └───────┘
-You can also change the address in software (temporarily or permanently) using the example script `qwiic_button_ex6_changeI2CAddress.py` in the Lab 4 folder. This is useful if you want to reassign addresses without soldering.
-Run the script and follow the prompts:
-```bash
-python qwiic_button_ex6_changeI2CAddress.py
```
-Enter the new address (e.g., 5B for 0x5B) when prompted. Power cycle the button after changing the address.
-**Note:** The software method is less foolproof and you need to make sure to keep track of which button has which address!
+**Interaction Flow Diagram**
+
+```text
+ ┌────────────────────────┐
+ │ Sensor checks distance │
+ └─────────────┬──────────┘
+ │
+ ┌────────────────┴─────────────────┐
+ │ │
+ ▼ ▼
+ (User at safe distance) (User too close)
+ │ │
+ ▼ ▼
+Screen shows motivation Screen turns red + alert text
+ │ │
+ ▼ ▼
+ No sound Speaker plays warning
+ │
+ ▼
+ User presses button to silence alert
+ │
+ ▼
+ System returns to normal state
+```
-##### Using Multiple Buttons in Code
+**System Flowchart**
+
+```text
+ ┌──────────────────────┐
+ │ System Started │
+ │ (User presses start) │
+ └───────────┬──────────┘
+ │
+ ▼
+ ┌──────────────────────┐
+ │ Sensor reads distance│
+ └───────────┬──────────┘
+ │
+ ┌─────────────┴─────────────┐
+ │ │
+ ▼ ▼
+ ┌───────────────────┐ ┌─────────────────────┐
+ │ User far enough │ NO │ User too close │ YES
+ │ (safe distance) │────── │ (below threshold) │
+ └─────────┬─────────┘ └─────────┬───────────┘
+ │ │
+ ▼ ▼
+ ┌────────────────────┐ ┌────────────────────────┐
+ │ Show motivational │ │ Turn screen red │
+ │ text on screen │ │ Display warning text │
+ └─────────┬──────────┘ └─────────┬──────────────┘
+ │ │
+ ▼ ▼
+ ┌────────────────────┐ ┌────────────────────────┐
+ │ No sound played │ │ Speaker plays sound │
+ └─────────┬──────────┘ └─────────┬──────────────┘
+ │ │
+ │ ┌────────────┴────────────┐
+ │ │ User presses stop btn │
+ │ └────────────┬────────────┘
+ │ ▼
+ │ ┌──────────────────────┐
+ └───────────────▶│ Sound is silenced │
+ │ System goes back to │
+ │ normal state │
+ └───────────┬──────────┘
+ │
+ ▼
+ (Loop continues)
-After setting unique addresses, you can use multiple buttons in your script. See these example scripts in the Lab 4 folder:
+```
-- **`qwiic_1_button.py`**: Basic example for reading a single Qwiic Button (default address 0x6F). Run with:
- ```bash
- python qwiic_1_button.py
- ```
+#### System Architecture
-- **`qwiic_button_led_demo.py`**: Demonstrates using two Qwiic Buttons at different addresses (e.g., 0x6F and 0x6E) and controlling their LEDs. Button 1 toggles its own LED; Button 2 toggles both LEDs. Run with:
- ```bash
- python qwiic_button_led_demo.py
- ```
+| Layer | Components | Role |
+| ----------------------- | ------------------------------------------------------------------------------------- | --------------------------------------------------------------- |
+| **Sensors (Input)** | APDS9960 proximity sensor, Qwiic Button #1 & #2, GPIO buttons | Detect user proximity and button interaction |
+| **Compute & Logic** | Raspberry Pi running Python main loop | Coordinates all logic and state machine (NORMAL / WARNING mode) |
+| **Communication Buses** | I2C (APDS9960 + QWIIC buttons), SPI (TFT display), GPIO (backlight + onboard buttons) | Hardware communication backbone |
+| **Output** | ST7789 TFT screen, backlight, text-to-speech (espeak) | Displays user feedback and audible warning |
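+
+As a rough sketch of how these layers come together in code, the devices on the three buses can be initialized side by side. Library names match the ones used elsewhere in this repo; the display pins are copied from the Lab 3 `fairymate.py` script and the second button address assumes its A0 jumper is closed, so treat the exact values as placeholders rather than the final `demo.py` wiring.
+
+```python
+import board
+import busio
+import digitalio
+import qwiic_button
+from adafruit_apds9960.apds9960 import APDS9960
+import adafruit_rgb_display.st7789 as st7789
+
+# --- I2C bus: proximity sensor + two Qwiic buttons ---
+i2c = busio.I2C(board.SCL, board.SDA)
+apds = APDS9960(i2c)
+apds.enable_proximity = True
+
+start_button = qwiic_button.QwiicButton()      # default address 0x6F
+stop_button = qwiic_button.QwiicButton(0x6E)   # assumes the A0 jumper is closed
+start_button.begin()
+stop_button.begin()
+
+# --- SPI bus: ST7789 TFT display (pins as wired in Lab 3) ---
+disp = st7789.ST7789(
+    board.SPI(),
+    cs=digitalio.DigitalInOut(board.D5),
+    dc=digitalio.DigitalInOut(board.D25),
+    rst=None,
+    baudrate=64000000,
+    width=135,
+    height=240,
+    x_offset=53,
+    y_offset=40,
+)
+
+# apds.proximity (0-255, larger = closer; a method call on older library
+# versions) feeds the NORMAL / WARNING state machine in the flowchart above.
+print("proximity:", apds.proximity)
+```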
-Here is a minimal code example for two buttons:
-```python
-import qwiic_button
-# Default button (0x6F)
-button1 = qwiic_button.QwiicButton()
-# Button with A0 soldered (0x6E)
-button2 = qwiic_button.QwiicButton(0x6E)
-button1.begin()
-button2.begin()
+#### Demo (Photos and/or video of the working prototype in action)
-while True:
- if button1.is_button_pressed():
- print("Button 1 pressed!")
- if button2.is_button_pressed():
- print("Button 2 pressed!")
-```
+Code: see [demo.py](https://github.com/JessicaDJ0807/Interactive-Lab-Hub/blob/Fall2025/Lab%204/demo.py)
+
+* "Looks like": shows how the device should look, feel, sit, weigh, etc.
+
+
+
+* "Works like": shows what the device can do
+
+
+
+* "Acts like": shows how a person would interact with the device
-For more details, see the [Qwiic Button Hookup Guide](https://learn.sparkfun.com/tutorials/qwiic-button-hookup-guide/all#i2c-address).
+https://github.com/user-attachments/assets/16318d59-851d-438b-872a-084e3ad348d3
+
+
+#### User Feedback
+* At first the sound surprised the user, but it was helpful because they sometimes don’t notice visual warnings when they're focused on the screen.
+* They like that they can press a button to stop the sound. It feels like they still have control, instead of being stuck with an alarm.
+* When the sensor was placed in front of them, it triggered too often and felt a bit annoying. But once we moved it to the side, they said it felt much smoother and more natural.
---
-### PCF8574 GPIO Expander: Add More Pins Over I²C
+### Part D. Written Reflection
-Sometimes your Pi’s header GPIO pins are already full (e.g., with a display or HAT). That’s where an I²C GPIO expander comes in handy.
+- Learning about multi-input/multi-output interaction
-We use the Adafruit PCF8574 I²C GPIO Expander, which gives you 8 extra digital pins over I²C. It’s a great way to prototype with LEDs, buttons, or other components on the breadboard without worrying about pin conflicts—similar to how Arduino users often expand their pinouts when prototyping physical interactions.
+ - We learned that combining multiple sensors and actuators enables much richer and more meaningful interaction than using a single component. A proximity sensor alone can only detect that the user is too close, but it cannot communicate anything back. Once we added the screen, speaker, and buttons, the system could respond in multiple ways, including displaying a visual warning, playing a sound, or showing encouragement when the user maintained a healthy distance. This made the feedback intuitive and noticeable, transforming the system from a hidden measurement tool into an interactive assistant.
-**Why is this useful?**
-- You only need two wires (I²C: SDA + SCL) to unlock 8 extra GPIOs.
-- It integrates smoothly with CircuitPython and Blinka.
-- It allows a clean prototyping workflow when the Pi’s 40-pin header is already occupied by displays, HATs, or sensors.
-- Makes breadboard setups feel more like an Arduino-style prototyping environment where it’s easy to wire up interaction elements.
+- New types of interaction from combining components
-**Demo Script:** `Lab 4/gpio_expander.py`
+ - A proximity sensor alone only measures distance, but when paired with a screen and speaker, it becomes a behavioral feedback tool.
-
-
-
-
-
+ Choose a decade, let the light pick the mood, and control music with gesture. +
+ + {/* Small MQTT status */} +