feat: update assignments #111
base: main
Pull request overview
Introduces “Assignments v2” by adding updated assignment specifications and scoring rubrics for the Node.js course, and updates the root README to point to the new v2 structure while preserving links to the previous version.
Changes:
- Added v2 assignment + scoring markdown files for modules 01–10.
- Updated `README.md` to list v2 assignments and link to the preserved v1 index.
- Added `Assignments_v1.md` as an index page for the previous (v1) assignment set.
Reviewed changes
Copilot reviewed 24 out of 24 changed files in this pull request and generated 8 comments.
| File | Description |
|---|---|
| README.md | Replaces the old index with an “Assignments v2” index and links to v1. |
| Assignments_v1.md | New v1 index preserving links to the prior assignment set. |
| assignments-v2/01-nodejs-fundamentals/assignment.md | New v2 assignment spec for Node.js fundamentals. |
| assignments-v2/01-nodejs-fundamentals/score.md | New v2 scoring rubric for Node.js fundamentals. |
| assignments-v2/02-crud-api/assignment.md | New v2 assignment spec for CRUD API. |
| assignments-v2/02-crud-api/score.md | New v2 scoring rubric for CRUD API. |
| assignments-v2/03-fastify-rest-api/assignment.md | New v2 assignment spec for Fastify REST API. |
| assignments-v2/03-fastify-rest-api/score.md | New v2 scoring rubric for Fastify REST API. |
| assignments-v2/04-database-prisma/assignment.md | New v2 assignment spec for DB + Prisma. |
| assignments-v2/04-database-prisma/score.md | New v2 scoring rubric for DB + Prisma. |
| assignments-v2/05-auth-jwt/assignment.md | New v2 assignment spec for JWT auth. |
| assignments-v2/05-auth-jwt/score.md | New v2 scoring rubric for JWT auth. |
| assignments-v2/06a-testing/assignment.md | New v2 assignment spec for testing. |
| assignments-v2/06a-testing/score.md | New v2 scoring rubric for testing. |
| assignments-v2/06b-logging-errors/assignment.md | New v2 assignment spec for logging + error handling. |
| assignments-v2/06b-logging-errors/score.md | New v2 scoring rubric for logging + error handling. |
| assignments-v2/07-docker/assignment.md | New v2 assignment spec for Docker/containerization. |
| assignments-v2/07-docker/score.md | New v2 scoring rubric for Docker/containerization. |
| assignments-v2/08-websockets/assignment.md | New v2 assignment spec for WebSockets quiz game. |
| assignments-v2/08-websockets/score.md | New v2 scoring rubric for WebSockets quiz game. |
| assignments-v2/09-ai-llm-integration/assignment.md | New v2 assignment spec for LLM integration API. |
| assignments-v2/09-ai-llm-integration/score.md | New v2 scoring rubric for LLM integration API. |
| assignments-v2/10-ai-rag-vectordb/assignment.md | New v2 assignment spec for RAG + vector search knowledge base. |
| assignments-v2/10-ai-rag-vectordb/score.md | New v2 scoring rubric for RAG + vector search knowledge base. |
> Processing steps:
> 1. Split `content` into chunks of ~500-1000 characters with ~100-character overlap between consecutive chunks
**Copilot (AI)**, Feb 8, 2026:
Chunking requirements are inconsistent: here it says ~500–1000 chars with ~100 overlap, but later the "Document Chunking" section specifies CHUNK_SIZE=800 and CHUNK_OVERLAP=200. Please make these values consistent (or explicitly define the defaults vs acceptable ranges).
Suggested change:

```diff
- 1. Split `content` into chunks of ~500-1000 characters with ~100-character overlap between consecutive chunks
+ 1. Split `content` into chunks using a **default** size of 800 characters with a 200-character overlap between consecutive chunks (in general, chunk sizes in the range 500–1000 characters with 100–200 characters of overlap are acceptable)
```
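As a rough illustration of the suggested defaults, chunking with overlap is a sliding window that advances by `size - overlap` each step. Function and constant names below are illustrative for this sketch, not part of the assignment:

```typescript
// Sketch of chunking with the suggested defaults (800 chars, 200 overlap).
const CHUNK_SIZE = 800;
const CHUNK_OVERLAP = 200;

function chunkText(content: string, size = CHUNK_SIZE, overlap = CHUNK_OVERLAP): string[] {
  if (overlap >= size) throw new Error("overlap must be smaller than chunk size");
  const chunks: string[] = [];
  const step = size - overlap; // each chunk starts `size - overlap` after the previous one
  for (let start = 0; start < content.length; start += step) {
    chunks.push(content.slice(start, start + size));
    if (start + size >= content.length) break; // the last chunk already reached the end
  }
  return chunks;
}
```

With these defaults, a 2000-character document yields three chunks of 800 characters, each sharing its last 200 characters with the start of the next.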
> ```typescript
> query: string; // required
> limit?: number; // optional, default 5, max 20
> ```
**Copilot (AI)**, Feb 8, 2026:
The Advanced scope in score.md mentions optional metadata filtering for POST /api/search, but SearchRequest here does not include any metadata field. Please add the optional filter to the request schema (or remove/adjust the scoring criterion).
Suggested change:

```diff
  query: string; // required
  limit?: number; // optional, default 5, max 20
+ metadata?: Record<string, string>; // optional metadata filters
```
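If the optional filter is adopted, its semantics would need defining as well. One plausible reading (an assumption, not something the assignment specifies) is exact-match on every provided key:

```typescript
// Assumed semantics for the proposed optional filter: a chunk's document
// matches only if every key/value pair in the filter equals its metadata.
interface SearchRequest {
  query: string; // required
  limit?: number; // optional, default 5, max 20
  metadata?: Record<string, string>; // optional metadata filters
}

function matchesMetadata(
  docMetadata: Record<string, string>,
  filter?: Record<string, string>,
): boolean {
  if (!filter) return true; // no filter: every document matches
  return Object.entries(filter).every(([key, value]) => docMetadata[key] === value);
}
```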
> ```typescript
> documentId: string;
> documentTitle: string;
> chunk: string; // the text of the matched chunk
> similarity: number; // cosine similarity score (0 to 1)
> ```
**Copilot (AI)**, Feb 8, 2026:
Cosine similarity is generally in the range [-1, 1], not strictly 0–1. If you intend to clamp/shift it into [0, 1], please document the transformation; otherwise update the comment/range to avoid misleading implementations.
Suggested change:

```diff
- similarity: number; // cosine similarity score (0 to 1)
+ similarity: number; // cosine similarity score (-1 to 1)
```
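For reference, raw cosine similarity does land in [-1, 1]; the shift into [0, 1] shown at the end is one possible transformation the spec could document, not something the assignment currently defines:

```typescript
// Raw cosine similarity of two equal-length vectors, in [-1, 1].
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("vectors must have the same length");
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// One way to map into [0, 1], if the API wants that range (an assumption):
const shifted = (sim: number): number => (sim + 1) / 2;
```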
> 4. **Semantic Search** — `POST /api/search`
**Copilot (AI)**, Feb 8, 2026:
POST /api/search is defined as a core endpoint here, but in score.md semantic search is listed under Advanced scope. Consider clarifying in the assignment whether /api/search is mandatory for everyone or only required for Advanced scope, to keep expectations and scoring aligned.
Suggested change:

```diff
- 4. **Semantic Search** — `POST /api/search`
+ 4. **Semantic Search** — `POST /api/search` *(Advanced scope)*
+
+    This endpoint is **required for the Advanced scope**. For the **Basic scope**, implementing it is **optional** and its absence should not be treated as a failure of the core requirements.
```
> - Previous conversation messages (if `conversationId` is provided)
> - The user's question
> 4. Send the prompt to the LLM and return the answer
**Copilot (AI)**, Feb 8, 2026:
Chat processing steps require including previous conversation messages when conversationId is provided, which effectively makes conversation memory a baseline requirement. In score.md, conversation memory is listed as Advanced scope. Please clarify whether conversation memory is required or Advanced-only so students aren't penalized for following the assignment spec.
Suggested change:

```diff
- - Previous conversation messages (if `conversationId` is provided)
+ - Previous conversation messages (if `conversationId` is provided — this is part of the **Advanced Scope** and is optional for the **Basic Scope**)
  - The user's question
  4. Send the prompt to the LLM and return the answer
+ > **Note:** For the **Basic Scope**, you may treat each question independently and ignore `conversationId` and previous conversation messages. Implementing conversation memory and using `conversationId` to fetch and include prior messages in the prompt belongs to the **Advanced Scope**.
```
> ### Note: `data` value should be a **JSON string**, `id` should always be `0`
>
> ### Player Commands
>
> - **Register / Login**
>
>   `<-` (from client)
>
>   ```json
>   {
>     "type": "reg",
>     "data": {
>       "name": "<string>",
>       "password": "<string>"
>     },
>     "id": 0
>   }
>   ```
**Copilot (AI)**, Feb 8, 2026:
The note says the data field should be a JSON string, but all subsequent examples show data as a JSON object. This is ambiguous for implementers; either update the note (e.g., the whole message is a JSON string over the socket) or update all command examples to show data as a stringified JSON payload.
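Under the "JSON string" reading of the note, the inner payload is stringified first and then embedded in the outer envelope, so the payload is serialized twice on the wire. A sketch of that interpretation (an assumption, since the note's intent is exactly what is in question; names are illustrative):

```typescript
// Interpretation where `data` is a JSON *string*: the payload is
// stringified first, then embedded in the outer message envelope.
interface WsMessage {
  type: string;
  data: string; // stringified JSON payload
  id: 0; // always 0 per the note
}

function buildRegMessage(name: string, password: string): string {
  const message: WsMessage = {
    type: "reg",
    data: JSON.stringify({ name, password }), // inner payload as a string
    id: 0,
  };
  return JSON.stringify(message); // outer envelope sent over the socket
}
```

A receiver would then need two `JSON.parse` calls: one for the envelope and one for `data`.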
> - Only Fastify's built-in Pino logger should be used for logging — no additional logging libraries are allowed
> - Use 24.x.x version (24.10.0 or upper) of Node.js
>
> ## Implementation details
>
> 1. **Pino Logger Configuration**
>
>    Configure Fastify's built-in Pino logger with the following settings:
>    - Log level should be configurable via the `LOG_LEVEL` environment variable (default: `info`)
>    - Supported levels: `trace`, `debug`, `info`, `warn`, `error`, `fatal`
>    - In development mode, use `pino-pretty` for human-readable output
>    - In production mode, use structured JSON format
**Copilot (AI)**, Feb 8, 2026:
The technical requirements state "no additional logging libraries are allowed", but the implementation details require pino-pretty in development mode. Since pino-pretty is a separate dependency, this is contradictory—please either explicitly allow pino-pretty (and clarify it's the only extra), or adjust the requirement to stay within Fastify/Pino built-ins without extra packages.
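If the resolution is to whitelist `pino-pretty` as the single dev-only exception, the logger options could be built roughly like this. This is a sketch, assuming `NODE_ENV` is what distinguishes the two modes; the helper name is made up for illustration:

```typescript
// Sketch: build Fastify logger options with pino-pretty only in development.
// Assumes pino-pretty has been explicitly allowed as a dev-only dependency.
type LoggerOptions = {
  level: string;
  transport?: { target: string };
};

function buildLoggerOptions(env: Record<string, string | undefined>): LoggerOptions {
  const isDevelopment = env.NODE_ENV !== "production";
  return {
    level: env.LOG_LEVEL ?? "info", // configurable, defaults to "info"
    // pretty-print in development; production keeps structured JSON output
    ...(isDevelopment && { transport: { target: "pino-pretty" } }),
  };
}

// usage: Fastify({ logger: buildLoggerOptions(process.env) })
```

Keeping the conditional in one place makes the "no extra logging libraries except `pino-pretty` in dev" rule easy to audit.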
> Response: **Server-Sent Events (SSE) stream**
>
> The response should use `Content-Type: text/event-stream` and stream tokens as they arrive from the LLM:
>
> ```
> data: {"token": "Hello"}
>
> data: {"token": " there"}
>
> data: {"token": "!"}
>
> data: {"done": true, "sessionId": "abc-123", "usage": {"promptTokens": 50, "completionTokens": 12}}
> ```
>
> - Each SSE event contains either a `token` (partial response) or a `done` flag with metadata
> - The server stores conversation history per `sessionId` (in-memory, up to the last 20 messages)
> - If no `sessionId` is provided, a new session is created and the `sessionId` is returned in the final event
**Copilot (AI)**, Feb 8, 2026:
POST /api/chat is specified here as an SSE-only response, but score.md treats SSE streaming as an Advanced scope item (Basic scope just requires returning a response). Please align the assignment and scoring: either make non-streaming JSON the Basic requirement and SSE optional/Advanced, or update the scoring to require SSE in Basic.
Suggested change:

Response:

- **Basic requirement:** standard JSON response with status code **200**:

  ```typescript
  interface ChatResponse {
    reply: string; // full assistant message
    sessionId: string; // id of the conversation
    usage: {
      promptTokens: number;
      completionTokens: number;
    };
  }
  ```

- **Advanced (optional):** support Server-Sent Events (SSE) streaming as an alternative response format. When SSE is implemented, the response should use `Content-Type: text/event-stream` and stream tokens as they arrive from the LLM:

  ```
  data: {"token": "Hello"}

  data: {"token": " there"}

  data: {"token": "!"}

  data: {"done": true, "sessionId": "abc-123", "usage": {"promptTokens": 50, "completionTokens": 12}}
  ```

- When using SSE, each event should contain either a `token` (partial response) or a `done` flag with metadata
- The server stores conversation history per `sessionId` (in-memory, up to the last 20 messages)
- If no `sessionId` is provided, a new session is created and the `sessionId` is returned either in the JSON response (basic) or in the final SSE event (advanced)
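For the advanced SSE variant, each event is just a `data:` line terminated by a blank line. A minimal serializer sketch (type and function names are illustrative, not part of the spec):

```typescript
// Sketch: serialize the two event shapes from the suggested spec as SSE frames.
type SseEvent =
  | { token: string } // partial response
  | {
      done: true;
      sessionId: string;
      usage: { promptTokens: number; completionTokens: number };
    };

function formatSseEvent(event: SseEvent): string {
  // An SSE message is a `data:` line followed by a blank line.
  return `data: ${JSON.stringify(event)}\n\n`;
}
```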