@AlreadyBored (Owner)

No description provided.

Copilot AI left a comment

Pull request overview

Introduces “Assignments v2” by adding updated assignment specifications and scoring rubrics for the Node.js course, and updates the root README to point to the new v2 structure while preserving links to the previous version.

Changes:

  • Added v2 assignment + scoring markdown files for modules 01–10.
  • Updated README.md to list v2 assignments and link to the preserved v1 index.
  • Added Assignments_v1.md as an index page for the previous (v1) assignment set.

Reviewed changes

Copilot reviewed 24 out of 24 changed files in this pull request and generated 8 comments.

| File | Description |
| --- | --- |
| README.md | Replaces the old index with an “Assignments v2” index and links to v1. |
| Assignments_v1.md | New v1 index preserving links to the prior assignment set. |
| assignments-v2/01-nodejs-fundamentals/assignment.md | New v2 assignment spec for Node.js fundamentals. |
| assignments-v2/01-nodejs-fundamentals/score.md | New v2 scoring rubric for Node.js fundamentals. |
| assignments-v2/02-crud-api/assignment.md | New v2 assignment spec for CRUD API. |
| assignments-v2/02-crud-api/score.md | New v2 scoring rubric for CRUD API. |
| assignments-v2/03-fastify-rest-api/assignment.md | New v2 assignment spec for Fastify REST API. |
| assignments-v2/03-fastify-rest-api/score.md | New v2 scoring rubric for Fastify REST API. |
| assignments-v2/04-database-prisma/assignment.md | New v2 assignment spec for DB + Prisma. |
| assignments-v2/04-database-prisma/score.md | New v2 scoring rubric for DB + Prisma. |
| assignments-v2/05-auth-jwt/assignment.md | New v2 assignment spec for JWT auth. |
| assignments-v2/05-auth-jwt/score.md | New v2 scoring rubric for JWT auth. |
| assignments-v2/06a-testing/assignment.md | New v2 assignment spec for testing. |
| assignments-v2/06a-testing/score.md | New v2 scoring rubric for testing. |
| assignments-v2/06b-logging-errors/assignment.md | New v2 assignment spec for logging + error handling. |
| assignments-v2/06b-logging-errors/score.md | New v2 scoring rubric for logging + error handling. |
| assignments-v2/07-docker/assignment.md | New v2 assignment spec for Docker/containerization. |
| assignments-v2/07-docker/score.md | New v2 scoring rubric for Docker/containerization. |
| assignments-v2/08-websockets/assignment.md | New v2 assignment spec for WebSockets quiz game. |
| assignments-v2/08-websockets/score.md | New v2 scoring rubric for WebSockets quiz game. |
| assignments-v2/09-ai-llm-integration/assignment.md | New v2 assignment spec for LLM integration API. |
| assignments-v2/09-ai-llm-integration/score.md | New v2 scoring rubric for LLM integration API. |
| assignments-v2/10-ai-rag-vectordb/assignment.md | New v2 assignment spec for RAG + vector search knowledge base. |
| assignments-v2/10-ai-rag-vectordb/score.md | New v2 scoring rubric for RAG + vector search knowledge base. |


```
Processing steps:
1. Split `content` into chunks of ~500-1000 characters with ~100-character overlap between consecutive chunks
```

Copilot AI Feb 8, 2026

Chunking requirements are inconsistent: here it says ~500–1000 chars with ~100 overlap, but later the "Document Chunking" section specifies CHUNK_SIZE=800 and CHUNK_OVERLAP=200. Please make these values consistent (or explicitly define the defaults vs acceptable ranges).

Suggested change

```markdown
1. Split `content` into chunks using a **default** size of 800 characters with a 200-character overlap between consecutive chunks (in general, chunk sizes in the range 500–1000 characters with 100–200 characters of overlap are acceptable)
```

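The fixed defaults proposed above can be sketched as a simple character-based chunker. This is a minimal illustration assuming plain-text input; `chunkText` and its signature are illustrative, not part of the assignment spec:

```typescript
// Minimal sketch of fixed-size chunking with overlap. The 800/200 defaults
// mirror the values proposed in this review; the function name and signature
// are illustrative, not taken from the assignment.
function chunkText(content: string, size = 800, overlap = 200): string[] {
  if (overlap >= size) throw new Error("overlap must be smaller than chunk size");
  const chunks: string[] = [];
  const step = size - overlap; // each chunk starts `step` characters after the previous one
  for (let start = 0; start < content.length; start += step) {
    chunks.push(content.slice(start, start + size));
    if (start + size >= content.length) break; // the last chunk already reached the end
  }
  return chunks;
}
```

With a 2000-character input this yields chunks starting at offsets 0, 600, and 1200, so consecutive chunks share their boundary 200 characters.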
Comment on lines +127 to +128
```typescript
query: string; // required
limit?: number; // optional, default 5, max 20
```

Copilot AI Feb 8, 2026

The Advanced scope in score.md mentions optional metadata filtering for POST /api/search, but SearchRequest here does not include any metadata field. Please add the optional filter to the request schema (or remove/adjust the scoring criterion).

Suggested change

```typescript
query: string; // required
limit?: number; // optional, default 5, max 20
metadata?: Record<string, string>; // optional metadata filters
```

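One way the proposed optional `metadata` field could behave is as an exact-match AND filter over per-chunk metadata. A sketch under that assumption (`ChunkRecord` and `applyMetadataFilter` are hypothetical names, not from the spec):

```typescript
// Hypothetical stored-chunk shape; field names are assumptions for illustration.
interface ChunkRecord {
  documentId: string;
  chunk: string;
  metadata: Record<string, string>;
}

// Keep only records whose metadata matches every key/value pair in the filter;
// an absent filter means "return everything".
function applyMetadataFilter(
  records: ChunkRecord[],
  filter?: Record<string, string>,
): ChunkRecord[] {
  if (!filter) return records;
  return records.filter((r) =>
    Object.entries(filter).every(([k, v]) => r.metadata[k] === v),
  );
}
```

In a real implementation the filter would more likely be pushed down into the vector store's query, but the semantics students need to agree on are the same.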
```typescript
documentId: string;
documentTitle: string;
chunk: string; // the text of the matched chunk
similarity: number; // cosine similarity score (0 to 1)
```

Copilot AI Feb 8, 2026

Cosine similarity is generally in the range [-1, 1], not strictly 0–1. If you intend to clamp/shift it into [0, 1], please document the transformation; otherwise update the comment/range to avoid misleading implementations.

Suggested change

```typescript
similarity: number; // cosine similarity score (-1 to 1)
```

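To see why the [-1, 1] range matters: antiparallel vectors produce exactly -1, so an API that promises 0–1 needs an explicit transformation. A sketch (the shift `(cos + 1) / 2` is one common choice, not something the assignment mandates):

```typescript
// Plain cosine similarity over dense vectors: dot(a, b) / (|a| * |b|).
// The result lies in [-1, 1]; opposite-direction vectors give -1.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// One common way to map [-1, 1] onto [0, 1]; if the API promises a 0–1 score,
// a transformation like this should be documented explicitly.
const toUnitRange = (cos: number): number => (cos + 1) / 2;
```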
Comment on lines +120 to +121
4. **Semantic Search** — `POST /api/search`


Copilot AI Feb 8, 2026

POST /api/search is defined as a core endpoint here, but in score.md semantic search is listed under Advanced scope. Consider clarifying in the assignment whether /api/search is mandatory for everyone or only required for Advanced scope, to keep expectations and scoring aligned.

Suggested change

```markdown
4. **Semantic Search** — `POST /api/search` *(Advanced scope)*

   This endpoint is **required for the Advanced scope**. For the **Basic scope**, implementing it is **optional** and its absence should not be treated as a failure of the core requirements.
```

Comment on lines +170 to +173
- Previous conversation messages (if `conversationId` is provided)
- The user's question
4. Send the prompt to the LLM and return the answer


Copilot AI Feb 8, 2026

Chat processing steps require including previous conversation messages when conversationId is provided, which effectively makes conversation memory a baseline requirement. In score.md, conversation memory is listed as Advanced scope. Please clarify whether conversation memory is required or Advanced-only so students aren't penalized for following the assignment spec.

Suggested change

```markdown
   - Previous conversation messages (if `conversationId` is provided — this is part of the **Advanced Scope** and is optional for the **Basic Scope**)
   - The user's question
4. Send the prompt to the LLM and return the answer

> **Note:** For the **Basic Scope**, you may treat each question independently and ignore `conversationId` and previous conversation messages. Implementing conversation memory and using `conversationId` to fetch and include prior messages in the prompt belongs to the **Advanced Scope**.
```

Comment on lines +66 to +81
### Note: `data` value should be a **JSON string**, `id` should always be `0`

### Player Commands

- **Register / Login**

`<-` (from client)
```json
{
  "type": "reg",
  "data": {
    "name": "<string>",
    "password": "<string>"
  },
  "id": 0
}
```

Copilot AI Feb 8, 2026

The note says the data field should be a JSON string, but all subsequent examples show data as a JSON object. This is ambiguous for implementers; either update the note (e.g., the whole message is a JSON string over the socket) or update all command examples to show data as a stringified JSON payload.

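For comparison, the "data is a JSON string" reading of the note would look like this on the wire: the inner payload is stringified first, then embedded in the outer message. This is only one interpretation of the ambiguous note; `buildRegMessage` is an illustrative helper, not part of the spec:

```typescript
// Outer message shape under the "data is a JSON string" interpretation.
interface WsMessage {
  type: string;
  data: string; // stringified JSON payload, per the note
  id: 0;
}

// Build the full frame sent over the socket: the inner object is stringified
// before the outer message itself is serialized.
function buildRegMessage(name: string, password: string): string {
  const msg: WsMessage = {
    type: "reg",
    data: JSON.stringify({ name, password }),
    id: 0,
  };
  return JSON.stringify(msg);
}
```

Under this reading, a receiver must call `JSON.parse` twice: once for the frame and once for `data`. Whichever interpretation the authors intend, the examples and the note should agree.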
Comment on lines +11 to +22
- Only Fastify's built-in Pino logger should be used for logging — no additional logging libraries are allowed
- Use 24.x.x version (24.10.0 or higher) of Node.js

## Implementation details

1. **Pino Logger Configuration**

Configure Fastify's built-in Pino logger with the following settings:
- Log level should be configurable via the `LOG_LEVEL` environment variable (default: `info`)
- Supported levels: `trace`, `debug`, `info`, `warn`, `error`, `fatal`
- In development mode, use `pino-pretty` for human-readable output
- In production mode, use structured JSON format

Copilot AI Feb 8, 2026

The technical requirements state "no additional logging libraries are allowed", but the implementation details require pino-pretty in development mode. Since pino-pretty is a separate dependency, this is contradictory—please either explicitly allow pino-pretty (and clarify it's the only extra), or adjust the requirement to stay within Fastify/Pino built-ins without extra packages.

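If pino-pretty is explicitly allowed as the single extra dependency, the configuration could look like the sketch below. The `transport: { target: "pino-pretty" }` shape is Pino's documented transport mechanism (v7+), and reading `LOG_LEVEL` with an `info` default follows the assignment text; the option-building helper itself is an assumption for illustration:

```typescript
// Sketch: build the options object passed to Fastify's `logger` option.
// Assumes pino-pretty is permitted in development; everything else stays
// within Fastify's built-in Pino logger.
function buildLoggerOptions(env: Record<string, string | undefined>) {
  const level = env.LOG_LEVEL ?? "info"; // default per the assignment
  const isDev = env.NODE_ENV !== "production";
  const options: { level: string; transport?: { target: string } } = { level };
  if (isDev) {
    // pino-pretty for human-readable development output; omitted in
    // production so logs remain structured JSON.
    options.transport = { target: "pino-pretty" };
  }
  return options;
}
```

Usage would be along the lines of `Fastify({ logger: buildLoggerOptions(process.env) })`.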
Comment on lines +106 to +122
Response: **Server-Sent Events (SSE) stream**

The response should use `Content-Type: text/event-stream` and stream tokens as they arrive from the LLM:
```
data: {"token": "Hello"}

data: {"token": " there"}

data: {"token": "!"}

data: {"done": true, "sessionId": "abc-123", "usage": {"promptTokens": 50, "completionTokens": 12}}

```

- Each SSE event contains either a `token` (partial response) or a `done` flag with metadata
- The server stores conversation history per `sessionId` (in-memory, up to the last 20 messages)
- If no `sessionId` is provided, a new session is created and the `sessionId` is returned in the final event

Copilot AI Feb 8, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

POST /api/chat is specified here as an SSE-only response, but score.md treats SSE streaming as an Advanced scope item (Basic scope just requires returning a response). Please align the assignment and scoring: either make non-streaming JSON the Basic requirement and SSE optional/Advanced, or update the scoring to require SSE in Basic.

Suggested change

````markdown
Response:

- **Basic requirement:** a standard JSON response with status code **200**:

  ```typescript
  interface ChatResponse {
    reply: string;      // full assistant message
    sessionId: string;  // id of the conversation
    usage: {
      promptTokens: number;
      completionTokens: number;
    };
  }
  ```

- **Advanced (optional):** support Server-Sent Events (SSE) streaming as an alternative response format. When SSE is implemented, the response should use `Content-Type: text/event-stream` and stream tokens as they arrive from the LLM:

  ```
  data: {"token": "Hello"}

  data: {"token": " there"}

  data: {"token": "!"}

  data: {"done": true, "sessionId": "abc-123", "usage": {"promptTokens": 50, "completionTokens": 12}}
  ```

- When using SSE, each event should contain either a `token` (partial response) or a `done` flag with metadata
- The server stores conversation history per `sessionId` (in-memory, up to the last 20 messages)
- If no `sessionId` is provided, a new session is created and the `sessionId` is returned either in the JSON response (basic) or in the final SSE event (advanced)
````
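Whichever scope SSE lands in, the framing itself is small: each event is a `data:` line carrying a JSON payload, terminated by a blank line. A sketch of encoding and decoding a single event (helper names are illustrative, not from the spec):

```typescript
// Encode one SSE event: a "data:" line plus the blank line that terminates it.
function sseEvent(payload: object): string {
  return `data: ${JSON.stringify(payload)}\n\n`;
}

// Decode a single event frame back into its JSON payload by stripping the
// "data: " prefix and the trailing blank line.
function parseSseData(frame: string): unknown {
  return JSON.parse(frame.slice("data: ".length).trimEnd());
}
```

This matches the frame shapes shown in the quoted spec (`{"token": ...}` events followed by a final `{"done": true, ...}` event).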