- Clone this repository and open it in VS Code.
- Install the GitHub Copilot Chat extension and ensure you have access to Copilot agent.
- Use the Agent tab for all code generation.
- Use the Chat tab for brainstorming, planning, and research.
- Write clear, specific prompts (e.g., “Create a function to move a player sprite in four directions”).
- For this workshop, never write or edit code manually. Only review, test, and prompt for changes.
Doing complicated work without writing anything down is asking for trouble. The chat view is fine, but the insight is bound to a single chat session.
Use the agent to write and persist markdown files, then iterate on them together: documenting insights, collecting todos, defining a definition of done, deciding the order in which to perform tasks, etc.
The most important step is setting up a clear project structure. Don't even think about writing code yet.
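For illustration, a hypothetical layout for a small game project (names are examples, not prescriptions - the point is that each file has one clear responsibility):

```
src/
  player/
    movement.ts      # sprite movement only
    collision.ts     # collision checks only
  scenes/
    menu.ts
    game.ts
tests/
docs/
  TODO.md            # agent-maintained plan and definition of done
```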
Use the chat tab for brainstorming/research and the agent tab for writing actual code.
Swap between the different chat modes in VS Code's Copilot Chat plugin. Along with Ask and Agent, Edit is also available. However, you will most likely get better results with Agent.
Create "Rules for AI" custom instructions to modify your agent's behavior as you progress, or maintain a RulesForAI.md file. You can find examples for different use cases/languages online. Check out cursor.directory.
GitHub Copilot supports the automatic usage of both a single `.github/copilot-instructions.md` and one or more `.github/instructions/*.instructions.md` files with `applyTo` frontmatter that defines which files the instructions apply to. Check out "Add repository instructions" in the Copilot docs.
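For example, a path-scoped instructions file might look like this (the glob and the guidance lines are illustrative, not required values):

```markdown
---
applyTo: "src/**/*.ts"
---
- Use strict TypeScript; avoid `any`.
- Prefer small, single-responsibility modules.
```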
Don't just say "Extract text from PDF and generate a summary." That's two problems! Extract text first, then generate the summary. Solve one problem at a time.
Share your thoughts with the AI about how to tackle the problem. Once its solution steps look good, ask it to write the code.
💡 Pro Tip: Keep an eye on the agent when it's coding: if it starts doing something you don't like - STOP the execution and let it know your thoughts.
Since agent tools don't include all files in context (to reduce their costs), accurate file naming prevents code duplication. Make sure filenames clearly describe their responsibility.
It might feel unnecessary when your project is small, but when it grows, tests will be your hero.
If you don't, you will lose 4 months of work like this guy [Reddit post]
When you want to solve a new problem, start a new chat.
It's tempting to just accept code that works and move on. But there will be times when AI can't fix your bugs - that's when you need to get your hands dirty (the main reason non-technical people still need developers).
When I tried integrating a new payment gateway, the model hallucinated the API. Once I provided the official docs, it got it right.
If AI can't find the problem in the code and is stuck in a loop, ask it to insert debugging statements. AI is excellent at debugging, but sometimes needs your help to point it in the right direction.
AI can make mistakes or introduce subtle bugs. Always review and test generated code before merging.
When prompting the agent, include relevant code snippets or file paths. The more context you give, the better the output.
💡 Pro Tip: Some modern frameworks provide documentation in LLM-friendly formats. Consider pulling these docs directly into your repository for better AI context!
If you discover a useful prompt or workflow, document it or share it with others in your team.
Let the agent spin up a real browser and run end‑to‑end tests.
💡 Pro Tip: In VS Code you can install Playwright MCP through Extensions > MCP Servers and let Copilot do exploratory testing of the frontend on its own - without writing any Playwright scripts or setting up a Playwright server.
Prereqs
- Node 18+ and VS Code with Copilot Chat/Agent.
- Playwright browsers installed.
Setup
- Install test deps and browsers:
- npm i -D @playwright/test
- npx playwright install --with-deps
- Add Playwright config (playwright.config.ts) and a smoke test (tests/e2e/smoke.spec.ts).
- Add scripts in package.json:
- "test:e2e": "playwright test"
- "test:e2e:ui": "playwright test --ui"
- "test:e2e:headed": "playwright test --headed"
MCP integration (agent tools)
- If your Copilot Agent build supports MCP, register a local Playwright MCP server in your client’s MCP settings. Example: { "mcpServers": { "playwright": { "command": "node", "args": ["scripts/mcp-playwright.js"], "env": { "BASE_URL": "http://localhost:5173" } } } }
- Exact config keys vary by client/build. Check your agent’s docs for where to add MCP servers.
How to use (typical flow)
- Terminal 1: start your app (e.g., npm run dev) on http://localhost:5173
- Terminal 2: run tests manually (npm run test:e2e) or ask the agent:
- “Start the Playwright MCP tool against BASE_URL=http://localhost:5173 and run the smoke tests.”
- “Open /login, fill in demo creds from env, take a screenshot, and save it to test-results/.”
- “Add a test for the signup flow and run it headed.”
CI tip
- In CI, set BASE_URL to your preview URL and enable trace/video in playwright.config.ts for failures.
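Extending the playwright.config.ts from Setup, that could look like this (the retain-on-failure values are one reasonable choice, not the only one):

```typescript
// playwright.config.ts (CI-focused sketch)
import { defineConfig } from "@playwright/test";

export default defineConfig({
  use: {
    // In CI, BASE_URL points at the preview deployment.
    baseURL: process.env.BASE_URL ?? "http://localhost:5173",
    // Keep traces and videos only for failing tests.
    trace: "retain-on-failure",
    video: "retain-on-failure",
  },
});
```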

