Combined (04) (PEXT) Practice Exam Test #34
MohamedRadwan-DevOps
Combined (04): (PEXT) Practice Exam Test
Document Type: PEXT (Practice Exam Test)
Scope: This document provides a combined practice exam set, containing exam-style questions spanning multiple modules of the learning path. Each question includes the correct answer and a concise explanation, designed to mirror real exam scenarios, reinforce key concepts, and highlight common pitfalls and misunderstandings.
Question: [091]
Where can you find Copilot/extension logs and deeper Electron logs in VS Code?
Options:
A. Only in GitHub.com → Your profile → Logs
B. Output panel / Extensions logs folder for Copilot; Developer Tools for Electron logs
C. Git → Show Git Output (only)
D. Copilot Chat → /logs command
Correct Answer(s): B
Explanation:
VS Code writes Copilot/extension diagnostics to the Output view and the Extensions logs folder. You can open logs via View → Output → “GitHub Copilot”, and open the extension log files using the Command Palette command “Developer: Open Extension Logs Folder”. Platform-level details live under Developer: Toggle Developer Tools → Console (Electron DevTools). Use these alongside “GitHub Copilot: Collect Diagnostics” for complete troubleshooting artifacts.
Tips and Tricks:
Important
For support, use “GitHub Copilot: Collect Diagnostics” together with Output logs, extension log files, and Electron DevTools Console. This combination gives a complete view of connectivity, configuration, and runtime errors, which significantly accelerates triage.
Source:
Viewing logs for GitHub Copilot in your environment (GitHub Docs)
Troubleshoot GitHub Copilot (GitHub Docs)
Introduction to GitHub Copilot (Microsoft Learn)
Get started with GitHub Copilot (Microsoft Learn)
Question: [092]
How do you run “GitHub Copilot: Collect Diagnostics” from the Command Palette in VS Code?
Options:
A. Ctrl/Cmd+` → type “diagnostics copilot”
B. View → Explorer → Diagnostics
C. Ctrl/Cmd+Shift+P → type “Collect Diagnostics” → select “GitHub Copilot: Collect Diagnostics”
D. F1 → “Open Logs” (only)
Correct Answer(s): C
Explanation:
Open the Command Palette (Ctrl+Shift+P on Windows/Linux, Cmd+Shift+P on macOS), type “Diagnostics” or “Collect Diagnostics”, then select “GitHub Copilot: Collect Diagnostics”. The command packages Copilot logs, environment details, versions, and connectivity information into an editor tab that you can inspect or share with support to help identify connectivity/extension issues.
Tips and Tricks:
Important
The diagnostics report includes a Reachability section showing whether Copilot can access required services. Attach this report together with Output logs and extension log files to accelerate firewall/proxy troubleshooting and reduce back-and-forth with support.
Source:
Viewing logs for GitHub Copilot in your environment (GitHub Docs)
Troubleshoot GitHub Copilot (GitHub Docs)
Troubleshooting network errors for GitHub Copilot (GitHub Docs)
Introduction to GitHub Copilot (Microsoft Learn)
Question: [093]
On which surfaces is Copilot Chat available?
Options:
A. Only VS Code and Visual Studio
B. GitHub.com, VS Code, Visual Studio, JetBrains IDEs, Eclipse, Xcode, GitHub Mobile, Windows Terminal
C. Only GitHub.com
D. Only IDEs (no browser or mobile)
Correct Answer(s): B
Explanation:
GitHub lists multiple surfaces for Copilot Chat, including GitHub.com, major IDEs (Visual Studio Code, Visual Studio, JetBrains IDEs, Eclipse, Xcode), GitHub Mobile, and Windows Terminal. Core chat behavior is similar across these environments, but effective availability still depends on your plan and organization/enterprise policy.
Tips and Tricks:
Important
Surface ≠ feature set. Copilot Chat is broadly available on GitHub.com, supported IDEs, GitHub Mobile, and Windows Terminal for eligible plans, while advanced features such as repository-aware chat on GitHub.com and enterprise agents are tied to Copilot Enterprise and governed by organization/enterprise policies.
Source:
GitHub Copilot features (GitHub Docs)
About GitHub Copilot Chat (GitHub Docs)
Chat with GitHub Copilot in your IDE (GitHub Docs)
Chat with GitHub Copilot in Windows Terminal (GitHub Docs)
Question: [094]
What’s the difference between Copilot Edits – Edit mode and Agent mode?
Options:
A. Edit = disables approvals; Agent = requires approvals
B. Edit = user-driven targeted edits; Agent = autonomous multi-step changes (can open a PR)
C. Edit = chat only; Agent = inline only
D. Edit = repository-aware; Agent = IDE-only
Correct Answer(s): B
Explanation:
Edit mode keeps you in control: you pick files, preview diffs, and accept changes incrementally. It is a user-driven, targeted edit experience where you select the scope, review suggested diffs, and apply or reject changes. Agent mode lets the Copilot coding agent decide which files and commands to run and perform autonomous multi-step work across files and tools, often creating or updating pull requests that you then review and merge.
Tips and Tricks:
Important
Governance remains: even in Agent mode, changes land via PR review. Copilot does not bypass branch protections, required approvals, or CI policies; you still own final review and merge decisions.
Source:
GitHub Copilot Edits in Visual Studio (Microsoft Learn)
About GitHub Copilot coding agent (GitHub Docs)
Agent mode 101: All about GitHub Copilot’s powerful mode (GitHub Blog)
Copilot ask, edit, and agent modes: What they do and when to use them (GitHub Blog)
Question: [095]
What do Copilot code reviews and PR summaries provide on GitHub.com?
Options:
A. Auto-merge PRs
B. AI review suggestions and natural-language summaries of changes
C. License scanning
D. Static code analysis only
Correct Answer(s): B
Explanation:
On GitHub.com, Copilot can generate review suggestions and summaries to accelerate PR understanding. Copilot code review can analyze pull requests and offer inline review comments and suggested changes, while Copilot PR summaries generate natural-language overviews of the changes so reviewers can quickly understand scope and intent. These aids do not replace mandatory reviews or testing; they help reviewers focus on high-risk areas by condensing diffs and spotting potential issues.
Tips and Tricks:
Important
Copilot enhances but doesn’t automate approval/merge. Branch protection rules, required reviewers, CI checks, and security scans remain the authoritative gates that must pass before a PR is merged.
Source:
About GitHub Copilot code review (GitHub Docs)
Using GitHub Copilot code review (GitHub Docs)
Creating a summary for a pull request with GitHub Copilot (GitHub Docs)
Leveling up code reviews and pull requests with GitHub Copilot (Microsoft Learn)
Question: [096]
For individuals, which statements about access and eligibility are correct?
Options:
A. Copilot Free is for personal use; Pro is paid but may be free for verified students/teachers/maintainers
B. Pro is only for organizations
C. Business is required for single developers
D. Enterprise includes Pro seats for free
Correct Answer(s): A
Explanation:
Copilot Free targets individuals getting started with a personal-use, limited-quota plan. Copilot Pro (and Pro+) are paid individual subscriptions, and Copilot Pro can be free for verified students, teachers, and maintainers of popular open-source projects. You do not need Business or Enterprise plans for a single developer using a personal GitHub account.
Tips and Tricks:
Important
Copilot Free has limited completions and premium requests and is not designed for organization-managed users. Copilot Pro / Pro+ unlock higher limits and premium models, while education/maintainer benefits specifically grant Pro, not Business or Enterprise seats.
Source:
About individual GitHub Copilot plans and benefits (GitHub Docs)
GitHub Copilot billing (GitHub Docs)
Getting free access to GitHub Copilot Pro (GitHub Docs)
Introduction to GitHub Copilot (Microsoft Learn)
Question: [097]
Who purchases and assigns seats for Business vs. Enterprise plans?
Options:
A. Developers purchase directly in the IDE
B. Org owners purchase/assign Business; Enterprise owners manage Enterprise subscriptions and seat assignment
C. Repository admins purchase both
D. Students purchase Enterprise
Correct Answer(s): B
Explanation:
For Copilot Business, organization owners subscribe at the organization level, purchase Copilot seats, and assign them to organization members. For Copilot Enterprise (or Business managed at the enterprise level), enterprise owners subscribe at the enterprise layer and centrally manage plans and seat assignment across organizations, delegating as needed while respecting organization boundaries.
Tips and Tricks:
Important
Seat management is still about assigning Copilot seats to individual users or teams. Organization owners manage seats for org-scoped Business plans, while enterprise owners manage or delegate seat assignment for enterprise-scoped plans, without bypassing existing organization boundaries and governance.
Source:
GitHub Copilot seat assignment (GitHub Docs)
Managing the GitHub Copilot plan for your organization (GitHub Docs)
Managing the GitHub Copilot plan for your enterprise (GitHub Docs)
Administer GitHub Copilot for your team (GitHub Docs)
Question: [098]
Which answer best captures the Copilot plan taxonomy and key highlights?
Options:
A. Free, Team, Premium
B. Free / Pro (individual), Pro+ (individual), Business (org), Enterprise (org)
C. Pro, Business, Enterprise only
D. Free, Team, Enterprise Server
Correct Answer(s): B
Explanation:
GitHub’s current plan taxonomy distinguishes individual plans (Copilot Free, Copilot Pro, Copilot Pro+) and organization/enterprise plans (Copilot Business, Copilot Enterprise). As you move up tiers, you gain governance and policy controls (such as usage reporting and content exclusion) and then enterprise capabilities (for example, SSO integrations, audit logs, repo-aware chat, and advanced data controls).
Tips and Tricks:
Important
Exam questions often test naming: the current taxonomy is Free, Pro, Pro+ for individuals and Business, Enterprise for organizations. Ignore legacy labels and base your answers on what the official GitHub Copilot Plans page shows today.
Source:
Plans for GitHub Copilot (GitHub Docs)
GitHub Copilot billing (GitHub Docs)
Getting started with a GitHub Copilot plan (GitHub Docs)
Introduction to GitHub Copilot (Microsoft Learn)
Question: [099]
What’s the practical contrast in how developers invoke Copilot features?
Options:
A. Inline suggestions: appear near the cursor (accept with Tab/Enter). Chat: open a chat panel and prompt (supports selections).
B. Both are automatically applied without user action
C. Only chat can generate code; inline cannot
D. Inline runs in the browser; chat runs only in IDEs
Correct Answer(s): A
Explanation:
Inline is a lightweight completion experience that appears as ghost text near the cursor and is accepted via keys such as Tab/Enter. Chat is invoked by opening a chat view or inline chat, then typing prompts that can use selected code or files as context. Both experiences run in supported IDEs, while Copilot Chat is also available on GitHub.com and Windows Terminal.
Tips and Tricks:
Important
Invocation mode does not change trust level: both inline and Chat suggestions are candidates that still require compilation, testing, and review. The exam distinction is mainly about how you start them (accept inline vs open chat), not different governance.
Source:
Code suggestions with GitHub Copilot (GitHub Docs)
Get IDE code suggestions with GitHub Copilot (GitHub Docs)
Chat with GitHub Copilot in your IDE (GitHub Docs)
GitHub Copilot features overview (GitHub Docs)
Question: [100]
Can Copilot help generate unit tests in supported IDEs?
Options:
A. No, only refactors are supported
B. Yes, use Copilot Chat or context prompts to generate tests for selected code
C. Only on Enterprise plans
D. Only in JetBrains IDEs
Correct Answer(s): B
Explanation:
Copilot Chat and prompt patterns support test generation for selected code. In supported IDEs you can select a function or file and ask Copilot to generate unit (or integration) tests, and Copilot will propose test cases, scaffolds, and assertions based on the surrounding context. Guides such as “Writing tests with GitHub Copilot” also describe using prompts or commands (for example, /tests) to generate tests.
Tips and Tricks:
Important
Copilot-generated tests may not cover all scenarios or match project style perfectly. Treat them as accelerators, not replacements, for human-designed test strategy, coverage analysis, and code review.
Source:
Write tests with GitHub Copilot (GitHub Docs)
Chat with GitHub Copilot in your IDE (GitHub Docs)
Getting started with prompts for GitHub Copilot Chat (GitHub Docs)
Test with GitHub Copilot in VS Code (VS Code Docs)
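To make the test-generation flow concrete, here is a sketch of what it can look like in practice. The `slugify` function and the test names are hypothetical, chosen only for illustration; the shape of the output (small pytest-style functions with one assertion per scenario) reflects what Copilot typically proposes when you select a function and ask it to generate unit tests.

```python
# Hypothetical function a developer might select before asking
# Copilot Chat (or the /tests command) to "generate unit tests".
def slugify(title: str) -> str:
    """Convert a title to a lowercase, hyphen-separated slug."""
    return "-".join(title.lower().split())


# The kind of pytest scaffold Copilot typically proposes: a happy-path
# case plus edge cases, using plain assert statements.
def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"


def test_slugify_collapses_whitespace():
    assert slugify("  Hello   World  ") == "hello-world"


def test_slugify_empty_string():
    assert slugify("") == ""
```

You would still review each generated case for coverage gaps, as the explanation above notes.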
Question: [101]
Is Windows Terminal a supported surface for Copilot Chat?
Options:
A. No, terminal isn’t supported
B. Yes, Windows Terminal is listed among supported Copilot Chat surfaces
C. Yes, but only on GHES
D. Only if you disable inline suggestions
Correct Answer(s): B
Explanation:
GitHub includes Windows Terminal among the supported surfaces for Copilot Chat. With Terminal Chat in Windows Terminal, you can ask Copilot for command suggestions, explanations, and one-off shell scripts directly in the terminal, alongside GitHub.com, IDEs, and GitHub Mobile.
Tips and Tricks:
Important
Copilot in Windows Terminal is available to eligible Copilot customers and still respects plan limits and enterprise policies. You must have Copilot access and configure Windows Terminal (Canary) → Terminal Chat to use GitHub Copilot as the provider.
Source:
Chat with GitHub Copilot in Windows Terminal (GitHub Docs)
Quickstart for GitHub Copilot in Windows Terminal (GitHub Docs)
GitHub Copilot Chat (GitHub Docs)
GitHub Copilot is now available in Windows Terminal (GitHub Blog)
Question: [102]
How can Copilot help with documentation tasks such as docstrings, README sections, and code comments?
Options:
A. It can only generate code, not docs
B. Use Copilot Chat and selection-based prompts to draft docstrings, comments, and README snippets
C. Docs are generated automatically on every commit
D. Only Enterprise plans can generate documentation
Correct Answer(s): B
Explanation:
Copilot supports documentation workflows. You can select code and ask Copilot Chat to “write docstrings,” “explain this function,” or “draft a README section”, and include style constraints such as tone, audience, and format. Inline suggestions also surface comment completions near the cursor. Copilot can iteratively refine these drafts based on follow-up prompts, for example “shorter,” “more beginner-friendly,” or “add examples.”
Tips and Tricks:
Important
Documentation assistance is available across supported surfaces such as IDEs and GitHub.com, but Copilot’s output is a draft. Always review for accuracy, confidentiality, and tone, especially for public READMEs or customer-facing documentation.
Source:
Getting started with prompts for GitHub Copilot Chat (GitHub Docs)
Prompt engineering for GitHub Copilot Chat (GitHub Docs)
Asking GitHub Copilot questions in your IDE (GitHub Docs)
GitHub Copilot tutorials (GitHub Docs)
Question: [103]
How can Copilot assist with debugging and fixing errors?
Options:
A. It automatically fixes all errors at build time
B. Use Copilot Chat to explain error messages/stack traces and propose fixes or refactors
C. Errors must be fixed manually; Copilot doesn’t help
D. Only Enterprise users can use Chat for debugging
Correct Answer(s): B
Explanation:
Copilot Chat can analyze error messages, stack traces, and selected code, then propose root-cause hypotheses, fixes, and refactors. You can paste error messages or stack traces into Chat, or invoke Copilot from debugging contexts, and ask it to explain the error, identify likely causes, and propose code changes. You remain in control: review the rationale, apply suggested changes via inline edits or Copilot Edits, and retest.
Tips and Tricks:
Important
Copilot augments debugging; it does not replace debuggers, tests, or reviews. Use it to reason about issues and propose fixes, while you still step through code, run tests, and perform code review before accepting changes.
Source:
Learning to debug with GitHub Copilot (GitHub Docs)
Debug errors with GitHub Copilot Chat (GitHub Docs)
Debug with GitHub Copilot in Visual Studio (Microsoft Learn)
How to debug code with GitHub Copilot (GitHub Blog)
Question: [104]
How can Copilot support a TDD (red→green→refactor) workflow?
Options:
A. By auto-approving PRs when tests pass
B. By generating test scaffolds/cases from selections and then helping refactor after tests pass
C. By disabling tests during development
D. By replacing the need for assertions
Correct Answer(s): B
Explanation:
Use Copilot Chat to generate tests from a selected function or from acceptance criteria (the red stage, where tests initially fail). Then iterate on the implementation with inline suggestions or Copilot Edits until the tests pass (the green stage). Finally, ask Copilot to propose refactors that preserve behavior while the tests guard against regressions (the refactor stage). You control acceptance at each step and decide what to test, when to stop, and how far to refactor.
Tips and Tricks:
Important
Copilot can accelerate TDD by drafting tests and refactor ideas, but you own the quality bar. You decide test quality, coverage thresholds, and refactoring decisions, and Copilot’s suggestions remain subject to your standards for coverage, style, and review.
Source:
Writing tests with GitHub Copilot (GitHub Docs)
Generate unit tests with GitHub Copilot Chat (GitHub Docs)
Develop unit tests using GitHub Copilot tools (Microsoft Learn)
GitHub for Beginners: Test-driven development with GitHub Copilot (GitHub Blog)
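The red→green→refactor loop described above can be sketched in miniature. The `fizzbuzz` example is an assumption chosen for brevity; the point is the ordering: a drafted test exists and fails first, then the implementation is iterated until it passes, then refactors run against that safety net.

```python
# Red: a test drafted first (for example via Copilot Chat from
# acceptance criteria). It fails until the implementation exists.
def test_fizzbuzz():
    assert fizzbuzz(3) == "Fizz"
    assert fizzbuzz(5) == "Buzz"
    assert fizzbuzz(15) == "FizzBuzz"
    assert fizzbuzz(7) == "7"


# Green: the simplest implementation that makes the test pass,
# iterated on with inline suggestions until it does.
def fizzbuzz(n: int) -> str:
    result = ""
    if n % 3 == 0:
        result += "Fizz"
    if n % 5 == 0:
        result += "Buzz"
    return result or str(n)


# Refactor: with the test as a safety net, you can now ask Copilot for
# behavior-preserving cleanups and rerun the test after each change.
test_fizzbuzz()
```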
Question: [105]
What actually happens when you accept an inline suggestion?
Options:
A. The code is auto-committed and pushed
B. The suggestion is inserted into your editor; you decide whether to keep, edit, or discard
C. The change is merged to main if CI passes
D. An audit log entry is always generated
Correct Answer(s): B
Explanation:
Inline suggestions appear near the cursor as ghost text and are accepted via Tab/Enter or editor-specific key bindings. When you accept an inline suggestion, the ghost text becomes real code in your editor buffer, just as if you had typed it yourself. The code is not automatically staged, committed, or pushed, and there is no auto-merge. You retain full control over further edits, staging, commits, and pushes through your normal Git workflow.
Tips and Tricks:
Important
Acceptance means inserting code into the editor, not approving or merging it. Exams may try to confuse “accepting a suggestion” with “approving a change,” but branch protection rules, required reviews, and CI checks are still the gates that control what gets merged.
Source:
Code suggestions with GitHub Copilot (GitHub Docs)
Get IDE code suggestions with GitHub Copilot (GitHub Docs)
Getting code suggestions in your IDE with GitHub Copilot (GitHub Docs)
Quickstart for GitHub Copilot (GitHub Docs)
Question: [106]
How can Copilot help you understand an unfamiliar file or component in your codebase?
Options:
A. It can’t, Copilot only writes code
B. Use selection/file prompts in Copilot Chat to summarize purpose, dependencies, and risks
C. Only repository-aware chat in Enterprise can explain files
D. You must upload files to a third-party site
Correct Answer(s): B
Explanation:
In supported IDEs, you can select a file or key regions of code and ask Copilot Chat to summarize the purpose, describe dependencies, and highlight potential risks or edge cases. On GitHub.com, you can open a file or repository and use Copilot to explain what the component does and how it fits into the project. You can chain prompts such as “explain this module,” then “list external dependencies and side effects,” then “suggest tests for critical paths” to move from a high-level overview to deeper analysis.
Tips and Tricks:
Important
IDE Chat handles selection/file context in your editor, while repository-aware chat on GitHub.com (Enterprise) can reason over an indexed view of the entire repo. Both help you understand unfamiliar code, but Enterprise repo-aware chat scales better to large, multi-folder systems.
Source:
Asking GitHub Copilot questions in your IDE (GitHub Docs)
Using GitHub Copilot to explore projects (GitHub Docs)
Quickstart for GitHub Copilot (GitHub Docs)
About GitHub Copilot Chat (GitHub Docs)
Question: [107]
Which is the best-crafted prompt?
Options:
A. Write a function
B. Write a Python function to reverse a string using slicing
C. Function in code
D. Suggest some code
Correct Answer(s): B
Explanation:
Effective prompts specify language, task, and method/constraints. Stating “Python” + “reverse a string” + “using slicing” removes ambiguity and guides the model to the intended pattern. A well-crafted prompt usually names the language, the action, the object, and any key constraint in a single concise sentence, so Copilot does less guessing and more targeted completion.
Tips and Tricks:
Important
Prompt quality is about signal density, not verbosity. Pack intent, scope, and constraints into a short prompt. In exams, the best answer is usually the one that explicitly names the language, task, and technique, instead of a generic “do something” request.
Source:
Prompt engineering for GitHub Copilot Chat (GitHub Docs)
Concepts for prompting GitHub Copilot (GitHub Docs)
Getting started with prompts for GitHub Copilot Chat (GitHub Docs)
Introduction to prompt engineering with GitHub Copilot (Microsoft Learn)
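The winning prompt in this question pins down language, task, and technique, so the expected completion is unambiguous. A completion consistent with “Write a Python function to reverse a string using slicing” would look like this (the function name is an assumption, not specified by the prompt):

```python
def reverse_string(text: str) -> str:
    """Return the input string reversed, using slice notation."""
    # A step of -1 walks the string backwards, which is exactly the
    # "using slicing" constraint the prompt named.
    return text[::-1]
```

Contrast this with “Write a function”, which leaves the language, the operation, and the method entirely to guesswork.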
Question: [108]
How do you improve ambiguous prompts?
Options:
A. Use shorter prompts
B. Provide more context and details
C. Avoid specifying language
D. Retry without changes
Correct Answer(s): B
Explanation:
Ambiguous prompts force the model to guess. Adding specific context (file or selection, domain facts), clear intent (what to build or change), and constraints (language, framework, format, performance or style requirements) reliably improves accuracy. Define inputs/outputs and minimal acceptance criteria so the model follows your intent instead of guessing; this matches the official guidance to avoid ambiguity by adding project context, clear goals, and specific requirements.
Tips and Tricks:
Important
Ambiguity is resolved by specificity, not by brevity. High-signal prompts align on intent, constraints, and available context; low-signal prompts make the model infer too much. In multi-language or multi-framework repos, explicitly name the language/framework and exact output format (pytest test, JSON schema, SQL) instead of hoping Copilot will guess correctly.
Source:
Prompt engineering for GitHub Copilot Chat (GitHub Docs)
Concepts for prompting GitHub Copilot (GitHub Docs)
Getting started with prompts for GitHub Copilot Chat (GitHub Docs)
Introduction to prompt engineering with GitHub Copilot (Microsoft Learn)
Question: [109]
Which technique most reliably boosts output quality?
Options:
A. Use vague prompts
B. Use detailed instructions with examples
C. Avoid comments in code
D. Skip specifying input or output
Correct Answer(s): B
Explanation:
Combining explicit instructions with small, concrete examples anchors the format, tone, and structure the model should match, reducing guesswork. The prompt-engineering guidance explicitly encourages “give examples” so Copilot can see the pattern you want and apply it to new cases, instead of inventing style and structure on its own.
Tips and Tricks:
Important
Examples act as pattern anchors. Even a very small, high-quality example can constrain structure and tone better than multiple paragraphs of prose, raising signal density without bloat. Exam answers that mention instructions + examples usually reflect this best-practice guidance.
Source:
Prompt engineering for GitHub Copilot Chat (GitHub Docs)
Concepts for prompting GitHub Copilot (GitHub Docs)
GitHub Copilot Chat Cookbook (GitHub Docs)
Getting started with prompts for GitHub Copilot Chat (GitHub Docs)
Question: [110]
Why does adding examples help?
Options:
A. Examples distract the AI
B. They align output with the desired style/pattern
C. They reduce Copilot’s ability to generate code
D. They slow completions
Correct Answer(s): B
Explanation:
Short few-shot examples show the model the shape you want (naming, layout, docstrings, tests), so generated output conforms to your pattern. Examples effectively “show, not tell” the desired style, structure, and tone, which is why prompt-engineering guidance recommends “give examples” so Copilot can generalize from patterns instead of guessing.
Tips and Tricks:
Important
Examples increase signal by demonstrating format and tone. One precise, project-idiomatic example often outperforms multiple paragraphs of general guidance. Treat examples as pattern anchors that pull Copilot toward the structure and style you actually want.
Source:
Prompt engineering for GitHub Copilot Chat (GitHub Docs)
Concepts for prompting GitHub Copilot (GitHub Docs)
GitHub Copilot Chat Cookbook (GitHub Docs)
Getting started with prompts for GitHub Copilot Chat (GitHub Docs)
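A minimal few-shot sketch shows why examples anchor the pattern. Everything here is hypothetical: the exemplar test given in the prompt demonstrates the naming and assertion style, and the second test shows the kind of completion that mirrors it for a new case.

```python
# Few-shot prompt pattern (illustrative):
#   Exemplar supplied in the prompt:
#       def test_parse_date_valid(): assert parse_date("2024-01-31") == ...
#   Request: "Write a test for invalid input in the same style."
from datetime import date, datetime


def parse_date(text: str) -> date:
    """Parse an ISO-style YYYY-MM-DD string into a date."""
    return datetime.strptime(text, "%Y-%m-%d").date()


# The exemplar the prompt provided:
def test_parse_date_valid():
    assert parse_date("2024-01-31") == date(2024, 1, 31)


# A completion that generalizes the exemplar's shape to the new case:
def test_parse_date_invalid():
    try:
        parse_date("31/01/2024")  # wrong format should raise
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")


test_parse_date_valid()
test_parse_date_invalid()
```

One precise exemplar carries the naming convention, the assertion style, and the error-handling idiom in a single pattern.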
Question: [111]
Why is context crucial in prompts?
Options:
A. It slows responses
B. It helps Copilot generate relevant and accurate suggestions
C. It prevents completions
D. It reduces security
Correct Answer(s): B
Explanation:
Copilot conditions on the surrounding code, the current file, and your prompt. Because Copilot does prediction, not execution, the closer and richer the context around the prompt, the more relevant and accurate the suggestions. Prompts that reference the right file/selection, types, interfaces, and data shapes give Copilot a much clearer target than generic, repo-agnostic requests.
Tips and Tricks:
Important
Context is one of the highest-impact signals. Context + constraints consistently outperform global prompts with no local code. Exam-wise, prefer answers that mention nearby code, selections, or file context over those that suggest asking Copilot “from scratch” with no context.
Source:
Code suggestions with GitHub Copilot (GitHub Docs)
Concepts for prompting GitHub Copilot (GitHub Docs)
Asking GitHub Copilot questions in your IDE (GitHub Docs)
Getting started with GitHub Copilot in VS Code (VS Code Docs)
Question: [112]
What should you do if Copilot generates irrelevant suggestions?
Options:
A. Stop using Copilot
B. Refine or rephrase the prompt with more context
C. Use shorter prompts
D. Disable duplication detection
Correct Answer(s): B
Explanation:
Irrelevance usually means missing or misaligned signals. Instead of giving up or just retrying, you improve quality by refining the prompt with more context and clearer intent: add a selection/file, specify inputs/outputs, name the language/tooling, and set constraints so Copilot has less room to guess.
Tips and Tricks:
Important
Each refinement should intentionally remove a known ambiguity. You’re not just “trying again”; you’re adding intent, context, and constraints so the next suggestion has less room to drift. Shorter prompts alone rarely fix irrelevance—better, richer prompts do.
Source:
Prompt engineering for GitHub Copilot Chat (GitHub Docs)
Concepts for prompting GitHub Copilot (GitHub Docs)
GitHub Copilot Chat Cookbook (GitHub Docs)
Getting started with prompts for GitHub Copilot Chat (GitHub Docs)
Question: [113]
Which practice helps Copilot match project style?
Options:
A. Avoid adding comments
B. Add snippets and style examples
C. Use vague instructions
D. Skip output format
Correct Answer(s): B
Explanation:
Supplying style exemplars (naming, error handling, test layout, docstrings) guides Copilot to mirror your project’s conventions. Prompt-engineering guidance explicitly recommends providing project-idiomatic examples so Copilot can follow your patterns instead of inventing its own style.
Tips and Tricks:
Important
Style is best taught with code, not prose. A small, accurate snippet sets stronger constraints than a long paragraph describing conventions. For exams, answers that mention snippets and style examples are aligned with GitHub’s official prompt-engineering guidance.
Source:
Prompt engineering for GitHub Copilot Chat (GitHub Docs)
Concepts for prompting GitHub Copilot (GitHub Docs)
GitHub Copilot Chat Cookbook (GitHub Docs)
Introduction to prompt engineering with GitHub Copilot (Microsoft Learn)
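As a sketch of “teach style with code, not prose”: the exemplar below (entirely hypothetical names) encodes a project’s conventions, guard clauses, `ValueError` with a message, and Google-style docstrings. Including it in the prompt constrains new completions far more tightly than describing those rules in sentences.

```python
# Style exemplar included in the prompt so Copilot mirrors the
# project's conventions (guard clause, ValueError message, docstring).
def get_user(user_id: int) -> dict:
    """Fetch a user record.

    Args:
        user_id: Positive database identifier.

    Raises:
        ValueError: If user_id is not positive.
    """
    if user_id <= 0:
        raise ValueError(f"user_id must be positive, got {user_id}")
    return {"id": user_id}


# With that exemplar in context, a request like "add get_order in the
# same style" tends to reproduce the same guard clause and layout:
def get_order(order_id: int) -> dict:
    """Fetch an order record.

    Args:
        order_id: Positive database identifier.

    Raises:
        ValueError: If order_id is not positive.
    """
    if order_id <= 0:
        raise ValueError(f"order_id must be positive, got {order_id}")
    return {"id": order_id}
```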
Question: [114]
Which statement best describes what Copilot relies on during inference?
Options:
A. Runtime execution of your code
B. Your prompts, file contents, and surrounding code context
C. Web searches via Bing
D. Pre-stored templates
Correct Answer(s): B
Explanation:
Copilot performs contextual prediction, not execution. It uses your prompt, the current file and surrounding code, and other allowed context (for example, chat history or repo index) to generate suggestions. It does not run your program or call external web search at inference time; quality depends on how good the available context is.
Tips and Tricks:
Important
Copilot doesn’t execute your program or browse the web; it infers from available context (prompts + code). Exam answers that mention prompts, file contents, and surrounding code context are correct; options that mention runtime execution or Bing/web search are distractors.
Source:
Code suggestions with GitHub Copilot (GitHub Docs)
Prompt engineering for GitHub Copilot Chat (GitHub Docs)
Concepts for prompting GitHub Copilot (GitHub Docs)
Using GitHub Copilot in your IDE: Tips, tricks, and best practices (GitHub Blog)
Question: [115]
What does content exclusion let organizations do (and why is it relevant to prompting)?
Options:
A. Speed up Copilot
B. Prevent specified repos/paths/file types/patterns from being used as input context
C. Disable Copilot entirely
D. Publish private code
Correct Answer(s): B
Explanation:
Content exclusion lets organizations with Copilot Business or Enterprise define what input context Copilot can see. Admins can exclude specific repositories, directories/paths, file types, or pattern-based rules, preventing that content from being used as context for suggestions in IDEs and on GitHub.com, even if users prompt near it. This keeps secrets and sensitive code out of Copilot’s prompt context.
Tips and Tricks:
Important
Think inputs vs outputs: Content exclusion is an org/enterprise governance control that limits the inputs Copilot can see, starting at Business. Code referencing governs outputs and their similarity to public code. Exams frequently test this split, so map questions about “what Copilot can see” to content exclusion on Business/Enterprise plans.
Source:
Content exclusion for GitHub Copilot (GitHub Docs)
Configure content exclusion for GitHub Copilot (GitHub Docs)
Plans for GitHub Copilot (GitHub Docs)
Content exclusion for GitHub Copilot in IDEs is generally available (GitHub Blog)
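As an illustrative sketch only: repository-level content exclusion is configured as a list of path patterns in the repository’s Copilot settings. The paths below are assumptions invented for the example; consult the GitHub Docs content exclusion page for the current, authoritative syntax.

```yaml
# Hypothetical repository-level exclusion list (Settings → Copilot →
# Content exclusion). Matching files are never used as prompt context.
- "/src/secrets/*"
- "**/*.env"
- "secrets.json"
```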
Question: [116]
Which Copilot plans include content exclusion?
Options:
A. Free and Individual
B. Business and Enterprise
C. Individual only
D. Free only
Correct Answer(s): B
Explanation:
Content exclusion is an organization-level control available on Copilot Business and Copilot Enterprise. It lets org and enterprise admins define which repositories, directories/paths, file types, or patterns Copilot can use as input context in IDEs and on GitHub.com. Individual plans do not expose these admin governance features.
Tips and Tricks:
Important
Scope boundary: Content exclusion starts at Copilot Business and extends to Copilot Enterprise. It is an org/enterprise capability, not an individual plan feature, and it governs what Copilot can read as context, not whether outputs are similar to public code.
Source:
Content exclusion for GitHub Copilot (GitHub Docs)
Configure content exclusion for GitHub Copilot (GitHub Docs)
Plans for GitHub Copilot (GitHub Docs)
Choosing your enterprise’s plan for GitHub Copilot (GitHub Docs)
Question: [117]
Which prompt gives Copilot the clearest output target?
Options:
A. “Summarize this.”
B. “Summarize this function in 3 bullets for junior devs; include inputs, outputs, and one caveat.”
C. “Explain code.”
D. “Write notes.”
Correct Answer(s): B
Explanation:
Stating the audience, length/structure, and must-include items creates a concrete target. A prompt like “summarize this function in 3 bullets for junior devs; include inputs, outputs, and one caveat” defines format (bullets), length (3), audience (junior devs), and required content (inputs/outputs/caveat), so Copilot has a very clear output shape and less room to guess.
Tips and Tricks:
Important
Prompt quality is about signal density, not verbosity. The best exam answer is the one that explicitly defines structure, length, audience, and required fields, giving Copilot a clear, exam-ready target instead of an open-ended “summarize” or “explain” request.
Source:
Prompt engineering for GitHub Copilot Chat (GitHub Docs)
Concepts for prompting GitHub Copilot (GitHub Docs)
Getting started with prompts for GitHub Copilot Chat (GitHub Docs)
Introduction to prompt engineering with GitHub Copilot (Microsoft Learn)
Question: [118]
When asking Copilot to refactor code, which prompt is best?
Options:
A. “Improve this.”
B. “Refactor to pure functions; no side effects; keep same public API; add docstrings; return early on invalid input.”
C. “Make it cleaner.”
D. “Rewrite completely.”
Correct Answer(s): B
Explanation:
Refactor prompts work best when constraints are explicit: style (pure functions, no side effects), compatibility (keep the same public API and behavior), non-functional requirements (add docstrings), and guardrails (return early on invalid input). This matches refactoring guidance that structure should change but observable behavior must remain the same and tests should continue to pass.
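As a minimal illustration of those constraints (the function and its contract are invented for this sketch, not taken from any real codebase), a refactor toward a pure function keeps the public API while adding a doc comment and an early return:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// NormalizeName trims and lower-cases a user-supplied name.
// Refactored as a pure function: same public API and observable
// behavior, no shared state, and an early return on invalid input.
func NormalizeName(name string) (string, error) {
	trimmed := strings.TrimSpace(name)
	if trimmed == "" { // guardrail: return early on invalid input
		return "", errors.New("name must not be blank")
	}
	return strings.ToLower(trimmed), nil
}

func main() {
	got, err := NormalizeName("  Ada Lovelace ")
	fmt.Println(got, err)
}
```

Because the signature and behavior are preserved, existing callers and tests keep passing, which is exactly the compatibility constraint the prompt spells out.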
Tips and Tricks:
Important
Make the target unambiguous: name what must change (structure, style) and what must not change (contract, behavior). Exam questions favor prompts that clearly protect API/behavior while guiding structural changes; options that say “rewrite completely” or omit compatibility constraints are usually wrong for refactor scenarios.
Source:
Refactor code with GitHub Copilot (GitHub Docs)
Develop unit tests using GitHub Copilot tools (Microsoft Learn)
Refactor existing code using GitHub Copilot (lab) (Microsoft Learn Lab)
Introduction to prompt engineering with GitHub Copilot (Microsoft Learn)
Question: [119]
Which prompt best reduces hallucinations when generating API code?
Options:
A. “Use the Foo API.”
B. “Use Foo API v3; only endpoints /users/{id}, /users/search; TypeScript; fetch; no undocumented fields; include error handling for 4xx/5xx.”
C. “Write users code.”
D. “Guess the latest endpoints.”
Correct Answer(s): B
Explanation:
Hallucinations drop when you fix the API version, restrict which endpoints can be used, and specify language and tooling. A prompt like “Use Foo API v3; only endpoints /users/{id}, /users/search; TypeScript; fetch; no undocumented fields; include error handling for 4xx/5xx” defines a constrained, testable contract instead of an open-ended request, and “no undocumented fields” explicitly discourages Copilot from inventing properties or methods.
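The prompt in option B targets TypeScript and fetch; purely as a language-neutral sketch of the same constraints (the Foo API, its endpoints, and the server stub below are all hypothetical), the "allowlist the endpoints, handle 4xx/5xx" contract looks like this in Go:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"regexp"
)

// Only the documented endpoints are allowed; anything else is
// rejected before a request is made, mirroring the prompt's
// "only /users/{id}, /users/search" constraint.
var allowed = []*regexp.Regexp{
	regexp.MustCompile(`^/users/[0-9]+$`),
	regexp.MustCompile(`^/users/search$`),
}

// getUsers calls an allowed endpoint and treats any 4xx/5xx
// status as an error instead of silently returning the body.
func getUsers(base, path string) (string, error) {
	ok := false
	for _, re := range allowed {
		if re.MatchString(path) {
			ok = true
			break
		}
	}
	if !ok {
		return "", fmt.Errorf("endpoint %q is not in the documented allowlist", path)
	}
	resp, err := http.Get(base + path)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 400 { // explicit 4xx/5xx handling
		return "", fmt.Errorf("API returned %d for %s", resp.StatusCode, path)
	}
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	// Hypothetical stand-in for the real API so the sketch runs offline.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, `{"path":%q}`, r.URL.Path)
	}))
	defer srv.Close()

	fmt.Println(getUsers(srv.URL, "/users/42"))
	fmt.Println(getUsers(srv.URL, "/admin")) // rejected by the allowlist
}
```

The point is that every claim in the prompt becomes something checkable in the code: a version, a fixed endpoint list, and explicit status handling.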
Tips and Tricks:
Important
For hallucination questions, look for prompts that lock down version and endpoints and forbid undocumented fields. Vague prompts like “use the Foo API” or “guess the latest endpoints” actively encourage speculative behavior, while constrained prompts turn Copilot’s output into verifiable stubs you can test immediately.
Source:
Prompt engineering for GitHub Copilot Chat (GitHub Docs)
Concepts for prompting GitHub Copilot (GitHub Docs)
How to write better prompts for GitHub Copilot (GitHub Blog)
Introduction to prompt engineering with GitHub Copilot (Microsoft Learn)
Question: [120]
You want table-driven unit tests. Which prompt is best?
Options:
A. “Write tests.”
B. “Generate table-driven tests in Go for Parse(), covering empty, invalid, edge lengths; include names and wantErr.”
C. “Test everything.”
D. “Add some asserts.”
Correct Answer(s): B
Explanation:
Specifying the language (Go), the test style (“table-driven tests”), the target function (Parse()), and the cases/fields (“covering empty, invalid, edge lengths; include names and wantErr”) gives Copilot a concrete, idiomatic template to expand. This matches the common Go pattern where tests use a []struct{ name string; input …; want …; wantErr bool } table and iterate over named cases with the standard testing package; fields like name, input, want, and wantErr mean readability and diagnostics are built in.
Tips and Tricks:
Important
Tests are format-sensitive: the more you constrain layout, style, and cases in the prompt (language, table-driven structure, edge cases, fields), the higher the chance Copilot matches your project’s testing standards on the first try. Exam answers that pin language + test style + edge cases + fields are usually the correct choice.
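A minimal sketch of the shape such a prompt tends to produce. Parse here is a hypothetical stand-in, and the loop runs in main so the sketch is self-contained; in a real project it would live in a TestParse(t *testing.T) function using the standard testing package:

```go
package main

import (
	"errors"
	"fmt"
	"strconv"
)

// Parse is a hypothetical target function: it converts a decimal
// string to an int and rejects empty or non-numeric input.
func Parse(s string) (int, error) {
	if s == "" {
		return 0, errors.New("empty input")
	}
	n, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("invalid input %q: %w", s, err)
	}
	return n, nil
}

func main() {
	// Table-driven cases in the shape the explanation describes:
	// a []struct with name, input, want, and wantErr fields.
	cases := []struct {
		name    string
		input   string
		want    int
		wantErr bool
	}{
		{name: "empty", input: "", wantErr: true},
		{name: "invalid", input: "abc", wantErr: true},
		{name: "single digit", input: "7", want: 7},
		{name: "edge length", input: "2147483647", want: 2147483647},
	}
	for _, tc := range cases {
		got, err := Parse(tc.input)
		if (err != nil) != tc.wantErr {
			fmt.Printf("%s: unexpected error state: %v\n", tc.name, err)
			continue
		}
		if err == nil && got != tc.want {
			fmt.Printf("%s: got %d, want %d\n", tc.name, got, tc.want)
			continue
		}
		fmt.Printf("%s: ok\n", tc.name)
	}
}
```

Because each case is named, a failing run points straight at the scenario that broke, which is why the prompt's "include names and wantErr" clause matters.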
Source:
Writing tests with GitHub Copilot (GitHub Docs)
Asking GitHub Copilot questions in your IDE (GitHub Docs)
GitHub for Beginners: Test-driven development with GitHub Copilot (GitHub Blog)
Develop unit tests using GitHub Copilot tools (Microsoft Learn)