
Overview

ritesh-1918 edited this page Apr 7, 2026 · 4 revisions


Note

Build your AI products right inside GitHub. Helpdesk.AI uses GitHub Models to author prompts, test models, and ship AI-powered features, with built-in support for prompt collaboration and lightweight CI/CD evaluation.


Integration Strategy

Create, evaluate, and iterate on prompts directly from the repository configuration.

  • Prompt Centralization: Prompts live in the repository as standard .prompt.yml files.
  • Evaluation Pipelines: CI/CD test triggers ensure new prompt versions do not degrade generation quality.
  • Model Playground: Allows rapid testing of new model parameters before deployment.
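As a sketch of the centralized format, a routing prompt might look like the following. Field names follow GitHub's `.prompt.yml` convention; the prompt name, model choice, and `{{ticket_text}}` placeholder are illustrative, not taken from the Helpdesk.AI repo:

```yaml
name: Helpdesk Routing Prompt
description: Routes incoming support tickets to the correct queue.
model: openai/gpt-4o
modelParameters:
  temperature: 0.4
messages:
  - role: system
    content: You are the Helpdesk.AI routing agent.
  - role: user
    content: "{{ticket_text}}"
```

Keeping prompts in files like this is what lets the CI/CD evaluation triggers diff and re-test every prompt change like any other code change.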

Docs

Watch the 3-minute demo to see these capabilities in action.

Video Placeholder



Important

Python Inference Setup

Drop this snippet into your core service code to instantiate the inference client locally. It requires a GitHub token with models access, exported as the `GH_MODELS_TOKEN` environment variable before running.

```python
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

# GitHub Models inference endpoint and the model to query.
endpoint = "https://models.github.ai/inference"
model = "openai/gpt-4o"

# GitHub token with models access, exported beforehand.
token = os.environ["GH_MODELS_TOKEN"]

client = ChatCompletionsClient(
    endpoint=endpoint,
    credential=AzureKeyCredential(token),
)

response = client.complete(
    messages=[
        SystemMessage("You are the Helpdesk.AI routing agent."),
        UserMessage("My computer is crashing repeatedly."),
    ],
    temperature=0.4,
    top_p=0.9,
    model=model,
)

print(response.choices[0].message.content)
```
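Live calls can hit transient rate limits or network errors. One way to harden the call above is a small retry helper with exponential backoff; this is an illustrative sketch, not part of the Azure SDK:

```python
import time


def with_retries(call, attempts=3, base_delay=1.0):
    """Invoke `call()`; on exception, retry with exponential backoff.

    Re-raises the last exception once `attempts` is exhausted.
    """
    for i in range(attempts):
        try:
            return call()
        except Exception:
            if i == attempts - 1:
                raise
            # Back off 1s, 2s, 4s, ... before the next attempt.
            time.sleep(base_delay * (2 ** i))
```

You could then wrap the completion call as `response = with_retries(lambda: client.complete(...))`. In production you would likely narrow the caught exception type to the SDK's throttling errors rather than a bare `Exception`.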
