This project demonstrates how to create and push structured prompts to LangSmith Hub for evaluator use cases. Examples are provided in both TypeScript and Python.
## Project Structure

```
.
├── typescript/                      # TypeScript examples
│   ├── sentiment-evaluator.ts
│   └── sentiment-evaluator-with-model.ts
├── python/                          # Python examples
│   ├── sentiment_evaluator.py
│   ├── sentiment_evaluator_with_model.py
│   └── pyproject.toml
├── env.example                      # Environment variable template
└── README.md                        # This file
```
## Setup

1. Create a `.env` file in the project root (copy from `env.example`):

   ```bash
   cp env.example .env
   ```

2. Edit `.env` and add your API keys:

   ```bash
   LANGSMITH_API_KEY=your-langsmith-api-key-here
   OPENAI_API_KEY=your-openai-api-key-here
   ```

Note: The `.env` file is gitignored and will not be committed to the repository.
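The scripts read these keys from the environment at startup. As a minimal stdlib sketch (not part of the examples), you can check they are set before pushing anything — the variable names come from `env.example`; the helper name and messages are illustrative:

```python
import os

# The two keys the examples expect (from env.example). OPENAI_API_KEY is
# only needed for the scripts that attach a ChatOpenAI model.
REQUIRED = {
    "LANGSMITH_API_KEY": "all scripts",
    "OPENAI_API_KEY": "scripts that push a model",
}

def missing_keys(env=os.environ):
    """Return the names of required keys that are unset or empty."""
    return [name for name in REQUIRED if not env.get(name)]

if __name__ == "__main__":
    for name in missing_keys():
        print(f"Missing {name} (needed for {REQUIRED[name]})")
```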
## TypeScript

1. Navigate to the `typescript` directory:

   ```bash
   cd typescript
   ```

2. Install dependencies:

   ```bash
   npm install
   ```

### Push the StructuredPrompt only

```bash
cd typescript
npm run sentiment
```

This pushes a StructuredPrompt to the hub as `sentiment-evaluator`. The prompt includes:
- System and human messages for conversation sentiment evaluation
- A JSON schema with a `positive_sentiment` boolean field
- `strict: true` for schema validation
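The exact schema lives in `sentiment-evaluator.ts`. As a sketch, a strict-mode JSON schema with a single required boolean field has roughly this shape — the `positive_sentiment` field name comes from the README, while the title and descriptions are assumptions:

```python
import json

# Sketch of a strict-mode JSON schema for the evaluator's output.
# Strict structured output expects "additionalProperties": False and
# every property listed in "required".
sentiment_schema = {
    "title": "sentiment",
    "description": "Sentiment of the conversation.",
    "type": "object",
    "properties": {
        "positive_sentiment": {
            "type": "boolean",
            "description": "True if the user's sentiment is positive.",
        }
    },
    "required": ["positive_sentiment"],
    "additionalProperties": False,
}

print(json.dumps(sentiment_schema, indent=2))
```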
### Push the StructuredPrompt with a model

```bash
cd typescript
npm run sentiment:model
```

This pushes a StructuredPrompt chained with a ChatOpenAI model to the hub as `sentiment-evaluator-with-model`. The chain includes:
- The same StructuredPrompt as above
- A ChatOpenAI model (`gpt-4o-mini`) with structured output
- Ready to use without additional model configuration
### Files

- `typescript/sentiment-evaluator.ts` - Pushes only the StructuredPrompt (no model)
- `typescript/sentiment-evaluator-with-model.ts` - Pushes the StructuredPrompt with an OpenAI model
## Python

1. Install `uv` if you haven't already:

   ```bash
   curl -LsSf https://astral.sh/uv/install.sh | sh
   ```

2. Navigate to the `python` directory:

   ```bash
   cd python
   ```

3. Install dependencies using `uv`:

   ```bash
   uv sync
   ```

   Or install the dependencies directly:

   ```bash
   uv add langsmith langchain-core langchain-openai pydantic
   ```

### Push the StructuredPrompt only

```bash
uv run python sentiment_evaluator.py
```

Or, if dependencies are installed globally:

```bash
python sentiment_evaluator.py
```

This pushes a StructuredPrompt to the hub as `sentiment-evaluator`. The prompt includes:
- System and human messages for conversation sentiment evaluation
- A Pydantic schema with a `positive_sentiment` boolean field
- Structured output validation
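The schema in `sentiment_evaluator.py` can be sketched as a minimal Pydantic model like the following — only the `positive_sentiment` field name comes from the README; the class name and descriptions are illustrative:

```python
from pydantic import BaseModel, Field

# Sketch of the Pydantic schema used for structured output validation.
class Sentiment(BaseModel):
    """Sentiment of the conversation."""

    positive_sentiment: bool = Field(
        description="True if the user's sentiment is positive."
    )

# Pydantic validates structured model output against this schema:
result = Sentiment.model_validate({"positive_sentiment": True})
print(result.positive_sentiment)
```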
### Push the StructuredPrompt with a model

```bash
uv run python sentiment_evaluator_with_model.py
```

Or, if dependencies are installed globally:

```bash
python sentiment_evaluator_with_model.py
```

This pushes a StructuredPrompt chained with a ChatOpenAI model to the hub as `sentiment-evaluator-with-model`. The chain includes:

- The same StructuredPrompt as above
- A ChatOpenAI model (`gpt-4o-mini`)
- Structured output configured automatically by the StructuredPrompt
- A Pydantic model for schema validation
- Ready to use without additional model configuration
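The chaining step can be outlined roughly as below. This is a hypothetical sketch, not the script's exact code: it assumes `langchain-openai` and `langsmith` are installed and API keys are set, `prompt` stands for the StructuredPrompt built earlier, and `push_chain` is an illustrative helper name:

```python
def push_chain(prompt):
    """Chain the StructuredPrompt with ChatOpenAI and push it (sketch)."""
    # Imports kept local so the sketch can be read without the libraries.
    from langchain_openai import ChatOpenAI
    from langsmith import Client

    model = ChatOpenAI(model="gpt-4o-mini")
    chain = prompt | model  # structured output comes from the prompt's schema
    Client().push_prompt("sentiment-evaluator-with-model", object=chain)
    return chain
```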
### Files

- `python/sentiment_evaluator.py` - Pushes only the StructuredPrompt (no model)
- `python/sentiment_evaluator_with_model.py` - Pushes the StructuredPrompt with an OpenAI model (structured output configured automatically)
## Prompt Details

Both implementations create a StructuredPrompt for evaluating conversation sentiment:

- **System Message**: Instructions for evaluating user sentiment (positive/negative/neutral)
- **Human Message**: Template with an `{all_messages}` variable for the conversation
- **Schema**:
  - TypeScript: JSON schema with a required `positive_sentiment` boolean field (strict mode)
  - Python: Pydantic model with a `positive_sentiment` boolean field
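Template substitution for the human message can be sketched with plain string formatting — only the `{all_messages}` variable name comes from the prompt; the surrounding message text is illustrative:

```python
# Illustrative human-message template with the prompt's single variable.
HUMAN = "Evaluate the sentiment of this conversation:\n{all_messages}"

def render_human(all_messages: str) -> str:
    """Fill the template's {all_messages} variable, as the prompt does at runtime."""
    return HUMAN.format(all_messages=all_messages)

print(render_human("user: thanks, that fixed it!"))
```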
The prompts are pushed to LangSmith Hub and can be used in evaluators or other LangChain applications.
## Environment Variables

- `LANGSMITH_API_KEY` (required) - Your LangSmith API key
- `OPENAI_API_KEY` (required for the scripts that push a model) - Your OpenAI API key
For more information on managing prompts programmatically with LangSmith, see the official documentation.