Automate and streamline the correction of imperfect OpenAPI specifications by extracting structured insights from textual descriptions.
Open Spec Correct is a Python package designed to analyze user-provided textual descriptions of APIs and generate structured corrections for OpenAPI specifications. It leverages pattern matching and AI-driven insights to identify inconsistencies, errors, or missing details in API documentation, helping developers create more accurate and maintainable OpenAPI specs faster.
- Text-to-Spec Correction: Extracts structured insights from unstructured textual descriptions.
- Pattern Matching: Uses regex-based validation to ensure output correctness.
- LLM Flexibility: Uses LLM7 by default (free tier included) and supports custom LangChain-compatible LLMs (OpenAI, Anthropic, Google, etc.).
- Retry Mechanisms: Built-in retries ensure reliable extraction and correction.
- Environment-Aware: Respects the `LLM7_API_KEY` environment variable for seamless integration.
```bash
pip install open_spec_correct
```

```python
from open_spec_correct import open_spec_correct

user_input = """
API endpoint for fetching user data.
Path: /users/{id}
Method: GET
Query params: ?name=string, ?age=int
Response: 200 OK with JSON body { "id": int, "name": string }
"""
response = open_spec_correct(user_input)
print(response)
```

You can replace the default ChatLLM7 with any LangChain-compatible LLM (e.g., OpenAI, Anthropic, or Google Generative AI).

```python
from langchain_openai import ChatOpenAI
from open_spec_correct import open_spec_correct
llm = ChatOpenAI(model_name="gpt-4") # Replace with your preferred model
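# user_input is the textual API description from the Quick Start example above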
response = open_spec_correct(user_input, llm=llm)
print(response)
```

```python
from langchain_anthropic import ChatAnthropic
from open_spec_correct import open_spec_correct
llm = ChatAnthropic(model="claude-2")
response = open_spec_correct(user_input, llm=llm)
print(response)
```

```python
from langchain_google_genai import ChatGoogleGenerativeAI
from open_spec_correct import open_spec_correct
llm = ChatGoogleGenerativeAI(model="gemini-pro")
response = open_spec_correct(user_input, llm=llm)
print(response)
```

- Default: Uses the `LLM7_API_KEY` environment variable (see the sketch after this list).
- Manual Override: Pass the API key directly:

  ```python
  response = open_spec_correct(user_input, api_key="your_llm7_api_key")
  ```

- Free API Key: Register at LLM7 Token to get started.
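A minimal sketch of the default behaviour, assuming only that `LLM7_API_KEY` is read from the process environment (as noted in the feature list); the example description string is hypothetical:

```python
import os

from open_spec_correct import open_spec_correct

# Set the key for this process; in practice you would export
# LLM7_API_KEY in your shell or deployment environment instead.
os.environ["LLM7_API_KEY"] = "your_llm7_api_key"

# No api_key argument needed: the default ChatLLM7 backend is expected
# to pick the key up from the environment.
response = open_spec_correct('GET /health returns 200 OK with a JSON body { "status": string }')
print(response)
```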
- LLM7 Free Tier: Sufficient for most use cases.
- Custom Rate Limits: Upgrade via LLM7 dashboard or use a different LLM.
| Parameter | Type | Description |
|---|---|---|
| `user_input` | `str` | Textual description of the API to correct. |
| `api_key` | `Optional[str]` | LLM7 API key (optional if `LLM7_API_KEY` is set in the environment). |
| `llm` | `Optional[BaseChatModel]` | Custom LangChain LLM (e.g., `ChatOpenAI`, `ChatAnthropic`). Defaults to `ChatLLM7`. |
Returns a list of structured corrections (e.g., OpenAPI-compatible snippets) extracted from the input text.
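Since the return value is documented only as a list of structured corrections, a minimal sketch of consuming it might look like this (the exact element format, string or dict, is an assumption):

```python
from open_spec_correct import open_spec_correct

user_input = 'GET /users/{id} returns 200 OK with a JSON body { "id": int, "name": string }'
corrections = open_spec_correct(user_input)

# Iterate over the returned list; each element is assumed to be a
# printable OpenAPI-compatible snippet.
for i, correction in enumerate(corrections, start=1):
    print(f"--- Correction {i} ---")
    print(correction)
```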
MIT License (see LICENSE).
Report bugs or request features at: 🔗 GitHub Issues
Eugene Evstafev 📧 hi@euegne.plus 🌐 GitHub: chigwell