
Conversation

@eatyourpeas
Member

Hide output card initially; show on generate/import; add import/dismiss for LLM specs; add back/start-new buttons; add edit-in-place for generated code; show spinner during requests.

- Add tabbed interface to generator.html with Manual Form and AI Assistant tabs
- Implement AI chat assistant using Azure Ollama (qwen model) for TOML spec generation
- Add a parseMarkdown() function to format code blocks in chat responses correctly
- Update chat_service.py to wrap TOML specs in markdown code-block markers (see the first sketch after this list)
- Fix Azure Ollama authentication: use the Ocp-Apim-Subscription-Key header instead of a Bearer token (sketched below)
- Extend the API timeout from 60s to 300s to accommodate Ollama inference latency
- Add python-dotenv to the dependencies for local environment variable loading (sketched below)
- Update docker-compose.yml to load environment variables from .env file
- Configure environment variables: OLLAMA_BASE_URL, OLLAMA_API_KEY, OLLAMA_MODEL
- Add Magic Wand SVG icon for AI Assistant tab with sparkle effects
- Implement a secure API proxy pattern: all external URLs and keys are handled server-side only (see the sketch after the closing paragraph)
- Add error handling with timeout detection and user-friendly error messages
- Display chat messages in bubbles with proper markdown formatting
- Maintain conversation history for multi-turn interactions (sketched below)
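The fence-wrapping change in chat_service.py might look like this minimal sketch; the helper name `wrap_toml_spec` and the TOML-detection heuristic are illustrative, not the PR's actual code:

```python
import re

def wrap_toml_spec(reply: str) -> str:
    """Fence bare TOML specs so the frontend can render them as code blocks."""
    if "```" in reply:
        return reply  # the model already fenced its output
    # Heuristic: a [section] header or a key = value pair at the start of
    # a line suggests the reply is a TOML spec rather than plain prose.
    if re.search(r"^\s*(\[[\w.-]+\]|[\w-]+\s*=)", reply, re.MULTILINE):
        return "```toml\n" + reply.strip() + "\n```"
    return reply
```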
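The authentication and timeout fixes together could look like the sketch below. `call_ollama` is a hypothetical name and the surrounding service code is assumed; `/api/chat` is the standard Ollama chat endpoint, and Ocp-Apim-Subscription-Key is the header Azure API Management expects:

```python
import requests

OLLAMA_TIMEOUT = 300  # seconds; raised from 60 to cover slow inference

def call_ollama(base_url: str, api_key: str, payload: dict) -> dict:
    # Azure API Management authenticates via the subscription-key header,
    # not "Authorization: Bearer <token>".
    headers = {"Ocp-Apim-Subscription-Key": api_key}
    try:
        resp = requests.post(f"{base_url}/api/chat", json=payload,
                             headers=headers, timeout=OLLAMA_TIMEOUT)
        resp.raise_for_status()
        return resp.json()
    except requests.Timeout:
        # Timeout detection: surface a friendly message, not a raw traceback.
        raise RuntimeError("The AI assistant took too long to respond; "
                           "please try again.")
```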
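Loading the three environment variables locally with python-dotenv is a one-liner plus the reads; the "qwen" fallback default here is an assumption:

```python
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env during local development; harmless if absent

OLLAMA_BASE_URL = os.environ["OLLAMA_BASE_URL"]   # required
OLLAMA_API_KEY = os.environ["OLLAMA_API_KEY"]     # required
OLLAMA_MODEL = os.getenv("OLLAMA_MODEL", "qwen")  # default is an assumption
```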
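Multi-turn context works by resending the accumulated message list with every request. A sketch, reusing `call_ollama` and the OLLAMA_* settings from the sketches above; `ask` and the system prompt are illustrative:

```python
# The full history is resent each turn so the model sees prior context.
history: list[dict] = [
    {"role": "system", "content": "You generate TOML calculator specs."},
]

def ask(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    reply = call_ollama(OLLAMA_BASE_URL, OLLAMA_API_KEY,
                        {"model": OLLAMA_MODEL,
                         "messages": history, "stream": False})
    content = reply["message"]["content"]  # Ollama's chat response shape
    history.append({"role": "assistant", "content": content})
    return content
```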

This enables users to generate medical calculator specifications through an intuitive
AI-assisted workflow while keeping all authentication credentials secure on the backend.
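A minimal sketch of that proxy pattern, assuming a FastAPI backend (the framework is not shown in this PR): the browser only ever calls `/api/chat` on our own server, so OLLAMA_BASE_URL and OLLAMA_API_KEY never reach the client.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class ChatRequest(BaseModel):
    messages: list[dict]  # the accumulated conversation history

@app.post("/api/chat")
def chat(req: ChatRequest) -> dict:
    # call_ollama and the OLLAMA_* values come from the sketches above;
    # the key and base URL stay on the server, never in browser JS.
    try:
        return call_ollama(OLLAMA_BASE_URL, OLLAMA_API_KEY,
                           {"model": OLLAMA_MODEL,
                            "messages": req.messages, "stream": False})
    except RuntimeError as exc:
        raise HTTPException(status_code=504, detail=str(exc))
```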
@eatyourpeas merged commit 929510c into live on Dec 20, 2025
1 check passed
@eatyourpeas deleted the ollama branch on December 20, 2025 at 21:49
