damtaba commented Feb 2, 2026

Summary

This PR implements a FastAPI-based text summarization service that generates summaries and actionable CTAs (calls to action) using the OpenAI API. The implementation follows a ports-and-adapters (hexagonal) architecture with clean separation of concerns, and includes comprehensive error handling and documentation.

Key Features Implemented

  • Single Summary Generation - Synchronous endpoint for processing individual texts
  • Summary Retrieval - Fetch previously generated summaries by ID
  • Batch Async Processing - Concurrent processing of multiple texts with individual error handling
  • In-memory Storage - Repository pattern for storing generated summaries
  • Error Handling - Proper HTTP status codes and OpenAI error handling
  • API Documentation - Complete API documentation with examples (README_API.md)

Technical Stack

  • Python 3.12
  • FastAPI with OpenAPI documentation
  • OpenAI API integration
  • Poetry for dependency management
  • Pytest with coverage
  • Pre-commit hooks for code quality

Architecture Highlights

  • Ports & Adapters Pattern - Clean separation between domain logic and infrastructure
  • Dependency Injection - Using FastAPI's DI system
  • Type Safety - Pydantic models for request/response validation
  • Async Support - Batch requests processed concurrently with asyncio.gather
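The concurrent batch processing with per-item error handling can be sketched as follows. `summarize_one` is a hypothetical stand-in for the real OpenAI call; the key idea is `return_exceptions=True`, which lets one failed item report an error while the rest of the batch succeeds.

```python
import asyncio


# Hypothetical stand-in for the LLM call; a real implementation
# would call the OpenAI API here.
async def summarize_one(text: str) -> str:
    if not text:
        raise ValueError("empty text")
    return text[:20]


async def summarize_batch(texts: list[str]) -> list[dict]:
    # gather runs all coroutines concurrently; return_exceptions=True
    # means one failure does not abort the whole batch.
    results = await asyncio.gather(
        *(summarize_one(t) for t in texts),
        return_exceptions=True,
    )
    # Map each result to a success or per-item error payload.
    return [
        {"error": str(r)} if isinstance(r, Exception) else {"summary": r}
        for r in results
    ]


out = asyncio.run(summarize_batch(["hello world", ""]))
```

Here the empty input produces an error entry while the valid input still yields a summary, matching the "individual error handling" behavior described above.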

API Endpoints

  1. GET /summary_maker/get_summary_and_ctas - Generate summary and CTAs
  2. GET /summary_maker/get_summary_and_ctas_by_id - Retrieve by ID
  3. POST /summary_maker/async_get_summary_and_ctas - Batch async processing
  4. GET /extras/get_summaries_ids - List all summary IDs

Documentation

  • Interactive API docs at /docs (Swagger UI)
  • Comprehensive README_API.md with setup instructions and examples

Notes

  • GET vs POST: The first endpoint uses GET as specified in requirements, though POST would be more semantically appropriate for content generation
  • Storage: Currently uses in-memory storage; Redis caching suggested for production
  • Pre-commit: Added pre-commit hooks for code quality enforcement

damtaba added 6 commits January 29, 2026 15:19
- Added hiredis as a dependency in pyproject.toml.
- Introduced InMemorySummaryRepository for storing summaries.
- Updated configurations to include Redis settings.
- Enhanced main application with lifespan management for repository initialization.
- Created new extras router for handling summary requests.
- Refactored summary router to utilize dependency injection for LLM and repository.
- Improved request and response schemas for clarity and functionality.
- Added new dependencies: cfgv, distlib, filelock, identify, nodeenv, platformdirs, pre-commit, and ruff to poetry.lock and pyproject.toml.
- Updated .gitignore to exclude COMMS.md.
- Refactored code for better readability and consistency, including adjustments in configurations and dependency management.
- Removed obsolete TODO.md and unused files from the project.
- Enhanced error handling in the FastAPI application for OpenAI API interactions.
- Improved response schemas for clarity and added examples in the Pydantic models.
- Introduced a comprehensive README_API.md file detailing the API's functionality, setup, and usage.
- Added asynchronous support for summary generation in the LLM interface.
- Updated summary router to improve parameter handling and error management.
- Enhanced example usage in API endpoints for better clarity and usability.
