
Conversation

@GulSauce (Member) commented Dec 17, 2025

📢 Description

Please briefly describe this Pull Request!

✅ Checklist

  • Please list what reviewers should check!

Summary by CodeRabbit

  • New Features

    • Added references with titles and URLs to explanation responses
    • Implemented batch processing for concurrent quiz generation
    • Direct integration with OpenAI APIs for improved performance
  • Bug Fixes

    • Enhanced stability and response reliability
  • Refactor

    • Migrated quiz and explanation generation from Bedrock to OpenAI
    • Removed search functionality
    • Updated quiz formatting guidelines for consistency
    • Added performance monitoring and timing utilities


@GulSauce GulSauce merged commit 621abb6 into develop Dec 17, 2025
1 of 2 checks passed
coderabbitai bot commented Dec 17, 2025

Caution: Review failed. The pull request is closed.

Walkthrough

This pull request refactors the quiz and explanation generation architecture from a Bedrock-based asynchronous workflow coordinated via Redis pub/sub to a simpler GPT API batch processing model. It removes Bedrock-specific adapters, introduces new batch and single-request adapters for OpenAI, restructures prompt templates from procedural guidance to schema-focused specifications, and updates service layers to use the new batch processing flow with references support.

Changes

Cohort / File(s) | Summary
Adapter Layer Removals
app/adapter/process_on_local.py, app/adapter/request_generate_quiz_to_bedrock.py, app/adapter/request_generate_specific_explanation_to_bedrock.py
Deleted three modules implementing Bedrock-based async workflows, Redis pub/sub coordination, SQS messaging, local Lambda invocation, and result deduplication logic.
New Batch & Single Request Adapters
app/adapter/request_batch.py, app/adapter/request_single.py
Added request_text_batch for concurrent GPT chat completion batches via asyncio.to_thread, and request_responses_output_text for Responses API text extraction with fallback traversal logic.
Client Factory
app/client/oepn_ai.py
Added new singleton OpenAI client factory get_gpt_client() with environment-based initialization and caching.
Redis Client Changes
app/client/redis.py
Removed public subscribe method; eliminates pub/sub subscription mechanism.
Data Transfer Objects
app/dto/request/search_request.py, app/dto/response/specific_explanation_response.py
Deleted SearchRequest model; extended SpecificExplanationResponse with new Reference model and references field (default empty list).
Prompt Templates Refactoring
app/prompt/core/blank.py, app/prompt/core/multiple.py, app/prompt/core/ox.py
Restructured all three prompt modules from procedural, example-driven guidelines to schema-focused, modular output-structure specifications with explicit formatting constraints and consolidated rules.
Router Updates
app/router/generate_router.py
Replaced SearchRequest import with SpecificExplanationRequest/Response; removed legacy search endpoint; added get_explanation_service() factory and updated /specific-explanation handler to delegate to ExplanationService.
Service Layer
app/service/explanation_service.py, app/service/generate_service.py
Added new ExplanationService with generate_specific_explanation method; refactored GenerateService to use batch text generation via request_text_batch, enforce JSON schema validation, increase chunk size thresholds, and supply page references in responses.
Utilities & Dependencies
app/util/timing.py, requirements.txt
Added log_elapsed context manager for instrumented timing; added openai==1.57.0 dependency.

Sequence Diagram

```mermaid
sequenceDiagram
    participant Router
    participant Service as ExplanationService
    participant Adapter as request_single.py
    participant GPT as OpenAI Client
    participant Response as Response API

    Router->>Service: generate_specific_explanation(request)
    activate Service
    Service->>Service: Build selection_text & prompt
    Service->>Adapter: request_responses_output_text(gpt_content)
    activate Adapter
    Adapter->>GPT: get_gpt_client()
    GPT-->>Adapter: OpenAI client
    Adapter->>Response: Send with model, messages, tools
    activate Response
    Response-->>Adapter: resp with output_text or output[]
    deactivate Response
    Adapter->>Adapter: Extract text (fallback traversal)
    Adapter-->>Service: combined_text
    deactivate Adapter
    Service->>Service: Create references list
    Service-->>Router: SpecificExplanationResponse
    deactivate Service
    Router-->>Router: Return response
```
```mermaid
sequenceDiagram
    participant Router
    participant Service as GenerateService
    participant Adapter as request_batch.py
    participant ThreadPool as asyncio.to_thread
    participant GPT as OpenAI Client

    Router->>Service: generate(generate_request)
    activate Service
    Service->>Service: Chunk text, enforce schema
    Service->>Service: Build batch requests[]
    Service->>Adapter: request_text_batch(requests)
    activate Adapter
    par Concurrent Processing
        Adapter->>ThreadPool: to_thread(request_chat_completion_text)
        ThreadPool->>GPT: Chat completion API call
        GPT-->>ThreadPool: response text
        ThreadPool-->>Adapter: result
    end
    Adapter->>Adapter: Aggregate results
    Adapter-->>Service: List[Optional[str]]
    deactivate Adapter
    Service->>Service: Parse, filter, map to GeneratedResult
    Service-->>Router: List[ProblemResponse]
    deactivate Service
    Router-->>Router: Return quiz generation
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Areas requiring extra attention:

  • Batch processing correctness: Review request_text_batch async aggregation logic, per-item error handling, and asyncio.to_thread thread pool interaction for potential deadlocks or race conditions.
  • Schema validation: Verify _enforce_additional_properties_false recursion correctness and its application to ProblemSet JSON schema to ensure validation enforcement.
  • Error handling consistency: Confirm error fallback strategies in request_responses_output_text text extraction (nested traversal) and request_text_batch batch failures return appropriate empty/None values.
  • Prompt template impact: Validate that restructured prompt templates (blank, multiple, ox) produce expected output structure matching new schema-focused format.
  • Generate service refactoring: Trace flow from batch text generation → result parsing → GeneratedResult creation and verify sequence-to-page mapping is correctly populated.
  • Service layer integration: Confirm ExplanationService.generate_specific_explanation correctly uses request_responses_output_text and initializes empty references list as expected by response model.
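The schema-walk flagged in the second bullet can be sketched as below. The function name `_enforce_additional_properties_false` comes from the review notes; the recursion itself is an assumption about how such a helper typically works, setting `"additionalProperties": false` on every nested object so the model cannot emit fields outside the `ProblemSet` schema.

```python
# Hypothetical sketch of the recursive schema walk the review asks to verify:
# mark every object node in a JSON schema with additionalProperties: false.
from typing import Any

def _enforce_additional_properties_false(schema: Any) -> Any:
    if isinstance(schema, dict):
        if schema.get("type") == "object":
            # setdefault keeps an explicit True/False already in the schema
            schema.setdefault("additionalProperties", False)
        for value in schema.values():
            _enforce_additional_properties_false(value)
    elif isinstance(schema, list):
        for item in schema:
            _enforce_additional_properties_false(item)
    return schema

# Toy stand-in for a ProblemSet-like schema (structure assumed).
schema = {
    "type": "object",
    "properties": {
        "problems": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {"question": {"type": "string"}},
            },
        }
    },
}
_enforce_additional_properties_false(schema)
print(schema["additionalProperties"])  # → False
```

The review point to check is exactly this recursion: array `items` and nested `properties` must all be visited, or the strict-schema guarantee silently fails for inner objects.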
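The fallback traversal mentioned for `request_responses_output_text` might look like the following sketch; `extract_output_text` and the object shapes are hypothetical names used to illustrate the pattern of preferring `output_text` and walking `output[]` only when it is absent.

```python
# Hypothetical sketch of the Responses API fallback traversal: use
# resp.output_text when present, otherwise join text parts found by
# walking resp.output[] -> item.content[] -> content.text.
from types import SimpleNamespace

def extract_output_text(resp) -> str:
    text = getattr(resp, "output_text", None)
    if text:
        return text  # fast path: SDK already aggregated the text
    parts = []
    for item in getattr(resp, "output", []) or []:
        for content in getattr(item, "content", []) or []:
            piece = getattr(content, "text", None)
            if piece:
                parts.append(piece)
    return "".join(parts)

# Simulated response object with no aggregated output_text.
resp = SimpleNamespace(
    output_text=None,
    output=[SimpleNamespace(content=[SimpleNamespace(text="hello")])],
)
print(extract_output_text(resp))  # → hello
```

The error-handling question from the bullet above is whether this path returns an empty string (rather than raising) when neither attribute yields text, so the service layer sees a consistent fallback value.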

Poem

🐰 From Bedrock's async maze and Redis streams so deep,
We hop to GPT batches—simpler, cleaner sweep!
Prompts now tell their schema tales, no more procedural dance,
References bloom where explanations prance.
Thump thump! The new flow hops along just right. ✨


📜 Recent review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c00f72d and 841085c.

📒 Files selected for processing (17)
  • app/adapter/process_on_local.py (0 hunks)
  • app/adapter/request_batch.py (1 hunks)
  • app/adapter/request_generate_quiz_to_bedrock.py (0 hunks)
  • app/adapter/request_generate_specific_explanation_to_bedrock.py (0 hunks)
  • app/adapter/request_single.py (1 hunks)
  • app/client/oepn_ai.py (1 hunks)
  • app/client/redis.py (0 hunks)
  • app/dto/request/search_request.py (0 hunks)
  • app/dto/response/specific_explanation_response.py (1 hunks)
  • app/prompt/core/blank.py (1 hunks)
  • app/prompt/core/multiple.py (1 hunks)
  • app/prompt/core/ox.py (1 hunks)
  • app/router/generate_router.py (2 hunks)
  • app/service/explanation_service.py (1 hunks)
  • app/service/generate_service.py (5 hunks)
  • app/util/timing.py (1 hunks)
  • requirements.txt (1 hunks)


@coderabbitai coderabbitai bot mentioned this pull request Jan 21, 2026
