Succession: v 1.4.4 (#75) #76
Caution: Review failed. The pull request is closed.

Walkthrough

This pull request refactors the quiz and explanation generation architecture from a Bedrock-based asynchronous workflow coordinated via Redis pub/sub to a simpler GPT API batch-processing model. It removes Bedrock-specific adapters, introduces new batch and single-request adapters for OpenAI, restructures prompt templates from procedural guidance to schema-focused specifications, and updates service layers to use the new batch-processing flow with references support.

Changes
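As a rough illustration of the batch flow the walkthrough describes (chunking source text and building one request per chunk), the sketch below shows how the service layer might prepare its batch. The function names, chunk size, and request payload shape are assumptions for illustration, not the PR's actual code.

```python
from typing import List


def chunk_text(text: str, max_chars: int = 2000) -> List[str]:
    """Split source text into fixed-size chunks for per-chunk quiz generation.

    A simple character-based split; the real service may chunk on
    sentence or token boundaries instead.
    """
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]


def build_batch_requests(chunks: List[str], system_prompt: str) -> List[dict]:
    """Build one chat-completion request payload per chunk."""
    return [
        {
            "messages": [
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": chunk},
            ]
        }
        for chunk in chunks
    ]
```

Each payload can then be handed to the batch adapter, which fans the requests out concurrently.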
Sequence Diagram

```mermaid
sequenceDiagram
    participant Router
    participant Service as ExplanationService
    participant Adapter as request_single.py
    participant GPT as OpenAI Client
    participant Response as Response API
    Router->>Service: generate_specific_explanation(request)
    activate Service
    Service->>Service: Build selection_text & prompt
    Service->>Adapter: request_responses_output_text(gpt_content)
    activate Adapter
    Adapter->>GPT: get_gpt_client()
    GPT-->>Adapter: OpenAI client
    Adapter->>Response: Send with model, messages, tools
    activate Response
    Response-->>Adapter: resp with output_text or output[]
    deactivate Response
    Adapter->>Adapter: Extract text (fallback traversal)
    Adapter-->>Service: combined_text
    deactivate Adapter
    Service->>Service: Create references list
    Service-->>Router: SpecificExplanationResponse
    deactivate Service
    Router-->>Router: Return response
```
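The "extract text (fallback traversal)" step in this diagram can be sketched as a small pure function: prefer the Responses API's convenience `output_text` field, and fall back to walking `output[]` items when it is absent. The function name and exact traversal are assumptions based on the diagram, not the PR's actual implementation.

```python
def extract_output_text(resp) -> str:
    """Return the generated text from a Responses API result.

    Prefers the convenience `output_text` attribute; if it is missing
    or empty, falls back to walking `resp.output` and joining the
    `text` of each content part (the diagram's fallback traversal).
    """
    text = getattr(resp, "output_text", None)
    if text:
        return text
    parts = []
    for item in getattr(resp, "output", None) or []:
        for content in getattr(item, "content", None) or []:
            value = getattr(content, "text", None)
            if value:
                parts.append(value)
    return "".join(parts)
```

Because it only reads attributes, the function works the same on a real SDK response object or a stub in tests.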
```mermaid
sequenceDiagram
    participant Router
    participant Service as GenerateService
    participant Adapter as request_batch.py
    participant ThreadPool as asyncio.to_thread
    participant GPT as OpenAI Client
    Router->>Service: generate(generate_request)
    activate Service
    Service->>Service: Chunk text, enforce schema
    Service->>Service: Build batch requests[]
    Service->>Adapter: request_text_batch(requests)
    activate Adapter
    par Concurrent Processing
        Adapter->>ThreadPool: to_thread(request_chat_completion_text)
        ThreadPool->>GPT: Chat completion API call
        GPT-->>ThreadPool: response text
        ThreadPool-->>Adapter: result
    end
    Adapter->>Adapter: Aggregate results
    Adapter-->>Service: List[Optional[str]]
    deactivate Adapter
    Service->>Service: Parse, filter, map to GeneratedResult
    Service-->>Router: List[ProblemResponse]
    deactivate Service
    Router-->>Router: Return quiz generation
```
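The concurrent fan-out in this diagram (blocking chat-completion calls offloaded via `asyncio.to_thread`, aggregated into `List[Optional[str]]`) could look roughly like the sketch below. The signature and error handling are assumptions inferred from the diagram; only the name `request_text_batch` comes from the walkthrough.

```python
import asyncio
from typing import Callable, List, Optional


async def request_text_batch(
    requests: List[dict],
    call_model: Callable[[dict], str],
) -> List[Optional[str]]:
    """Run blocking model calls concurrently on worker threads.

    Each request is dispatched with asyncio.to_thread; a failed call
    yields None in its slot, so one bad chunk does not sink the batch
    and the result list stays aligned with the input order.
    """
    async def one(req: dict) -> Optional[str]:
        try:
            return await asyncio.to_thread(call_model, req)
        except Exception:
            return None

    return list(await asyncio.gather(*(one(r) for r in requests)))
```

The service layer can then zip results back to their source chunks and drop the `None` entries before mapping to `GeneratedResult`.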
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Areas requiring extra attention:
📜 Recent review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (17)
📢 Description
Please give a brief description of this Pull Request!
✅ Checklist
Summary by CodeRabbit
New Features
Bug Fixes
Refactor