
Conversation


@GulSauce GulSauce commented Jan 25, 2026

📢 Description

Please briefly describe this Pull Request!

✅ Checklist

  • Please list what the reviewer should check!

Summary by CodeRabbit

Release Notes

  • New Features

    • Quiz generation now uses streaming for progressive results delivery
  • Improvements

    • Enhanced error handling during quiz generation to prevent service disruptions
    • Optimized quiz distribution across document sections for more balanced coverage
    • Refined request processing for improved performance


@GulSauce GulSauce merged commit d749fd2 into develop Jan 25, 2026
1 check was pending

coderabbitai bot commented Jan 25, 2026

Caution

Review failed

The pull request is closed.

📝 Walkthrough

This pull request refactors GPT request handling and quiz generation architecture. It consolidates the OpenAI client singleton into the adapter layer, removes batch processing in favor of streaming per-chunk responses, eliminates dependency injection from the router, updates chunk creation logic to accept page numbers directly, and introduces JSON schema validation for enforcing strict property constraints.

Changes

Cohort / File(s) Summary
GPT Client Consolidation
app/adapter/request_to_gpt.py, app/client/oepn_ai.py
Moves get_gpt_client() singleton factory from dedicated client module into adapter layer; removes old wrapper functions and adds new request_to_gpt_returning_text() with per-call timeout support via client.with_options().
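The consolidation described above can be sketched as follows. This is a minimal illustration, not the project's actual code: FakeGPTClient stands in for the real OpenAI client, and the real adapter would call the OpenAI SDK (whose clients do expose a with_options() method for per-call settings).

```python
from functools import lru_cache

class FakeGPTClient:
    """Stand-in for the real OpenAI client (hypothetical)."""

    def __init__(self, timeout=None):
        self.timeout = timeout

    def with_options(self, *, timeout):
        # Mirrors the SDK pattern: return a copy carrying a per-call
        # timeout, leaving the shared singleton untouched.
        return FakeGPTClient(timeout=timeout)

@lru_cache(maxsize=1)
def get_gpt_client() -> FakeGPTClient:
    # Created once, then reused by every request in the adapter layer.
    return FakeGPTClient()

def request_to_gpt_returning_text(prompt: str, timeout: float = 30.0) -> str:
    client = get_gpt_client().with_options(timeout=timeout)
    # The real adapter would issue the GPT request here.
    return f"(timeout={client.timeout}) response to: {prompt}"
```

Caching the factory with lru_cache keeps a single shared client while with_options scopes the timeout to one call, which is the gist of the consolidation.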
Batch Request Removal
app/adapter/request_batch.py
Deletes async batch helpers (request_responses_output_text, request_responses_batch) that previously processed multiple GPT requests concurrently.
Service Layer Updates
app/service/explanation_service.py, app/service/generate_service.py
Updates the explanation service to use the new request adapter; fully refactors the generate service to stream JSON results per chunk asynchronously instead of batching, adds dynamic model selection (gpt-4.1-mini vs gpt-5-mini), introduces a process_single_chunk() helper, and reworks problem/quiz assembly.
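The per-chunk streaming flow might look like the following sketch. All names here (process_single_chunk, stream_quiz_json) are illustrative; the real service wraps GPT calls and feeds the generator into a StreamingResponse, and the error handling shown is one way to keep a single failed chunk from disrupting the stream, as the release notes describe.

```python
import asyncio
import json

async def process_single_chunk(chunk: str) -> dict:
    # Placeholder for the real per-chunk GPT request.
    await asyncio.sleep(0)
    return {"chunk": chunk, "quiz": f"Q about {chunk}"}

async def stream_quiz_json(chunks):
    # Emit one JSON line per chunk as soon as it finishes,
    # instead of waiting for a whole batch.
    tasks = [asyncio.create_task(process_single_chunk(c)) for c in chunks]
    for earliest in asyncio.as_completed(tasks):
        try:
            result = await earliest
        except Exception as exc:
            # One bad chunk becomes an error record, not a dead stream.
            result = {"error": str(exc)}
        yield json.dumps(result, ensure_ascii=False) + "\n"

async def collect(chunks):
    # Helper for demonstration: drain the stream into a list.
    return [line async for line in stream_quiz_json(chunks)]
```

Because asyncio.as_completed yields results in completion order, clients start receiving quizzes before the slowest chunk returns.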
Router Dependency Injection Removal
app/router/generate_router.py
Removes the Depends() pattern and its injection factory functions; changes the generate endpoint to return a StreamingResponse and to call service methods directly.
Chunk & Schema Utilities
app/util/create_chunks.py, app/util/gpt_utils.py
Updates chunk creation to accept page_numbers: List[int] instead of page_count, adds gpt_content field to ChunkInfo, simplifies distribution logic; introduces new enforce_additional_properties_false() utility for recursive JSON schema validation.
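The "more balanced coverage" item in the release notes suggests spreading the requested quiz count evenly across chunks. A minimal sketch of such a distribution (the function name and exact rule are assumptions, not the project's code):

```python
def distribute_quiz_counts(total: int, num_chunks: int) -> list[int]:
    """Spread `total` quizzes as evenly as possible across chunks;
    the first `total % num_chunks` chunks get one extra."""
    base, extra = divmod(total, num_chunks)
    return [base + (1 if i < extra else 0) for i in range(num_chunks)]
```

This guarantees the per-chunk counts differ by at most one and always sum to the requested total.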
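A recursive schema hardener like enforce_additional_properties_false() can be sketched as below. This is a plausible reconstruction from the description, not the actual utility in app/util/gpt_utils.py; OpenAI structured outputs do require "additionalProperties": false on every object in the schema, which is presumably what it enforces.

```python
def enforce_additional_properties_false(schema: dict) -> dict:
    """Recursively set additionalProperties: false on every object
    node in a JSON schema (sketch of the described utility)."""
    if isinstance(schema, dict):
        if schema.get("type") == "object":
            schema["additionalProperties"] = False
        for value in schema.values():
            if isinstance(value, dict):
                enforce_additional_properties_false(value)
            elif isinstance(value, list):
                for item in value:
                    if isinstance(item, dict):
                        enforce_additional_properties_false(item)
    return schema
```

Walking both dict values and list items covers nested objects under properties, items, anyOf, and similar keywords.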

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

A rabbit hops through refactored code,
Per-chunk processing down the road,
Streams of JSON, async and free,
Singleton clients dance with glee—
Dependency gone, now logic flows,
Schema blessed with truthful rows! 🐰✨


