@GulSauce (Member) commented Jan 5, 2026

📢 Description

Please briefly describe this Pull Request!

✅ Checklist

  • List the items reviewers should check!

Summary by CodeRabbit

  • New Features

    • Added batch request processing to handle multiple requests concurrently with configurable timeout support (40-second default)
  • Bug Fixes

    • Improved timeout error handling with appropriate HTTP error responses
    • Enhanced error reporting when operations complete without results
  • Improvements

    • Optimized API client retry behavior for more predictable request handling



coderabbitai bot commented Jan 5, 2026

Caution

Review failed

The pull request is closed.

📝 Walkthrough


The pull request introduces timeout handling and error management across the OpenAI request pipeline. It adds timeout support for API calls, implements concurrent batch request processing, disables automatic retries on the OpenAI client, and validates response accumulation with appropriate HTTP error signaling.
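As a rough sketch of the single-request path described above: an API call is wrapped so that a timeout becomes an HTTP 429 response. This is illustrative only; `asyncio.wait_for` and a minimal `HTTPException` class stand in for the real OpenAI client call (which the PR invokes with `timeout=TIME_OUT` and catches via `APITimeoutError`) and for `fastapi.HTTPException`.

```python
import asyncio
import os

# Mirrors the env-driven setting the PR reads (default 40 seconds).
TIME_OUT = float(os.getenv("TIME_OUT", "40"))

class HTTPException(Exception):
    """Minimal stand-in for fastapi.HTTPException."""
    def __init__(self, status_code: int, detail: str):
        super().__init__(detail)
        self.status_code = status_code
        self.detail = detail

async def call_model(prompt: str) -> str:
    # Placeholder for the real call, e.g.
    # client.chat.completions.create(..., timeout=TIME_OUT)
    await asyncio.sleep(0)
    return f"echo: {prompt}"

async def request_text(prompt: str, timeout: float = TIME_OUT) -> str:
    try:
        return await asyncio.wait_for(call_model(prompt), timeout=timeout)
    except asyncio.TimeoutError:
        # The PR maps the SDK's APITimeoutError to HTTP 429 the same way.
        raise HTTPException(429, "Upstream request timed out")
```

Mapping timeouts to 429 (rather than 500) signals the caller that retrying later may succeed.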

Changes

| Cohort / File(s) | Summary |
|---|---|
| Timeout handling in adapters<br/>`app/adapter/request_single.py`, `app/adapter/request_batch.py` | Both files now load environment variables and read a `TIME_OUT` setting (default 40 s). Single requests wrap the API call in try/except, catch `APITimeoutError`, and raise `HTTPException` 429. Batch requests handle timeouts the same way during concurrent processing. |
| Batch processing capability<br/>`app/adapter/request_batch.py` | New public function `request_text_batch(requests: List[dict]) -> List[Optional[str]]` executes multiple requests concurrently and returns the results as a list. |
| OpenAI client configuration<br/>`app/client/oepn_ai.py` | Client initialization now sets `max_retries=0`, disabling automatic retry behavior. |
| Service-level validation<br/>`app/service/generate_service.py` | Added validation to raise `HTTPException` 429 with a Korean error message if the generated results list is empty after processing. |
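The batch adapter's `List[Optional[str]]` return type suggests per-request fault isolation: one failed request yields `None` rather than cancelling its siblings. A minimal sketch of that pattern, with a hypothetical `request_text` placeholder standing in for the single-request adapter:

```python
import asyncio
from typing import List, Optional

async def request_text(req: dict) -> str:
    # Placeholder for the single-request adapter; in the real code
    # this may raise on timeout or API error.
    await asyncio.sleep(0)
    return req.get("prompt", "")

async def request_text_batch(requests: List[dict]) -> List[Optional[str]]:
    # Fire all requests concurrently; return_exceptions=True keeps one
    # failure from cancelling the rest of the batch.
    results = await asyncio.gather(
        *(request_text(r) for r in requests), return_exceptions=True
    )
    # Failed requests come back as exception objects; map them to None.
    return [r if isinstance(r, str) else None for r in results]
```

Whether the actual PR uses `asyncio.gather` or another concurrency primitive is not stated in the summary; this is one common way to get the described behavior.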

Sequence Diagram(s)

sequenceDiagram
    participant Service
    participant Adapter as Adapter<br/>(Batch or Single)
    participant OpenAI as OpenAI Client
    participant Client as Client<br/>(max_retries=0)
    
    Service->>Adapter: request_text_batch() or request_text()
    Adapter->>Client: Initialize with timeout
    Client->>OpenAI: chat.completions.create<br/>(timeout=TIME_OUT)
    
    alt Request Succeeds
        OpenAI-->>Client: Response
        Client-->>Adapter: Result text
        Adapter-->>Service: List[Optional[str]]
    else API Timeout
        OpenAI-->>Client: APITimeoutError
        Client-->>Adapter: APITimeoutError
        Adapter-->>Adapter: Log error
        Adapter-->>Service: HTTPException 429
    end
    
    Service->>Service: Validate results<br/>(not empty)
    alt Results Empty
        Service-->>Service: Raise HTTPException 429<br/>(Korean message)
    end
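The final validation step in the diagram, where the service rejects an empty result set with HTTP 429, can be sketched as below. The `validate_results` name and the English detail message are illustrative (the PR's actual message is in Korean); `HTTPException` again stands in for FastAPI's. With `max_retries=0` on the client (e.g. `OpenAI(max_retries=0)`), timeouts fail fast, so this check is the backstop when every request in a batch came back empty.

```python
from typing import List, Optional

class HTTPException(Exception):
    """Stand-in for fastapi.HTTPException."""
    def __init__(self, status_code: int, detail: str):
        super().__init__(detail)
        self.status_code = status_code
        self.detail = detail

def validate_results(results: List[Optional[str]]) -> List[str]:
    # Drop entries from failed requests, then reject an empty batch.
    texts = [r for r in results if r is not None]
    if not texts:
        # The PR raises this with a Korean detail message; English shown here.
        raise HTTPException(429, "No results were generated; please try again later.")
    return texts
```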

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

🐰 A rabbit's ode to error-catching grace:
Timeouts now tamed with env config's embrace,
Batch requests dance in concurrent delight,
No retries allowed—we're lean and tight,
When silence falls, we answer with care. ✨


📜 Recent review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between da31094 and db592b2.

📒 Files selected for processing (4)
  • app/adapter/request_batch.py
  • app/adapter/request_single.py
  • app/client/oepn_ai.py
  • app/service/generate_service.py


@GulSauce merged commit d38bdaf into develop on Jan 5, 2026
1 of 2 checks passed