feat: add Image generation support #166
Conversation
- add OPENAI_API_KEY / OPENAI_IMG_MODEL config and use OpenAI image endpoint
- parse `ботяра ... image:` (and fallback `bot : image:`) prompt into extract_image_prompt(...)
- implement generate_image_and_send(...) to call openai.Image.create(...) and send photo (b64 / URL)
- keep existing bot mention + LLM paths intact for text responses
- handle missing API key and generation failure with user-friendly messages
- replace OpenAI image path with xai_sdk.AsyncClient.image.sample
- add GROK_IMG_MODEL env var (default grok-imagine-image)
- drop OPENAI_API_KEY / OPENAI_IMG_MODEL usage for image generation
- keep existing grok text path (AsyncOpenAI) unchanged
- add xai-sdk dependency in requirements
- add validation and user error messages:
  - missing GROK_API_KEY
  - missing xai_sdk
  - failed xAI client init
- handle both response.url and response.image (base64)
- preserve existing Telegram reply flows and error fallback
- Add asyncio.wait_for timeout (30s) to prevent thread pool exhaustion
- Sanitize prompt length (max 1000 chars) to limit API abuse
- Fix misleading response parsing logic (redundant `image_url = None`)
- Merge xai_sdk/xai_client guard into single check
- Handle asyncio.TimeoutError separately with user-facing message
- Move `import base64` to module level
- Extract IMAGE_CAPTION and IMAGE_TIMEOUT_SEC as constants
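The timeout and sanitization changes described above can be sketched as follows. This is a minimal illustration, not the bot's actual code: the SDK call is stubbed out, and the helper names (`sanitize_prompt`, `generate_with_timeout`) are assumptions; only `IMAGE_TIMEOUT_SEC` and the 30 s / 1000-char values come from the commit message.

```python
import asyncio

IMAGE_TIMEOUT_SEC = 30   # matches the 30 s budget above
MAX_PROMPT_LEN = 1000    # matches the 1000-char cap above


def sanitize_prompt(prompt: str) -> str:
    """Trim whitespace and clamp length to limit API abuse."""
    return prompt.strip()[:MAX_PROMPT_LEN]


async def generate_with_timeout(blocking_call, prompt: str):
    """Run a blocking SDK call in the thread pool, bounded by a timeout
    so a hung request cannot exhaust the pool."""
    return await asyncio.wait_for(
        asyncio.to_thread(blocking_call, sanitize_prompt(prompt)),
        timeout=IMAGE_TIMEOUT_SEC,
    )
```

The caller can then catch `asyncio.TimeoutError` separately and send the user-facing timeout message, as the commit describes.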
…Lite persistence

Add RPM and RPD rate limits for image generation (IMG_GEN_RPM_LIMIT, IMG_GEN_RPD_LIMIT), enforced in generate_image_and_send(). Limits are loaded from and persisted to SQLite alongside existing LLM rate limit data. Includes automatic schema migration for existing databases.
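The in-memory RPM/RPD bookkeeping behind this commit might look like the sketch below. The dict shapes, default limit values, and function name are illustrative assumptions; per the commit, the real limits come from the IMG_GEN_RPM_LIMIT / IMG_GEN_RPD_LIMIT env vars and the state is persisted to SQLite separately.

```python
import time
from collections import defaultdict
from typing import Optional

IMG_GEN_RPM_LIMIT = 3    # illustrative defaults; real values come from env vars
IMG_GEN_RPD_LIMIT = 20

img_gen_rate_limit = defaultdict(list)   # user_id -> timestamps within last 60 s
img_gen_daily_limit = defaultdict(lambda: {"date": None, "count": 0})


def check_img_gen_limits(user_id: int, now: Optional[float] = None) -> bool:
    """Return True if the user is under both the per-minute and per-day limits."""
    now = time.time() if now is None else now
    # Sliding 60-second window enforces RPM
    window = [ts for ts in img_gen_rate_limit[user_id] if now - ts < 60]
    img_gen_rate_limit[user_id] = window
    if len(window) >= IMG_GEN_RPM_LIMIT:
        return False
    # Calendar-day counter enforces RPD; resets when the date rolls over
    today = time.strftime("%Y-%m-%d", time.gmtime(now))
    daily = img_gen_daily_limit[user_id]
    if daily["date"] != today:
        daily["date"], daily["count"] = today, 0
    return daily["count"] < IMG_GEN_RPD_LIMIT
```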
Walkthrough

Adds xAI Grok image-generation: prompt extraction, async generation flow, per-user RPM/RPD rate limits with DB persistence, xai-sdk dependency, and an extensive README rework covering deployments, AI/LLM config, Instagram cookies, and troubleshooting.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as Telegram User
    participant Bot as Bot Handler
    participant Extractor as Prompt Extractor
    participant RateMgr as Rate Limiter
    participant ImageGen as Image Generator
    participant xAI as xAI Grok API
    participant DB as Database
    participant Telegram as Telegram API

    User->>Bot: Send message mentioning bot (may include image prompt)
    Bot->>Extractor: extract_image_prompt(message_text)
    alt Image prompt detected
        Bot->>RateMgr: check RPM/RPD for user
        alt Rate limit exceeded
            Bot->>User: send rate-limit error
        else Rate OK
            Bot->>ImageGen: generate_image_and_send(update, prompt)
            ImageGen->>RateMgr: add tentative timestamp / increment daily count
            ImageGen->>xAI: request image generation (timeout applied)
            alt xAI returns image
                xAI-->>ImageGen: image bytes
                ImageGen->>Telegram: send image (with caption)
                ImageGen->>DB: persist img-gen timestamps and daily count
                DB-->>ImageGen: confirm save
            else xAI failure
                ImageGen->>RateMgr: revert tentative timestamp/increment
                ImageGen->>User: send failure message
            end
        end
    else Not an image prompt
        Bot->>Bot: continue LLM handling
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 8
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@README.md`:
- Around line 252-278: Update the README usage examples to satisfy markdownlint
by adding language identifiers to each fenced code block (e.g., use "text" or
"bash" for the URL examples and the image prompt) and correct the inline code
span for the AI chat trigger so it reads `ботяра,` without the extra inner
space; locate the examples showing the URL list, the "Prefix with `**`" block,
the "Generate Image (Grok only)" block, and the "Check Bot Status" / AI chat
line that contains the inline code to make these edits.
- Around line 14-20: The TOC entry "[Configuration](`#configuration`)" is pointing
to a missing heading; fix it by either adding a matching heading "##
Configuration" to the README (so the anchor `#configuration` exists) or by
updating the TOC link to exactly match the existing heading text/anchor
currently used (e.g., change the link target to the existing heading slug).
Ensure the heading text and the TOC anchor are identical so the link resolves.
- Around line 119-124: Fix the misspelling "promtps" to "prompts" in the README
entries for the environment variables: update the description for
MAX_CONTEXT_MESSAGES from "LLM Messages (promtps) to remember per user" to use
"prompts", and update the Cleanup section where "Remove LLM context messages
(promtps)..." appears (related to USER_CLEANUP_TTL_DAYS and
USER_CLEANUP_INTERVAL_HOURS) to use "prompts" as well so both occurrences read
correctly.
In `@src/db_storage.py`:
- Around line 44-47: The current broad except in the block that runs
cursor.execute("ALTER TABLE user_data ADD COLUMN {col} {definition}") swallows
all sqlite3.OperationalError instances; change it to catch
sqlite3.OperationalError as e and only suppress the error when the message
explicitly indicates the column already exists/duplicate (e.g. check
"'duplicate' in str(e).lower()" or "'already exists' in str(e).lower()"),
otherwise re-raise the exception so real migration failures surface.
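The narrowed migration guard this comment asks for might look like the following sketch (the helper name and signature are illustrative; only the `ALTER TABLE` pattern and the "duplicate" / "already exists" checks come from the comment):

```python
import sqlite3


def add_column_if_missing(conn, table: str, col: str, definition: str) -> None:
    """Schema migration helper: suppress only 'column already exists' errors,
    re-raising anything else so real migration failures surface."""
    try:
        conn.execute(f"ALTER TABLE {table} ADD COLUMN {col} {definition}")
    except sqlite3.OperationalError as e:
        msg = str(e).lower()
        if "duplicate" not in msg and "already exists" not in msg:
            raise
```

Calling it twice for the same column is a no-op, while unrelated `OperationalError`s (e.g. a locked or malformed database) still propagate.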
In `@src/main.py`:
- Around line 445-449: The image-fallback branch using extract_image_prompt is
unreachable because the surrounding routing only executes when
is_bot_mentioned() (which only checks for "ботяра" and bot_health) is true;
update routing so extract_image_prompt(...) is evaluated before or independent
of is_bot_mentioned(), or extend is_bot_mentioned() to recognize the
"bot:image:" pattern; specifically, move the call to
extract_image_prompt(message_text) earlier in the handler (or add logic in
is_bot_mentioned to detect "bot:image:") so generate_image_and_send(update,
image_prompt) can run when a "bot:image:" prompt is present.
- Around line 933-935: The current save is writing defaulted image-rate fields
(img_gen_rate_limit, img_gen_daily_limit) for user_ids that only exist via
defaultdict materialization; change the save logic to avoid using defaultdict
direct indexing—use dict.get(user_id) or check membership (e.g., "if user_id in
img_gen_rate_limit" and "if user_id in img_gen_daily_limit") and only include
those fields when present, or skip writing entries whose values equal the
in-memory default sentinel; update the code paths that serialize/persist LLM
state (references: img_gen_rate_limit and img_gen_daily_limit) to guard against
creating/writing defaulted counts/timestamps.
- Line 188: The function signature for extract_image_prompt uses PEP 604 union
syntax which is incompatible with Python 3.9; change the return annotation from
"str | None" to "Optional[str]" and add "from typing import Optional" to the
imports (or extend an existing typing import) so the function definition reads
with Optional[str] instead of the pipe-style union.
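The annotation change amounts to the following (function body trimmed to a placeholder; the real implementation does regex matching):

```python
from typing import Optional

# PEP 604 unions ("str | None") are only valid syntax on Python 3.10+;
# Optional[str] expresses the same type and also works on 3.9.
def extract_image_prompt(message_text: str) -> Optional[str]:
    return message_text or None
```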
- Around line 77-79: Replace the blocking client and to_thread usage by
constructing an asynchronous client and calling its async method directly:
change instantiation xai_client = xai_sdk.Client(api_key=GROK_API_KEY) to
xai_client = xai_sdk.AsyncClient(api_key=GROK_API_KEY,
timeout=IMAGE_TIMEOUT_SEC) and remove asyncio.to_thread around image.sample;
await xai_client.image.sample(model="grok-imagine-image", ...) directly. Add
exception handling for grpc.aio.AioRpcError and check for status ==
grpc.StatusCode.DEADLINE_EXCEEDED (or AioRpcError indicating deadline exceeded)
to handle timeouts instead of relying on asyncio.wait_for. Ensure references to
xai_client and image.sample in usage sites are updated to the async pattern.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: b8507d87-195c-4138-8292-664ad0167e82
📒 Files selected for processing (4)
- README.md
- src/db_storage.py
- src/main.py
- src/requirements.txt
…nd persistence
This commit addresses the following code review findings:
1. db_storage.py - Improve SQLite error handling
- Changed broad except clause to specifically catch only "duplicate" and
"already exists" errors during ALTER TABLE migrations
- Re-raise other OperationalError instances to surface real migration failures
2. main.py - Fix Python 3.9 compatibility
- Added missing typing import: from typing import Optional
- Changed extract_image_prompt return type from "str | None" (PEP 604, Python 3.10+)
to "Optional[str]" for Python 3.9+ compatibility
3. main.py - Convert xAI image client to async
- Changed xai_sdk initialization from blocking Client to AsyncClient
- Updated image generation call from asyncio.to_thread wrapper to direct async/await
- Removed redundant asyncio.wait_for timeout wrapper (AsyncClient already has timeout)
- Set IMAGE_TIMEOUT_SEC when instantiating AsyncClient
4. main.py - Extend image command routing logic
- Ensures image generation prompts are properly recognized by the bot mention check
- Prevents image-related commands from being ignored due to routing logic
5. main.py - Fix defaultdict persistence issue
- Modified save_to_db() to check membership before accessing img_gen_* dicts
- Prevents writing materialized defaultdict entries for users who haven't used image generation
- Only includes img_gen_rate_limit_timestamps and img_gen_daily_* fields when explicitly set
Actionable comments posted: 4
♻️ Duplicate comments (2)
src/main.py (1)
189-207: ⚠️ Potential issue | 🟠 Major — Move image-command parsing ahead of the mention gate.

`extract_image_prompt()` now supports `bot:image:`, but this branch only runs after `is_bot_mentioned()`, which still recognizes only `ботяра` and `bot_health`. That makes the English fallback unreachable, and `ботяра, image:` with no payload falls through to normal bot/LLM handling instead of hitting the validation in `generate_image_and_send()`.

Suggested fix:

```diff
 def extract_image_prompt(message_text: str) -> Optional[str]:
     """Extract image generation prompt for commands like 'ботяра, image: ...'."""
     if not message_text:
         return None
-    lower = message_text.lower()
     # Match bot command for image generation: ботяра, image: prompt
-    match = re.search(r"ботяра[^\w\d]*image\s*:\s*(.+)", lower)
+    match = re.search(r"ботяра[^\w\d]*image\s*:\s*(.*)", message_text, re.IGNORECASE)
     if match:
-        prompt = match.group(1).strip()
-        return prompt or None
+        return match.group(1).strip()
     # Fallback for english trigger
-    match = re.search(r"bot\s*:\s*image\s*:\s*(.+)", lower)
+    match = re.search(r"bot\s*:\s*image\s*:\s*(.*)", message_text, re.IGNORECASE)
     if match:
-        prompt = match.group(1).strip()
-        return prompt or None
+        return match.group(1).strip()
     return None
```

```diff
     debug("USE_LLM setting: %s", USE_LLM)
     debug("LLM_PROVIDER: %s", LLM_PROVIDER)

+    image_prompt = extract_image_prompt(message_text)
+    if image_prompt is not None:
+        debug("Bot image command detected with prompt: %s", image_prompt)
+        await generate_image_and_send(update, image_prompt)
+        return
+
     if bot_mentioned:
         cleaned_text = message_text.strip().lower()
         # Health check always takes priority, even with LLM enabled
         if cleaned_text.startswith("bot_health"):
@@
-        image_prompt = extract_image_prompt(message_text)
-        if image_prompt:
-            debug("Bot image command detected with prompt: %s", image_prompt)
-            await generate_image_and_send(update, image_prompt)
-            return
-
         if USE_LLM:
```

Also applies to: 431-446
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main.py` around lines 189-207: The image-command parsing in extract_image_prompt() (including the English fallback pattern "bot:image:") is currently unreachable because it runs only after is_bot_mentioned(), which only recognizes 'ботяра' and 'bot_health'; move the call to extract_image_prompt() to run before the mention-gate check so the English fallback and empty-prompt validation in generate_image_and_send() are exercised; specifically, call extract_image_prompt() earlier in the message handling flow (before is_bot_mentioned() and before branching to LLM handling) and ensure generate_image_and_send() still receives and validates the returned prompt (keeping extract_image_prompt(), is_bot_mentioned(), and generate_image_and_send() names to locate changes).

README.md (1)
251-276: ⚠️ Potential issue | 🟡 Minor — Fix the remaining markdownlint violations in the usage examples.

The fences at lines 251, 261, and 275 still need language identifiers, and the inline trigger at line 271 still has a trailing space inside the code span.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 251 - 276, Add language identifiers to the three fenced code blocks used in the usage examples (the block with the three short/reel/video URLs, the "Prefix with **" YouTube example, and the "Generate Image" Grok example) — e.g., use ```text or ```bash — and remove the trailing space inside the inline trigger code span so the AI chat trigger reads `ботяра,` (no space before the closing backtick).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@README.md`:
- Around line 109-120: Add documentation for the new GROK_IMG_MODEL environment
variable (used in src/main.py) alongside the other AI/LLM env vars and in the
.env example so operators can discover the image-model override; specifically,
add a bullet describing GROK_IMG_MODEL (purpose: override Grok image generation
model, optional, default value if any) near the existing LLM/IMG env var list
and mirror it in the .env example section referenced around lines 127-142.
In `@src/main.py`:
- Around line 64-65: The IMAGE_CAPTION constant and the hard-coded Ukrainian
strings in the image setup/error responses must be replaced with localized
messages using the existing localization mechanism (use the same translation
helper used elsewhere — e.g., translate(), i18n.t(), or messages[LANGUAGE] —
instead of hard-coded text). Replace IMAGE_CAPTION with a call that returns the
caption for the active LANGUAGE and update all image-related responses (the
constants and the setup/error reply strings referenced by the image generation
flow) to use the localized keys (e.g., "image.caption", "image.setup_error",
"image.timeout") so English and other LANGUAGES produce fully localized output;
ensure you reference the same symbols where the current IMAGE_CAPTION and image
error texts are used so callers don't need to change.
- Around line 89-95: cleanup_stale_users() currently only evicts LLM/context
maps leaving img_gen_rate_limit and img_gen_daily_limit to grow; update
cleanup_stale_users() to also remove entries for the same user IDs from
img_gen_rate_limit and img_gen_daily_limit whenever you evict LLM maps or
DB-deleted users, mirroring the existing eviction logic (ensure you clear
per-minute list in img_gen_rate_limit and the per-day record in
img_gen_daily_limit); if cleanup iterates over a set of maps, include these two
maps in that iteration to keep behavior consistent with LLM_RPM/LLM_RPD cleanup.
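If cleanup iterates over a set of maps as suggested, a generic eviction helper keeps the LLM and image-generation maps in lockstep. This is a sketch; the helper name and signature are assumptions, and the real function also handles DB-deleted users:

```python
def cleanup_stale_users(stale_ids, *maps) -> None:
    """Evict every per-user structure for each stale user so no map
    (LLM context or image-gen rate/daily limits) grows without bound."""
    for user_id in stale_ids:
        for m in maps:
            m.pop(user_id, None)  # no-op if the user never touched this map
```

Passing `conversation_context`, `img_gen_rate_limit`, and `img_gen_daily_limit` together then guarantees one eviction path covers them all.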
- Around line 306-320: The background task save_img_gen_to_db rebuilds the full
row from a potentially stale load_user_data snapshot and calls
db_storage.save_user_data which does INSERT OR REPLACE, risking overwriting
concurrent updates (e.g., conversation_context from an LLM save). Change this to
perform a targeted update instead of read-then-rewrite: in save_img_gen_to_db
update only the image-rate-limit fields (use a new
db_storage.update_user_image_limits or modify save_user_data to accept optional
fields and execute an UPDATE that writes only img_gen_rate_limit and
img_gen_daily_limit columns) or wrap the operation in a DB transaction/SELECT
... FOR UPDATE and merge in-SQL so other columns aren’t clobbered; reference
functions save_img_gen_to_db, db_storage.load_user_data,
db_storage.save_user_data, img_gen_rate_limit, and img_gen_daily_limit when
locating and changing the code.
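A targeted update along these lines avoids the read-then-rewrite race. This is a sketch: the column names (`img_gen_rate_limit_timestamps`, `img_gen_daily_count`, `img_gen_daily_date`) follow the schema mentioned later in the review and are assumptions about the actual table.

```python
import json
import sqlite3


def update_user_image_limits(conn, user_id, timestamps, daily_count, daily_date):
    """Write only the image-limit columns, leaving conversation_context
    and other fields untouched (no full-row INSERT OR REPLACE)."""
    conn.execute(
        """UPDATE user_data
           SET img_gen_rate_limit_timestamps = ?,
               img_gen_daily_count = ?,
               img_gen_daily_date = ?
           WHERE user_id = ?""",
        (json.dumps(timestamps or []), daily_count, daily_date, user_id),
    )
    conn.commit()
```

Because only the three image columns appear in the SET clause, a concurrent LLM-context save can no longer be clobbered by this write.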
---
Duplicate comments:
In `@README.md`:
- Around line 251-276: Add language identifiers to the three fenced code blocks
used in the usage examples (the block with the three short/reel/video URLs, the
"Prefix with **" YouTube example, and the "Generate Image" Grok example) — e.g.,
use ```text or ```bash — and remove the trailing space inside the inline trigger
code span so the AI chat trigger reads `ботяра,` (no space before the closing
backtick).
In `@src/main.py`:
- Around line 189-207: The image-command parsing in extract_image_prompt()
(including the English fallback pattern "bot:image:") is currently unreachable
because it runs only after is_bot_mentioned(), which only recognizes 'ботяра'
and 'bot_health'; move the call to extract_image_prompt() to run before the
mention-gate check so the English fallback and empty-prompt validation in
generate_image_and_send() are exercised; specifically, call
extract_image_prompt() earlier in the message handling flow (before
is_bot_mentioned() and before branching to LLM handling) and ensure
generate_image_and_send() still receives and validates the returned prompt
(keeping extract_image_prompt(), is_bot_mentioned(), and
generate_image_and_send() names to locate changes).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 3d1f4775-5f6d-48e3-bb39-89c8c885b6d0
📒 Files selected for processing (3)
- README.md
- src/db_storage.py
- src/main.py
🚧 Files skipped from review as they are similar to previous changes (1)
- src/db_storage.py
This commit addresses 3 additional code review findings:

1. Localize IMAGE_CAPTION and image-related error messages
   - Added get_image_caption() function returning localized captions
   - Converted all image error responses to a language-conditional pattern
   - Image setup errors (invalid prompt, missing API key, unavailable client), timeout, and general error messages now support Ukrainian and English
   - Maintains the consistent localization pattern used throughout the codebase

2. Add image generation maps to cleanup_stale_users()
   - Extended cleanup to remove img_gen_rate_limit and img_gen_daily_limit entries
   - Prevents unbounded memory growth from image generation tracking
   - Mirrors existing LLM rate limit cleanup for consistency
   - Ensures all user-related data is properly evicted on stale user cleanup

3. Fix race condition in save_img_gen_to_db() via targeted database update
   - Previous implementation: loaded the full user record, then INSERT OR REPLACE
   - Risk: concurrent LLM saves could overwrite conversation_context
   - Solution: added update_user_image_limits() method for targeted updates
   - New method updates only image-related columns without touching other fields
   - Eliminates data loss from concurrent database operations
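The language-conditional caption pattern from finding 1 might look like this sketch. The function name comes from the commit message; the `LANGUAGE` variable and both caption strings are illustrative assumptions, not the bot's actual wording:

```python
LANGUAGE = "uk"  # assumed module-level setting, as used elsewhere in the bot


def get_image_caption() -> str:
    """Return the image caption for the active language (strings illustrative)."""
    if LANGUAGE == "uk":
        return "Ось твоє зображення"
    return "Here is your image"
```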
Actionable comments posted: 1
♻️ Duplicate comments (1)
src/main.py (1)
211-215: ⚠️ Potential issue | 🟡 Minor — `bot:image:` fallback pattern is unreachable through current routing.

This English pattern requires the message to start with `bot:`, but `is_bot_mentioned()` (line 188) only triggers on `ботяра` and `bot_health`. Messages like `bot:image: sunset` won't reach `extract_image_prompt()`.

Either extend `is_bot_mentioned()` to recognize the `bot:image:` pattern, or remove this dead code:

Option 1: Extend `is_bot_mentioned()`

```diff
 def is_bot_mentioned(message_text: str) -> bool:
-    bot_trigger_words = ["ботяра", "bot_health"]
+    bot_trigger_words = ["ботяра", "bot_health", "bot:image:"]
     cleaned_text = message_text.strip().lower()
     for word in bot_trigger_words:
         if cleaned_text.startswith(word):
```

Option 2: Remove unreachable fallback

```diff
-    # Fallback for english trigger
-    match = re.search(r"bot\s*:\s*image\s*:\s*(.+)", lower)
-    if match:
-        prompt = match.group(1).strip()
-        return prompt or None
     return None
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main.py` around lines 211 - 215, The fallback regex in extract_image_prompt(r"bot\s*:\s*image\s*:\s*(.+)") is unreachable because is_bot_mentioned() only recognizes "ботяра" and "bot_health"; update is_bot_mentioned() to also detect the English "bot:image:" pattern (e.g., by adding a check for r"\bbot\s*:\s*image\s*:" or similar) so messages like "bot:image: sunset" reach extract_image_prompt(), or if you prefer to drop the unused branch, remove the fallback from extract_image_prompt() to avoid dead code—pick one approach and keep behavior/tests consistent.
🧹 Nitpick comments (2)
src/main.py (2)
340: Store task reference to prevent premature garbage collection.

The background task created here could be garbage-collected before completion. Store the reference to ensure the coroutine runs to completion.
🛠️ Proposed fix
```diff
-    asyncio.create_task(save_img_gen_to_db())
+    # Keep reference to prevent GC before completion
+    _save_task = asyncio.create_task(save_img_gen_to_db())
```

Or use a module-level set to track background tasks:

```python
# At module level
_background_tasks = set()

# In generate_image_and_send
task = asyncio.create_task(save_img_gen_to_db())
_background_tasks.add(task)
task.add_done_callback(_background_tasks.discard)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main.py` at line 340, The background task created with asyncio.create_task(save_img_gen_to_db()) may be garbage-collected before completion; update the caller (e.g., generate_image_and_send) to retain the task reference by adding it to a module-level set (e.g., _background_tasks = set()), create the task into a local variable (task = asyncio.create_task(save_img_gen_to_db())), add the task to _background_tasks, and attach task.add_done_callback(_background_tasks.discard) so completed tasks are removed and the coroutine is kept alive until finished.
968: Same issue: unstored task reference at line 968.

The `save_to_db` task in `respond_with_llm_message` has the same garbage collection risk. Apply consistent handling for both background save tasks.
Verify each finding against the current code and only fix it if needed. In `@src/main.py` at line 968, The background save_to_db task started inside respond_with_llm_message is created without holding a reference, risking GC cancellation; modify respond_with_llm_message to capture the task (e.g., assign asyncio.create_task(save_to_db()) to a variable or add it to a persistent collection like self._pending_tasks or a module-level set) so the task reference is retained until completion and optionally clean up completed tasks from that collection; ensure you use the same pattern you applied elsewhere for consistency.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/db_storage.py`:
- Around line 118-134: The update_user_image_limits method currently runs an
UPDATE that silently does nothing if no user row exists; modify
update_user_image_limits to upsert the image-limit fields: either execute an
INSERT ... ON CONFLICT(user_id) DO UPDATE setting img_gen_rate_limit_timestamps,
img_gen_daily_count, img_gen_daily_date, or after the current UPDATE check
cursor.rowcount and if zero run an INSERT that creates a new user row with
user_id and the image-limit fields (ensuring
json.dumps(img_gen_rate_limit_timestamps or []) is used for the timestamps).
Ensure this keeps the same parameter handling and still calls
self.conn.commit(), and reference update_user_image_limits and save_user_data
when implementing the fallback/consistency with existing user creation logic.
---
Duplicate comments:
In `@src/main.py`:
- Around line 211-215: The fallback regex in
extract_image_prompt(r"bot\s*:\s*image\s*:\s*(.+)") is unreachable because
is_bot_mentioned() only recognizes "ботяра" and "bot_health"; update
is_bot_mentioned() to also detect the English "bot:image:" pattern (e.g., by
adding a check for r"\bbot\s*:\s*image\s*:" or similar) so messages like
"bot:image: sunset" reach extract_image_prompt(), or if you prefer to drop the
unused branch, remove the fallback from extract_image_prompt() to avoid dead
code—pick one approach and keep behavior/tests consistent.
---
Nitpick comments:
In `@src/main.py`:
- Line 340: The background task created with
asyncio.create_task(save_img_gen_to_db()) may be garbage-collected before
completion; update the caller (e.g., generate_image_and_send) to retain the task
reference by adding it to a module-level set (e.g., _background_tasks = set()),
create the task into a local variable (task =
asyncio.create_task(save_img_gen_to_db())), add the task to _background_tasks,
and attach task.add_done_callback(_background_tasks.discard) so completed tasks
are removed and the coroutine is kept alive until finished.
- Line 968: The background save_to_db task started inside
respond_with_llm_message is created without holding a reference, risking GC
cancellation; modify respond_with_llm_message to capture the task (e.g., assign
asyncio.create_task(save_to_db()) to a variable or add it to a persistent
collection like self._pending_tasks or a module-level set) so the task reference
is retained until completion and optionally clean up completed tasks from that
collection; ensure you use the same pattern you applied elsewhere for
consistency.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 77149163-7f58-4ef3-9372-4af24f6a1c01
📒 Files selected for processing (3)
- README.md
- src/db_storage.py
- src/main.py
Modified update_user_image_limits() to use the INSERT ... ON CONFLICT pattern:

- Previous behavior: plain UPDATE that silently succeeded even if the user row didn't exist
- Problem: image generation limits could be lost if the user row wasn't yet created
- Solution: use SQLite INSERT ... ON CONFLICT(user_id) DO UPDATE for an atomic upsert

The updated method now:

- Creates a new user row if it doesn't exist (with image limit fields)
- Updates an existing user row's image limit fields without affecting other columns
- Maintains the same json.dumps() handling for rate limit timestamps
- Ensures data consistency with a single atomic operation

This prevents silent data loss and guarantees the user row exists with image generation limits properly persisted.
Reverted AsyncClient to blocking Client with asyncio.to_thread wrapper:

Problem: event loop mismatch error when using xai_sdk.AsyncClient
- Error: "Task ... got Future ... attached to a different loop"
- Root cause: AsyncClient was initialized at module load time, bound to the wrong event loop
- Occurs when trying to use AsyncClient from a different asyncio context

Solution: use blocking xai_sdk.Client with asyncio.to_thread
- Client is initialized once at module level (no event loop binding)
- Each call runs in the thread pool via to_thread, safe for any event loop
- asyncio.wait_for enforces a timeout on the thread operation
- Eliminates event loop context mismatch errors

This is a standard pattern for mixing blocking and async code in Python.
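The pattern this commit settles on can be sketched with a stand-in for the SDK (the `BlockingClient` class below is a stub, not the real `xai_sdk.Client`; only the module-level construction, `asyncio.to_thread`, and `asyncio.wait_for` shape follow the commit):

```python
import asyncio

IMAGE_TIMEOUT_SEC = 30


class BlockingClient:
    """Stand-in for xai_sdk.Client: a synchronous SDK with no event-loop affinity."""

    def sample(self, prompt: str) -> str:
        return "image-bytes-for:" + prompt


client = BlockingClient()  # safe to create at module import time


async def generate(prompt: str) -> str:
    # Run the blocking call in the default thread pool; wait_for enforces
    # the timeout regardless of which event loop is currently running.
    return await asyncio.wait_for(
        asyncio.to_thread(client.sample, prompt), timeout=IMAGE_TIMEOUT_SEC
    )
```

Because the client holds no loop reference, `generate()` works identically whether called from the bot's main loop or from a test's temporary loop, avoiding the "attached to a different loop" failure described above.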