📝 Walkthrough

This pull request adds an optional max tokens parameter across the AI generation system.

Changes
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✨ Finishing touches
Claude finished @elliot14A's task

Pull Request Review: Max Tokens Parameter for AI Generation

I've completed my comprehensive review of this PR, which adds max tokens parameter support across the AI generation system. Here's my detailed feedback:

Review Checklist:
Summary

This PR successfully adds max tokens parameter support across the AI generation system.

Code Quality & Best Practices ✅

Strengths:
Suggestions:
Potential Issues 🔍
Performance Considerations 📈
Security Concerns 🔒

Good practices:
Recommendations:
Test Coverage 📋

Missing Coverage:
Recommendations:

```shell
# Should add tests for:
go test ./server/infrastructure/openai/... -v
go test ./server/application/services/... -v
go test ./server/interfaces/http/routes/... -v
```

Technical Debt & Improvements 🔧
Architecture Assessment ⭐

Excellent architectural decisions:
Final Verdict

This is a solid implementation that adds valuable functionality for controlling AI response length and costs. The code quality is good, follows existing patterns, and maintains the clean architecture.

Required Before Merge:
Recommended Improvements:
Overall Rating: 8/10 - Well-implemented feature with minor issues that should be addressed before merging.
Actionable comments posted: 1
🧹 Nitpick comments (4)
server/infrastructure/openai/openai.go (2)
89-91: Consider validating the `maxTokens` value.

The code correctly checks for nil, but doesn't validate whether the value is positive. Passing zero or negative values to the OpenAI API may cause unexpected behavior or API errors.

🔎 Proposed validation

```diff
 if maxTokens != nil {
+	if *maxTokens <= 0 {
+		return nil, fmt.Errorf("maxTokens must be a positive integer")
+	}
 	req.MaxTokens = *maxTokens
 }
```
211-212: Context parameter is ignored.

The `ctx` parameter is accepted but `GenerateChatResponseFunc` internally uses `context.Background()` (line 178). This prevents cancellation and timeout propagation from the caller.

🔎 Proposed fix to propagate context

Update `GenerateChatResponseFunc` to accept and use a context parameter:

```diff
-func (c *OpenAIClient) GenerateChatResponseFunc(userMsg string, prevMsgs []*models.D_ChatMessage, maxTokens *int) (string, error) {
+func (c *OpenAIClient) GenerateChatResponseFunc(ctx context.Context, userMsg string, prevMsgs []*models.D_ChatMessage, maxTokens *int) (string, error) {
 	// ... existing code ...
-	ctx := context.Background()
 	req := openai.ChatCompletionRequest{
```

Then update the call site:

```diff
 func (c *OpenAIClient) GenerateChatResponse(ctx context.Context, userMessage string, prevMessages []*models.D_ChatMessage, maxTokens *int) (*models.D_AiChatResponse, error) {
-	resp, err := c.GenerateChatResponseFunc(userMessage, prevMessages, maxTokens)
+	resp, err := c.GenerateChatResponseFunc(ctx, userMessage, prevMessages, maxTokens)
```

server/interfaces/http/routes/api/chats/create.go (1)
25-30: Inconsistent field type and JSON tag.

The `MaxTokens` field in `chatWithAgentRequestBody` uses `int` with the `"max_tokens"` JSON tag, while all other request bodies in this PR use `*int` with the `"maxTokens"` tag. This creates API inconsistency.

Consider aligning with the pattern used elsewhere in this PR for consistency:

```diff
 type chatWithAgentRequestBody struct {
 	Messages    []models.AIChatMessage `json:"messages" validate:"required,dive"`
 	Stream      bool                   `json:"stream" validate:"omitempty" default:"true"`
 	Temperature float64                `json:"temperature" validate:"omitempty"`
-	MaxTokens   int                    `json:"max_tokens" validate:"omitempty"`
+	MaxTokens   *int                   `json:"maxTokens,omitempty" example:"1000"`
 }
```

Note: This would be a breaking change if clients already use `max_tokens`. If backward compatibility is needed, consider accepting both.

server/application/repositories/ai.go (1)
9-13: Good use of pointer type for optional parameter.

The addition of `maxTokens *int` as an optional parameter is idiomatic Go. The pointer type correctly allows `nil` to represent "use default behavior," and the consistent placement as the last parameter across all methods maintains good API ergonomics.

Consider adding documentation comments to clarify the behavior when `maxTokens` is `nil` vs. when a specific value is provided:

```go
// AiRepository defines AI operations for data analysis and SQL generation.
type AiRepository interface {
	// GenerateSql converts natural language to SQL.
	// If maxTokens is nil, the default token limit is used.
	GenerateSql(nl string, maxTokens *int) (string, error)
	// ... similar comments for other methods
}
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
- server/application/repositories/ai.go
- server/application/services/ai.go
- server/application/services/chat.go
- server/infrastructure/openai/openai.go
- server/interfaces/http/routes/api/ai/columns.go
- server/interfaces/http/routes/api/chats/create.go
- server/interfaces/http/routes/api/chats/create_chat.go
- server/interfaces/http/routes/api/nl2sql.go
- server/interfaces/http/routes/source/database/create.go
🧰 Additional context used
🧬 Code graph analysis (2)
server/application/services/chat.go (2)
server/infrastructure/postgres/gen/chats.sql.go (1)
- CreateChatParams (44-49)

web/src/lib/api-client.ts (1)
- ChatWithMessages (139-141)
server/interfaces/http/routes/api/chats/create.go (1)
server/infrastructure/postgres/gen/chats.sql.go (1)
- CreateChatParams (44-49)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: claude-review
- GitHub Check: Analyze (go)
🔇 Additional comments (18)
server/infrastructure/openai/openai.go (1)
68-117: LGTM on the generic JSON response function.

The `GenerateResponseJSON` function correctly handles the optional `maxTokens` parameter with proper nil-checking, and the implementation maintains backward compatibility when `maxTokens` is not provided.

server/interfaces/http/routes/api/chats/create_chat.go (2)
14-19: LGTM on the request body extension.

The `MaxTokens` field is correctly added as an optional pointer type with an appropriate JSON tag and swagger documentation.
82-91: LGTM on propagating maxTokens to the service.

The `body.MaxTokens` is correctly passed through to `CreateChat`. The nil-pointer handling is appropriately delegated to the service layer.

server/interfaces/http/routes/api/nl2sql.go (2)
17-24: LGTM on the NL2SQL request extension.

The `MaxTokens` field is correctly added with consistent naming and documentation.
125-133: LGTM on passing maxTokens to the AI service.

The parameter is correctly threaded through to the AI service call.
server/application/services/chat.go (3)
97-110: LGTM on the CreateChat signature update.

The `maxTokens` parameter is correctly added and handled. The empty-messages case (lines 102-110) doesn't require AI generation, so not passing `maxTokens` there is appropriate.
128-140: LGTM on the GenerateTitle call with maxTokens.

The `maxTokens` parameter is correctly passed to `GenerateTitle` for title generation, with appropriate fallback handling when AI fails.
50-57: Pre-existing issue: `prevMsgs` is always empty.

The `prevMsgs` variable is initialized as an empty `PaginationView` (line 52) and passed directly to `GenerateChatResponse` (line 54) without ever being populated. This means previous conversation history is never included in the AI context when generating responses.

This appears to be a pre-existing issue unrelated to the `maxTokens` changes introduced in this PR.

server/interfaces/http/routes/api/ai/columns.go (4)
10-18: LGTM on the genColumnsDescBody extension.

The `MaxTokens` field is correctly added with consistent naming and documentation.
43-47: LGTM on the GenerateColumnDescriptions call.

The `body.MaxTokens` is correctly passed to the service.
54-68: LGTM on the genDatasetDescBody extension.

The `MaxTokens` field is correctly added, consistent with the other request body struct.
90-101: LGTM on the GenerateDatasetDescription call.

The `body.MaxTokens` is correctly passed as the last parameter.

server/interfaces/http/routes/api/chats/create.go (1)
344-354: LGTM on the maxTokens handling for new chat creation.

The conversion from `int` to `*int` correctly handles the zero-value case by only setting `maxTokens` when `body.MaxTokens > 0`.

server/interfaces/http/routes/source/database/create.go (3)
64-86: LGTM on the request body extension.

The `MaxTokens` field is correctly added with consistent naming, type, and documentation matching the other endpoints.
215-224: LGTM on the GenerateColumnDescriptions call.

The `body.MaxTokens` is correctly passed to the service.
235-247: LGTM on the GenerateDatasetDescription call.

The `body.MaxTokens` is correctly passed as the last parameter.

server/application/services/ai.go (1)
17-27: LGTM on the AiDriver method updates.

All three methods are consistently updated to accept and pass through the `maxTokens` parameter. The implementation is straightforward.

Note: Using `context.Background()` prevents context propagation from HTTP handlers. Consider accepting context as a parameter for cancellation support in a future iteration.

server/application/repositories/ai.go (1)
15-18: Consistent design across both interfaces.

The `AiChatRepository` interface follows the same pattern as `AiRepository`, maintaining consistency throughout the file.
```diff
 type AiRepository interface {
-	GenerateSql(nl string) (string, error)
-	GenerateColumnDescriptions(ctx context.Context, rows string, summary string) (map[string]string, error)
-	GenerateDatasetDescription(ctx context.Context, datasetName string, columnNames []string, columnDescriptions map[string]string, rows string, summary string) (string, error)
+	GenerateSql(nl string, maxTokens *int) (string, error)
+	GenerateColumnDescriptions(ctx context.Context, rows string, summary string, maxTokens *int) (map[string]string, error)
+	GenerateDatasetDescription(ctx context.Context, datasetName string, columnNames []string, columnDescriptions map[string]string, rows string, summary string, maxTokens *int) (string, error)
 }

 type AiChatRepository interface {
-	GenerateChatResponse(ctx context.Context, userMessage string, prevMessage []*models.D_ChatMessage) (*models.D_AiChatResponse, error)
-	GenerateTitle(ctx context.Context, userMessage string) (*models.D_AiChatResponse, error)
+	GenerateChatResponse(ctx context.Context, userMessage string, prevMessage []*models.D_ChatMessage, maxTokens *int) (*models.D_AiChatResponse, error)
+	GenerateTitle(ctx context.Context, userMessage string, maxTokens *int) (*models.D_AiChatResponse, error)
 }
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
# Find implementations of AiRepository and AiChatRepository interface methods
# Check for GenerateSql, GenerateColumnDescriptions, GenerateDatasetDescription,
# GenerateChatResponse, and GenerateTitle implementations
echo "=== Searching for GenerateSql implementations ==="
rg "func.*GenerateSql" --type go -A 2
echo -e "\n=== Searching for GenerateColumnDescriptions implementations ==="
rg "func.*GenerateColumnDescriptions" --type go -A 2
echo -e "\n=== Searching for GenerateDatasetDescription implementations ==="
rg "func.*GenerateDatasetDescription" --type go -A 2
echo -e "\n=== Searching for GenerateChatResponse implementations ==="
rg "func.*GenerateChatResponse" --type go -A 2
echo -e "\n=== Searching for GenerateTitle implementations ==="
rg "func.*GenerateTitle" --type go -A 2
```

Repository: factly/gopie
Length of output: 3439
🏁 Script executed:
```shell
#!/bin/bash
# Search specifically for AiDriver implementations
echo "=== All GenerateChatResponse methods ==="
rg "GenerateChatResponse" --type go -B 1 -A 3
echo -e "\n=== All GenerateTitle methods ==="
rg "GenerateTitle" --type go -B 1 -A 3
```

Repository: factly/gopie
Length of output: 3669
🏁 Script executed:
```shell
cat -n server/application/services/ai.go
```

Repository: factly/gopie
Length of output: 1139
Fix critical signature mismatches in the AiDriver implementations and add the missing AiChatRepository methods.

The interface definitions were updated, but the AiDriver wrapper implementation in server/application/services/ai.go has not been fully updated:

- Signature mismatch: `GenerateColumnDescriptions` and `GenerateDatasetDescription` are missing the `ctx context.Context` parameter required by the `AiRepository` interface. These methods pass `context.Background()` when calling the underlying implementation, but their own signatures don't accept a context parameter.
- Missing implementations: the `GenerateChatResponse` and `GenerateTitle` methods from the `AiChatRepository` interface are not implemented in AiDriver at all. They are called from server/application/services/chat.go, so AiDriver must implement them.

Update AiDriver to match both interface signatures exactly and implement all required methods from both the AiRepository and AiChatRepository interfaces.
🤖 Prompt for AI Agents
In server/application/repositories/ai.go around lines 9 to 18, the AiRepository
and AiChatRepository signatures were changed but the AiDriver wrapper in
server/application/services/ai.go was not updated: add the missing ctx
context.Context parameter to AiDriver.GenerateColumnDescriptions and
AiDriver.GenerateDatasetDescription so their signatures match the interface and
forward that ctx when calling the underlying implementation (remove hardcoded
context.Background()), and implement the two missing AiChatRepository methods
GenerateChatResponse(ctx context.Context, userMessage string, prevMessage
[]*models.D_ChatMessage, maxTokens *int) and GenerateTitle(ctx context.Context,
userMessage string, maxTokens *int) in AiDriver to delegate to the underlying ai
implementation, passing all parameters and returning its results/errors.
Summary by CodeRabbit
Release Notes