Add model parameter to InvokeLLM for per-call model selection#137
Conversation
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Claude finished @faygade's task — View job

Pull Request Review

Review Tasks

Summary
This PR adds an optional `model` parameter to `InvokeLLM` for per-call model selection.

Code Quality and Best Practices ✅
Potential Bugs or Issues
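As a hedged sketch of what the per-call override might look like: the exact SDK call shape is not shown in this thread, so the function and option names below are illustrative assumptions, not the confirmed `@base44/sdk` API. The sketch only demonstrates the merge behavior the PR describes, where a per-call `model` takes precedence over a default.

```javascript
// Illustrative sketch only: names are assumptions, not the real SDK surface.
// 'gpt_5' is the valid model value mentioned in the commit message.
const DEFAULT_MODEL = 'gpt_5';

function buildInvokePayload({ prompt, model, ...rest }) {
  return {
    prompt,
    // Per-call model wins; otherwise fall back to the default.
    model: model ?? DEFAULT_MODEL,
    ...rest,
  };
}

console.log(buildInvokePayload({ prompt: 'Hi' }).model);                     // 'gpt_5'
console.log(buildInvokePayload({ prompt: 'Hi', model: 'other_model' }).model); // 'other_model'
```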
🚀 Package Preview Available!

Install this PR's preview build with npm:

    npm i @base44-preview/sdk@0.8.20-pr.137.2242f71

Prefer not to change any import paths? Install using an npm alias so your code still imports `@base44/sdk`:

    npm i "@base44/sdk@npm:@base44-preview/sdk@0.8.20-pr.137.2242f71"

Or add it to your `package.json`:

    {
      "dependencies": {
        "@base44/sdk": "npm:@base44-preview/sdk@0.8.20-pr.137.2242f71"
      }
    }

Preview published to npm registry — try new features instantly!
- Update JSDoc examples to use valid model value ('gpt_5') instead of 'gpt-4'
- Add unit tests for InvokeLLM model parameter pass-through
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Claude finished @faygade's task — View job

Pull Request Review

Review Tasks

Summary
This PR successfully adds an optional `model` parameter to `InvokeLLM`.

Previous Issues Resolved ✅
Code Quality and Best Practices ✅
Potential Bugs or Issues ✅ No issues found.
Performance Considerations ✅
Security Concerns ✅
Test Coverage ✅ Excellent test coverage added. The tests properly mock the API calls and verify the expected behavior.

Implementation Note
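A pass-through test along the lines the review describes might look like the following. The factory shape, endpoint path, and function names are assumptions for illustration; the point is only the mocking pattern, where a fake transport records what `InvokeLLM` forwards so the test can assert the `model` option arrives unchanged.

```javascript
// Illustrative sketch: verify that a per-call `model` option is forwarded
// unchanged to the underlying request. All names here are hypothetical.
function makeInvokeLLM(request) {
  return async function InvokeLLM(params) {
    // Forward params, including the optional `model`, to the transport layer.
    return request('/integrations/invoke-llm', params);
  };
}

async function main() {
  const calls = [];
  // Mock transport that records each call instead of hitting the network.
  const mockRequest = async (path, body) => {
    calls.push({ path, body });
    return { ok: true };
  };

  const InvokeLLM = makeInvokeLLM(mockRequest);
  await InvokeLLM({ prompt: 'Hello', model: 'gpt_5' });

  // The mocked transport should have received the model unchanged.
  console.log(calls[0].body.model); // 'gpt_5'
}

main();
```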
No description provided.