Releases · brainlid/langchain
v0.4.1
What's Changed
- OpenAI responses API improvements by @arjan in #391
- Support Anthropic `disable_parallel_tool_use` tool_choice setting by @vlymar in #390
- Update gettext dependency version to 1.0 by @bijanbwb in #393
- Add DeepSeek chat model integration by @gilbertwong96 in #394
- Loosen the gettext dependency by @montebrown in #399
- add MessageDelta.merge_deltas/2 by @brainlid in #401
- formatting update by @brainlid in #402
- Added a `:cache_messages` option for ChatAnthropic, can improve cache utilization. by @montebrown in #398
- Add support for Anthropic API PDF reading to the ChatAnthropic model. by @jadengis in #403
- feat: Add support for `:file_url` to ChatAnthropic too by @jadengis in #404
- Support reasoning_content of deepseek model by @gilbertwong96 in #407
- Add req_opts to ChatAnthropic by @stevehodgkiss in #408
- Open AI Responses API: Add support for file_url with link to file by @reetou in #395
- Add strict tool use support to ChatAnthropic by @stevehodgkiss in #409
- Add strict to function of ChatModels.ChatOpenAI by @nallwhy in #301
- Allow multi-part tool responses. by @montebrown in #410
- prep for v0.4.1 release by @brainlid in #411
New Contributors
- @vlymar made their first contribution in #390
- @bijanbwb made their first contribution in #393
- @gilbertwong96 made their first contribution in #394
- @reetou made their first contribution in #395
Full Changelog: v0.4.0...v0.4.1
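Several entries above add new ChatAnthropic options. As a rough illustration only — the option names are inferred from the PR titles above (#390, #398), the model name is a placeholder, and the exact `tool_choice` map shape is an assumption — enabling them might look like:

```elixir
alias LangChain.ChatModels.ChatAnthropic

# Hedged sketch, not verified against the library docs:
#   :cache_messages      — message caching option from #398
#   :tool_choice         — carries Anthropic's disable_parallel_tool_use
#                          setting from #390 (map shape is assumed)
{:ok, model} =
  ChatAnthropic.new(%{
    model: "claude-3-5-sonnet-latest",
    cache_messages: true,
    tool_choice: %{"type" => "auto", "disable_parallel_tool_use" => true}
  })
```

See the linked PRs and CHANGELOG.md for the authoritative option names and shapes.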
v0.4.0
What's Changed since v0.3.3
- Add OpenAI and Claude thinking support - v0.4.0-rc.0 by @brainlid in #297
- vertex ai file url support by @ahsandar in #296
- Update docs for Vertex AI by @ahsandar in #304
- Fix ContentPart migration by @mathieuripert in #309
- Fix tests for content_part_for_api/2 of ChatOpenAI in v0.4.0-rc0 by @nallwhy in #300
- Fix `tool_calls` `nil` messages by @udoschneider in #314
- feat: Add structured output support to ChatMistralAI by @mathieuripert in #312
- feat: add configurable tokenizer to text splitters by @mathieuripert in #310
- simple formatting issue by @Bodhert in #307
- Update Message.new_system spec to accurately accept [ContentPart.t()]… by @rtorresware in #315
- Fix: Add token usage to ChatGoogleAI message metadata by @mathieuripert in #316
- feat: include raw API responses in LLM error objects for better debug… by @TwistingTwists in #317
- expanded docs and test coverage for prompt caching by @brainlid in #325
- Fix AWS Bedrock stream decoder ordering issue by @stevehodgkiss in #327
- significant updates for v0.4.0-rc.1 by @brainlid in #328
- filter out empty lists in message responses by @brainlid in #333
- fix: Require gettext ~> 0.26 by @mweidner037 in #332
- Add `retry: :transient` to Req for Anthropic models in stream mode by @jonator in #329
- fixed issue with poorly matching list in case by @brainlid in #334
- feat: Add organization ID as a parameter by @hjemmel in #337
- Add missing verbose_api field to ChatOllamaAI for streaming compatibility by @gur-xyz in #341
- Added usage data to the VertexAI Message response. by @raulchedrese in #335
- feat: add run mode: step by @CaiqueMitsuoka in #343
- feat: add support for multiple tools in run_until_tool_used by @fortmarek in #345
- Fix ChatOllamaAI stop sequences: change from string to array type by @gur-xyz in #342
- expanded logging for ChatAnthropic API errors by @brainlid in #349
- Prevent crash when ToolResult with string in ChatGoogleAI.for_api/1 by @nallwhy in #352
- Bedrock OpenAI-compatible API compatibility fix by @stevehodgkiss in #356
- added xAI Grok chat model support by @alexfilatov in #338
- Support thinking to ChatGoogleAI by @nallwhy in #354
- Add req_config to ChatModels.ChatGoogleAI by @nallwhy in #357
- Clean up treating MessageDelta in ChatModels.ChatGoogleAI by @nallwhy in #353
- Expose full response headers through a new on_llm_response_headers callback by @brainlid in #358
- only include "user" with OpenAI request when a value is provided by @brainlid in #364
- Handle no content parts responses in ChatGoogleAI by @nallwhy in #365
- Adds support for gpt-image-1 in LangChain.Images.OpenAIImage by @Ven109 in #360
- Prep for release v0.4.0-rc.2 by @brainlid in #366
- fix: handle missing finish_reason in streaming responses for LiteLLM compatibility by @fbettag in #367
- Add support for native tool calls to ChatVertexAI by @raulchedrese in #359
- Adds should_continue? optional function to mode step by @CaiqueMitsuoka in #361
- Add OpenAI Deep Research integration by @fbettag in #336
- Add `parallel_tool_calls` option to `ChatOpenAI` model by @martosaur in #371
- Add optional AWS session token handling in BedrockHelpers by @quangngd in #372
- fix: handle LiteLLM responses with null b64_json in OpenAIImage by @fbettag in #368
- Add Orq AI chat by @arjan in #377
- Add req_config to ChatModels.ChatOpenAI by @koszta in #376
- fix(ChatGoogleAI): Handle cumulative token usage by @mweidner037 in #373
- fix(ChatGoogleAI): Prevent error from thinking content parts by @mweidner037 in #374
- feat(ChatGoogleAI): Full thinking config by @mweidner037 in #375
- Support verbosity parameter for ChatOpenAI by @rohan-b99 in #379
- add retry_on_fallback? to chat model definition and all models by @brainlid in #350
- Prep for v0.4.0-rc.3 by @brainlid in #380
- Use moduledoc instead of doc for LLMChain documentation by @xxdavid in #384
- Support OTP 28 in CI by @kianmeng in #382
- OpenAI responses by @vasspilka in #381
- Add AGENTS.md and CLAUDE.md file support by @brainlid in #385
- Suppress the compiler warning messages for ChatBumblebee by @brainlid in #386
- fix: Support for json-schema in OpenAI responses API by @vasspilka in #387
- Prepare for v0.4.0 release by @brainlid in #388
New Contributors
- @ahsandar made their first contribution in #296
- @mathieuripert made their first contribution in #309
- @udoschneider made their first contribution in #314
- @Bodhert made their first contribution in #307
- @rtorresware made their first contribution in #315
- @TwistingTwists made their first contribution in #317
- @mweidner037 made their first contribution in #332
- @jonator made their first contribution in #329
- @hjemmel made their first contribution in #337
- @gur-xyz made their first contribution in #341
- @CaiqueMitsuoka made their first contribution in #343
- @fortmarek made their first contribution in #345
- @alexfilatov made their first contribution in #338
- @Ven109 made their first contribution in #360
- @martosaur made their first contribution in #371
- @quangngd made their first contribution in #372
- @arjan made their first contribution in #377
- @koszta made their first contribution in #376
- @rohan-b99 made their first contribution in #379
- @xxdavid made their first contribution in #384
Full Changelog: v0.3.3...v0.4.0
v0.4.0-rc.3
What's Changed
- fix: handle missing finish_reason in streaming responses for LiteLLM compatibility by @fbettag in #367
- Add support for native tool calls to ChatVertexAI by @raulchedrese in #359
- Adds should_continue? optional function to mode step by @CaiqueMitsuoka in #361
- Add OpenAI Deep Research integration by @fbettag in #336
- Add `parallel_tool_calls` option to `ChatOpenAI` model by @martosaur in #371
- Add optional AWS session token handling in BedrockHelpers by @quangngd in #372
- fix: handle LiteLLM responses with null b64_json in OpenAIImage by @fbettag in #368
- Add Orq AI chat by @arjan in #377
- Add req_config to ChatModels.ChatOpenAI by @koszta in #376
- fix(ChatGoogleAI): Handle cumulative token usage by @mweidner037 in #373
- fix(ChatGoogleAI): Prevent error from thinking content parts by @mweidner037 in #374
- feat(ChatGoogleAI): Full thinking config by @mweidner037 in #375
- Support verbosity parameter for ChatOpenAI by @rohan-b99 in #379
- add retry_on_fallback? to chat model definition and all models by @brainlid in #350
- Prep for v0.4.0-rc.3 by @brainlid in #380
New Contributors
- @martosaur made their first contribution in #371
- @quangngd made their first contribution in #372
- @arjan made their first contribution in #377
- @koszta made their first contribution in #376
- @rohan-b99 made their first contribution in #379
Full Changelog: v0.4.0-rc.2...v0.4.0-rc.3
v0.4.0-rc.2
What's Changed
- filter out empty lists in message responses by @brainlid in #333
- fix: Require gettext ~> 0.26 by @mweidner037 in #332
- Add `retry: :transient` to Req for Anthropic models in stream mode by @jonator in #329
- fixed issue with poorly matching list in case by @brainlid in #334
- feat: Add organization ID as a parameter by @hjemmel in #337
- Add missing verbose_api field to ChatOllamaAI for streaming compatibility by @gur-xyz in #341
- Added usage data to the VertexAI Message response. by @raulchedrese in #335
- feat: add run mode: step by @CaiqueMitsuoka in #343
- feat: add support for multiple tools in run_until_tool_used by @fortmarek in #345
- Fix ChatOllamaAI stop sequences: change from string to array type by @gur-xyz in #342
- expanded logging for ChatAnthropic API errors by @brainlid in #349
- Prevent crash when ToolResult with string in ChatGoogleAI.for_api/1 by @nallwhy in #352
- Bedrock OpenAI-compatible API compatibility fix by @stevehodgkiss in #356
- added xAI Grok chat model support by @alexfilatov in #338
- Support thinking to ChatGoogleAI by @nallwhy in #354
- Add req_config to ChatModels.ChatGoogleAI by @nallwhy in #357
- Clean up treating MessageDelta in ChatModels.ChatGoogleAI by @nallwhy in #353
- Expose full response headers through a new on_llm_response_headers callback by @brainlid in #358
- only include "user" with OpenAI request when a value is provided by @brainlid in #364
- Handle no content parts responses in ChatGoogleAI by @nallwhy in #365
- Adds support for gpt-image-1 in LangChain.Images.OpenAIImage by @Ven109 in #360
- Prep for release v0.4.0-rc.2 by @brainlid in #366
New Contributors
- @mweidner037 made their first contribution in #332
- @jonator made their first contribution in #329
- @hjemmel made their first contribution in #337
- @gur-xyz made their first contribution in #341
- @CaiqueMitsuoka made their first contribution in #343
- @fortmarek made their first contribution in #345
- @alexfilatov made their first contribution in #338
- @Ven109 made their first contribution in #360
Full Changelog: v0.4.0-rc.1...v0.4.0-rc.2
v0.4.0-rc.1
Refer to the CHANGELOG.md for notes on breaking changes and migrating.
What's Changed
- vertex ai file url support by @ahsandar in #296
- Update docs for Vertex AI by @ahsandar in #304
- Fix ContentPart migration by @mathieuripert in #309
- Fix tests for content_part_for_api/2 of ChatOpenAI in v0.4.0-rc0 by @nallwhy in #300
- Fix `tool_calls` `nil` messages by @udoschneider in #314
- feat: Add structured output support to ChatMistralAI by @mathieuripert in #312
- feat: add configurable tokenizer to text splitters by @mathieuripert in #310
- simple formatting issue by @Bodhert in #307
- Update Message.new_system spec to accurately accept [ContentPart.t()]… by @rtorresware in #315
- Fix: Add token usage to ChatGoogleAI message metadata by @mathieuripert in #316
- feat: include raw API responses in LLM error objects for better debug… by @TwistingTwists in #317
- expanded docs and test coverage for prompt caching by @brainlid in #325
- Fix AWS Bedrock stream decoder ordering issue by @stevehodgkiss in #327
- significant updates for v0.4.0-rc.1 by @brainlid in #328
New Contributors
- @ahsandar made their first contribution in #296
- @mathieuripert made their first contribution in #309
- @udoschneider made their first contribution in #314
- @Bodhert made their first contribution in #307
- @rtorresware made their first contribution in #315
- @TwistingTwists made their first contribution in #317
Full Changelog: v0.4.0-rc.0...v0.4.0-rc.1
v0.4.0-rc.0
What's Changed
Introduces breaking changes while adding expanded support for thinking models.
NOTE: See the CHANGELOG.md for more details
IMPORTANT: Not all models are supported with this RC.
Full Changelog: v0.3.3...v0.4.0-rc.0
v0.3.3
What's Changed
- upgrade gettext and migrate by @brainlid in #271
- Support caching tool results for Anthropic calls by @ci in #269
- Fix OpenAI verbose_api by @aaparmeggiani in #274
- Support choice of Anthropic beta headers by @ci in #273
- Fix specifying media uris for google vertex by @mattmatters in #242
- feat: add support for pdf content with OpenAI model by @bwan-nan in #275
- feat: File urls for Google by @vasspilka in #286
- support streaming responses from mistral by @manukall in #287
- Support for json_response in ChatModels.ChatGoogleAI by @nallwhy in #277
- Fix options being passed to the ollama chat api by @alappe in #179
- Support for file with file_id in ChatOpenAI by @nallwhy in #283
- added LLMChain.run_until_tool_used/3 by @brainlid in #292
- adds telemetry by @epinault in #284
New Contributors
- @ci made their first contribution in #269
- @aaparmeggiani made their first contribution in #274
- @mattmatters made their first contribution in #242
- @vasspilka made their first contribution in #286
- @manukall made their first contribution in #287
- @epinault made their first contribution in #284
Full Changelog: v0.3.2...v0.3.3
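The `LLMChain.run_until_tool_used/3` addition (#292) is one of the more user-visible changes here. A minimal sketch, assuming a chain with a tool named `"search"` already attached — the tool name and model are placeholders, and the return shape is inferred from the function's arity, not verified:

```elixir
alias LangChain.Chains.LLMChain
alias LangChain.ChatModels.ChatOpenAI
alias LangChain.Message

# Runs the chain until the named tool is called, then stops.
# Return shape below is an assumption; check the PR for specifics.
{:ok, updated_chain, tool_result} =
  %{llm: ChatOpenAI.new!(%{model: "gpt-4o"})}
  |> LLMChain.new!()
  |> LLMChain.add_message(Message.new_user!("Find the latest release notes"))
  |> LLMChain.run_until_tool_used("search")
```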
v0.3.2
What's Changed
- add on_message_processed callback when tool response is created by @brainlid in #248
- typos: Update Example for Syntax Issues by @bradschwartz in #249
- ensure consistent capitalization by @JoaquinIglesiasTurina in #257
- adds tool calls and usage for mistral ai. by @fbettag in #253
- Feature/support sys instruction vertexai by @vseng in #260
- Enable tool support for ollama by @alappe in #164
- Adds Perplexity AI by @fbettag in #261
- Fix typos by @kianmeng in #264
- Feat/add text splitter by @JoaquinIglesiasTurina in #256
- CI housekeeping by @kianmeng in #265
- Redact api-key from models by @raulpe7eira in #266
- add native tool functionality (e.g. `google_search` for Gemini) by @avergin in #250
- prep for v0.3.2 release by @brainlid in #270
New Contributors
- @bradschwartz made their first contribution in #249
- @JoaquinIglesiasTurina made their first contribution in #257
- @vseng made their first contribution in #260
- @kianmeng made their first contribution in #264
- @raulpe7eira made their first contribution in #266
Full Changelog: v0.3.1...v0.3.2
v0.3.1
What's Changed
- support LMStudio when using ChatOpenAI by @brainlid in #243
- Include stacktrace context in messages for caught exceptions from LLM functions & function callbacks. by @montebrown in #241
- fix issue with OpenAI converting a message to JSON by @brainlid in #245
- prep for v0.3.1 release by @brainlid in #246
Full Changelog: v0.3.0...v0.3.1
v0.3.0
Lots of changes that includes the RC releases as well.
What's Changed
- fix openai content part media by @brainlid in #112
- ContentPart image media option updates by @brainlid in #113
- Updates for ContentPart images with messages to support ChatGPT's "detail" level option by @brainlid in #114
- add openai image endpoint support (aka DALL-E-2 & DALL-E-3) by @brainlid in #116
- allow PromptTemplates to convert to ContentParts by @brainlid in #117
- Fix elixir 1.17 warnings by @MrYawe in #123
- updates to README by @petrus-jvrensburg in #125
- Add ChatVertexAI by @raulchedrese in #124
- Major update. Preparing for v0.3.0-rc.0 - breaking changes by @brainlid in #131
- update calculator tool by @brainlid in #132
- support receiving rate limit info by @brainlid in #133
- upgrade abacus dep by @brainlid in #134
- add support for TokenUsage through callbacks by @brainlid in #137
- Big update - RC ready by @brainlid in #138
- Improvements to docs by @brainlid in #145
- ChatGoogleAI fixes and updates by @brainlid in #152
- fix: typespec error on Message.new_user/1 by @bwan-nan in #151
- Convert to use mimic for mocking calls by @brainlid in #155
- Remove ApiOverride reference in mix.exs project.docs by @stevehodgkiss in #157
- Fix OpenAI chat stream hanging by @stevehodgkiss in #156
- Fix streaming error when using Azure OpenAI Service by @stevehodgkiss in #158
- Update Azure OpenAI Service streaming fix by @stevehodgkiss in #161
- Fix ChatOllamaAI streaming response by @alappe in #162
- Fix PromptTemplate example by @joelpaulkoch in #167
- adds OpenAI project authentication. by @fbettag in #166
- Anthropic support for streamed tool calls with parameters by @brainlid in #169
- change return of LLMChain.run/2 - breaking change by @brainlid in #170
- 🐛 cast tool_calls arguments correctly inside message_deltas by @rparcus in #175
- Do not duplicate tool call parameters if they are identical by @michalwarda in #174
- Structured Outputs by supplying `strict: true` in #173
- feat: add OpenAI's new structured output API by @monotykamary in #180
- Support system instructions for Google AI by @elliotb in #182
- Handle empty text parts from GoogleAI responses by @elliotb in #181
- Handle missing token usage fields for Google AI by @elliotb in #184
- Handle functions with no parameters for Google AI by @elliotb in #183
- Add AWS Bedrock support to ChatAnthropic by @stevehodgkiss in #154
- Handle all possible finishReasons for ChatGoogleAI by @elliotb in #188
- Remove unused assignment from ChatGoogleAI by @elliotb in #187
- Add support for passing safety settings to Google AI by @elliotb in #186
- Add tool_choice for OpenAI and Anthropic by @avergin in #142
- add support for examples to title chain by @brainlid in #191
- add "processed_content" to ToolResult struct and support storing Elixir data from function results by @brainlid in #192
- Revamped error handling and handles Anthropic's "overload_error" by @brainlid in #194
- Documenting AWS Bedrock support with Anthropic Claude by @brainlid in #195
- Cancel a message delta when we receive "overloaded" error by @brainlid in #196
- implement initial support for fallbacks by @brainlid in #207
- Fix content-part encoding and decoding for Google API. by @vkryukov in #212
- Fix specs and examples by @vkryukov in #211
- Ability to Summarize an LLM Conversation by @brainlid in #216
- Prepare for v0.3.0-rc.1 by @brainlid in #217
- add explicit message support in summarizer by @brainlid in #220
- Change abacus to optional dep by @nallwhy in #223
- Remove constraint of alternating user, assistant by @GenericJam in #222
- Breaking change: consolidate LLM callback functions by @brainlid in #228
- feat: Enable :inet6 for Req.new for Ollama by @mpope9 in #227
- fix: enable verbose_deltas in #197
- Prep for v0.3.0-rc.2 - update version and docs outline by @brainlid in #229
- Add Bumblebee Phi-4 by @marcnnn in #233
- feat: apply chat template from callback by @joelpaulkoch in #231
- support for o1 OpenAI model by @brainlid in #234
- feat: Support for Ollama keep_alive API parameter by @mpope9 in #237
- Add prompt caching support for Claude. by @montebrown in #226
- Add raw field to TokenUsage by @nallwhy in #236
- Add LLAMA 3.1 Json tool call with Bumblebee by @marcnnn in #198
- prep for v0.3.0 release by @brainlid in #238
New Contributors
- @MrYawe made their first contribution in #123
- @petrus-jvrensburg made their first contribution in #125
- @raulchedrese made their first contribution in #124
- @bwan-nan made their first contribution in #151
- @stevehodgkiss made their first contribution in #157
- @alappe made their first contribution in #162
- @joelpaulkoch made their first contribution in #167
- @fbettag made their first contribution in #166
- @rparcus made their first contribution in #175
- @monotykamary made their first contribution in #180
- @elliotb made their first contribution in #182
- @avergin made their first contribution in #142
- @vkryukov made their first contribution in #212
- @nallwhy made their first contribution in #223
- @GenericJam made their first contribution in #222
- @mpope9 made their first contribution in #227
- @marcnnn made their first contribution in #233
- @montebrown made their first contribution in #226
Full Changelog: v0.2.0...v0.3.0
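The structured-output entries above (#173, #180) center on passing `strict: true` when defining a function so OpenAI validates tool arguments against the JSON schema. A hedged sketch — the field layout is an assumption based on the PR titles, and the function name and schema are placeholders:

```elixir
alias LangChain.Function

# Sketch of a strict, schema-validated tool (see #173 and #180 above).
{:ok, fun} =
  Function.new(%{
    name: "get_weather",
    description: "Get the weather for a city",
    strict: true,
    parameters_schema: %{
      "type" => "object",
      "properties" => %{"city" => %{"type" => "string"}},
      "required" => ["city"],
      "additionalProperties" => false
    },
    function: fn %{"city" => city}, _context -> {:ok, "Sunny in #{city}"} end
  })
```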
Thanks to all the contributors!