diff --git a/src/oss/javascript/integrations/chat/google_generative_ai.mdx b/src/oss/javascript/integrations/chat/google_generative_ai.mdx
index 74b992e3f2..9527893b70 100644
--- a/src/oss/javascript/integrations/chat/google_generative_ai.mdx
+++ b/src/oss/javascript/integrations/chat/google_generative_ai.mdx
@@ -159,7 +159,7 @@ console.log(aiMsg.content)
 J'adore programmer.
 ```
 
-## Safety Settings
+## Safety settings
 
 Gemini models have default safety settings that can be overridden. If you are receiving lots of "Safety Warnings" from your models, you can try tweaking the safety_settings attribute of the model. For example, to turn off safety blocking for dangerous content, you can import enums from the `@google/generative-ai` package, then construct your LLM as follows:
@@ -441,7 +441,7 @@ console.dir(searchRetrievalResult.response_metadata?.groundingMetadata, { depth:
 }
 ```
 
-### Code Execution
+### Code execution
 
 Google Generative AI also supports code execution. Using the built in `CodeExecutionTool`, you can make the model generate code, execute it, and use the results in a final completion:
@@ -540,7 +540,7 @@ The output of the code was:
 Therefore, the answer to your question is 21.
 ```
 
-## Context Caching
+## Context caching
 
 Context caching allows you to pass some content to the model once, cache the input tokens, and then refer to the cached tokens for subsequent requests to reduce cost. You can create a `CachedContent` object using `GoogleAICacheManager` class and then pass the `CachedContent` object to your `ChatGoogleGenerativeAIModel` with `enableCachedContent()` method.
@@ -596,7 +596,7 @@ await model.invoke("Summarize the video");
 
 - The minimum input token count for context caching is 32,768, and the maximum is the same as the maximum for the given model.
 
-## Gemini Prompting FAQs
+## Gemini prompting FAQs
 
 As of the time this doc was written (2023/12/12), Gemini has some restrictions on the types and structure of prompts it accepts. Specifically:
diff --git a/src/oss/python/integrations/chat/google_generative_ai.mdx b/src/oss/python/integrations/chat/google_generative_ai.mdx
index 7bd797c9e7..fd2051b0d2 100644
--- a/src/oss/python/integrations/chat/google_generative_ai.mdx
+++ b/src/oss/python/integrations/chat/google_generative_ai.mdx
@@ -435,7 +435,7 @@ Code execution result: 4
 2*2 is 4.
 ```
 
-## Thinking Support
+## Thinking support
 
 See the [Gemini API docs](https://ai.google.dev/gemini-api/docs/thinking) for more info.
diff --git a/src/oss/python/integrations/llms/google_ai.mdx b/src/oss/python/integrations/llms/google_ai.mdx
index 3ea92c5391..7d8f2e9346 100644
--- a/src/oss/python/integrations/llms/google_ai.mdx
+++ b/src/oss/python/integrations/llms/google_ai.mdx
@@ -151,7 +151,7 @@ For in their embrace, we find a peace profound,
 A frozen world, with magic all around.
 ```
 
-### Safety Settings
+### Safety settings
 
 Gemini models have default safety settings that can be overridden. If you are receiving lots of "Safety Warnings" from your models, you can try tweaking the `safety_settings` attribute of the model. For example, to turn off safety blocking for dangerous content, you can construct your LLM as follows: