diff --git a/.archive/CodeGemma/[CodeGemma_1]Common_use_cases.ipynb b/.archive/CodeGemma/[CodeGemma_1]Common_use_cases.ipynb
index 1fd69701..b0a8a30f 100644
--- a/.archive/CodeGemma/[CodeGemma_1]Common_use_cases.ipynb
+++ b/.archive/CodeGemma/[CodeGemma_1]Common_use_cases.ipynb
@@ -41,7 +41,7 @@
    "This notebook demonstrates the basic tasks that Gemma can solve with the right prompting.\n",
    "\n",
    "<table align=\"left\">\n",
    "  <td>
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ]
diff --git a/.archive/CodeGemma/[CodeGemma_1]Finetune_with_SQL.ipynb b/.archive/CodeGemma/[CodeGemma_1]Finetune_with_SQL.ipynb
index 2bf7832e..82e7136e 100644
--- a/.archive/CodeGemma/[CodeGemma_1]Finetune_with_SQL.ipynb
+++ b/.archive/CodeGemma/[CodeGemma_1]Finetune_with_SQL.ipynb
@@ -53,7 +53,7 @@
    "This notebook demonstrates how to load, fine-tune, and deploy the CodeGemma model on SQL by utilising Hugging Face.\n",
    "\n",
    "<table align=\"left\">\n",
    "  <td>
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Demos/Emoji-Gemma-on-Web/resources/Convert_Gemma_3_270M_to_LiteRT_for_MediaPipe_LLM_Inference_API.ipynb b/.archive/Demos/Emoji-Gemma-on-Web/resources/Convert_Gemma_3_270M_to_LiteRT_for_MediaPipe_LLM_Inference_API.ipynb index bcac87ef..1d74684b 100644 --- a/.archive/Demos/Emoji-Gemma-on-Web/resources/Convert_Gemma_3_270M_to_LiteRT_for_MediaPipe_LLM_Inference_API.ipynb +++ b/.archive/Demos/Emoji-Gemma-on-Web/resources/Convert_Gemma_3_270M_to_LiteRT_for_MediaPipe_LLM_Inference_API.ipynb @@ -39,7 +39,7 @@ "source": [ "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] @@ -60,7 +60,7 @@ "4. Package the model with the MediaPipe Task bundler\n", "5. Download the model\n", "\n", - "Gemma 3 270M is designed for task-specific fine-tuning and engineered for efficient performance on mobile, web, and edge devices. You can fine-tune your own model using this [notebook](https://colab.research.google.com/github/google-gemini/gemma-cookbook/blob/main/Demos/Emoji-Gemma-on-Web/resources/Fine_tune_Gemma_3_270M_for_emoji_generation.ipynb) and run it in a demo [web app](https://github.com/google-gemini/gemma-cookbook/tree/main/Demos/Emoji-Gemma-on-Web/app-mediapipe) once converted.\n", + "Gemma 3 270M is designed for task-specific fine-tuning and engineered for efficient performance on mobile, web, and edge devices. You can fine-tune your own model using this [notebook](https://colab.research.google.com/github/google-gemma/cookbook/blob/main/.archive/Demos/Emoji-Gemma-on-Web/resources/Fine_tune_Gemma_3_270M_for_emoji_generation.ipynb) and run it in a demo [web app](https://github.com/google-gemini/gemma-cookbook/tree/main/Demos/Emoji-Gemma-on-Web/app-mediapipe) once converted.\n", "\n", "## Set up development environment\n", "\n", diff --git a/.archive/Demos/Emoji-Gemma-on-Web/resources/Convert_Gemma_3_270M_to_ONNX.ipynb b/.archive/Demos/Emoji-Gemma-on-Web/resources/Convert_Gemma_3_270M_to_ONNX.ipynb index d55acf8e..3487c940 100644 --- a/.archive/Demos/Emoji-Gemma-on-Web/resources/Convert_Gemma_3_270M_to_ONNX.ipynb +++ b/.archive/Demos/Emoji-Gemma-on-Web/resources/Convert_Gemma_3_270M_to_ONNX.ipynb @@ -39,7 +39,7 @@ "source": [ "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] @@ -59,7 +59,7 @@ "3. Convert the model with Optimum conversion script\n", "4. Test, evaluate, and save the model for further use\n", "\n", - "Gemma 3 270M is designed for task-specific fine-tuning and engineered for efficient performance on mobile, web, and edge devices. You can fine-tune your own model in this [notebook](https://colab.research.google.com/github/google-gemini/gemma-cookbook/blob/main/Demos/Emoji-Gemma-on-Web/resources/Fine_tune_Gemma_3_270M_for_emoji_generation.ipynb) and run it in a demo [web app](https://github.com/google-gemini/gemma-cookbook/tree/main/Demos/Emoji-Gemma-on-Web/app-transformersjs) once converted.\n", + "Gemma 3 270M is designed for task-specific fine-tuning and engineered for efficient performance on mobile, web, and edge devices. You can fine-tune your own model in this [notebook](https://colab.research.google.com/github/google-gemma/cookbook/blob/main/.archive/Demos/Emoji-Gemma-on-Web/resources/Fine_tune_Gemma_3_270M_for_emoji_generation.ipynb) and run it in a demo [web app](https://github.com/google-gemini/gemma-cookbook/tree/main/Demos/Emoji-Gemma-on-Web/app-transformersjs) once converted.\n", "\n", "## Set up development environment\n", "\n", diff --git a/.archive/Demos/Emoji-Gemma-on-Web/resources/Fine_tune_Gemma_3_270M_for_emoji_generation.ipynb b/.archive/Demos/Emoji-Gemma-on-Web/resources/Fine_tune_Gemma_3_270M_for_emoji_generation.ipynb index 7e453124..d058cad6 100644 --- a/.archive/Demos/Emoji-Gemma-on-Web/resources/Fine_tune_Gemma_3_270M_for_emoji_generation.ipynb +++ b/.archive/Demos/Emoji-Gemma-on-Web/resources/Fine_tune_Gemma_3_270M_for_emoji_generation.ipynb @@ -39,7 +39,7 @@ "source": [ "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] @@ -681,8 +681,8 @@ "\n", "This notebook covered how to efficiently fine-tune Gemma 3 270M for emoji generation. Continue on to the conversion and quantization steps to get it ready for on-device deployment. You can follow the steps to either:\n", "\n", - "1. [Convert for use with MediaPipe LLM Inference API](https://colab.research.google.com/github/google-gemini/gemma-cookbook/blob/main/Demos/Emoji-Gemma-on-Web/resources/Convert_Gemma_3_270M_to_LiteRT_for_MediaPipe_LLM_Inference_API.ipynb)\n", - "2. [Convert for use with Transformers.js via ONNX Runtime](https://colab.research.google.com/github/google-gemini/gemma-cookbook/blob/main/Demos/Emoji-Gemma-on-Web/resources/Convert_Gemma_3_270M_to_ONNX.ipynb)" + "1. [Convert for use with MediaPipe LLM Inference API](https://colab.research.google.com/github/google-gemma/cookbook/blob/main/.archive/Demos/Emoji-Gemma-on-Web/resources/Convert_Gemma_3_270M_to_LiteRT_for_MediaPipe_LLM_Inference_API.ipynb)\n", + "2. [Convert for use with Transformers.js via ONNX Runtime](https://colab.research.google.com/github/google-gemma/cookbook/blob/main/.archive/Demos/Emoji-Gemma-on-Web/resources/Convert_Gemma_3_270M_to_ONNX.ipynb)" ] } ], diff --git a/.archive/Demos/business-email-assistant/model-tuning/notebook/bakery_inquiry_model_tuned_with_gemma.ipynb b/.archive/Demos/business-email-assistant/model-tuning/notebook/bakery_inquiry_model_tuned_with_gemma.ipynb index 32dc6f73..30b1f73c 100644 --- a/.archive/Demos/business-email-assistant/model-tuning/notebook/bakery_inquiry_model_tuned_with_gemma.ipynb +++ b/.archive/Demos/business-email-assistant/model-tuning/notebook/bakery_inquiry_model_tuned_with_gemma.ipynb @@ -43,7 +43,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
\n" ] diff --git a/.archive/Demos/spoken-language-tasks/k-gemma-it/spoken_language_tasks_with_gemma.ipynb b/.archive/Demos/spoken-language-tasks/k-gemma-it/spoken_language_tasks_with_gemma.ipynb index 3d7b3461..a8f4bd81 100644 --- a/.archive/Demos/spoken-language-tasks/k-gemma-it/spoken_language_tasks_with_gemma.ipynb +++ b/.archive/Demos/spoken-language-tasks/k-gemma-it/spoken_language_tasks_with_gemma.ipynb @@ -43,7 +43,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/FunctionGemma/[FunctionGemma]Finetune_FunctionGemma_270M_for_Mobile_Actions_with_Hugging_Face.ipynb b/.archive/FunctionGemma/[FunctionGemma]Finetune_FunctionGemma_270M_for_Mobile_Actions_with_Hugging_Face.ipynb index 1a3c7ea2..b4479554 100644 --- a/.archive/FunctionGemma/[FunctionGemma]Finetune_FunctionGemma_270M_for_Mobile_Actions_with_Hugging_Face.ipynb +++ b/.archive/FunctionGemma/[FunctionGemma]Finetune_FunctionGemma_270M_for_Mobile_Actions_with_Hugging_Face.ipynb @@ -39,7 +39,7 @@ "source": [ "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/FunctionGemma/[FunctionGemma]Finetune_FunctionGemma_270M_for_Mobile_Actions_with_Tunix.ipynb b/.archive/FunctionGemma/[FunctionGemma]Finetune_FunctionGemma_270M_for_Mobile_Actions_with_Tunix.ipynb index 4fab0ef1..215682e1 100644 --- a/.archive/FunctionGemma/[FunctionGemma]Finetune_FunctionGemma_270M_for_Mobile_Actions_with_Tunix.ipynb +++ b/.archive/FunctionGemma/[FunctionGemma]Finetune_FunctionGemma_270M_for_Mobile_Actions_with_Tunix.ipynb @@ -39,7 +39,7 @@ "source": [ "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_1]Advanced_Prompting_Techniques.ipynb b/.archive/Gemma/[Gemma_1]Advanced_Prompting_Techniques.ipynb index 1e873068..719823d3 100644 --- a/.archive/Gemma/[Gemma_1]Advanced_Prompting_Techniques.ipynb +++ b/.archive/Gemma/[Gemma_1]Advanced_Prompting_Techniques.ipynb @@ -42,7 +42,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ]
diff --git a/.archive/Gemma/[Gemma_1]Basics_with_HF.ipynb b/.archive/Gemma/[Gemma_1]Basics_with_HF.ipynb
index 6e3352c9..ad9c8c10 100644
--- a/.archive/Gemma/[Gemma_1]Basics_with_HF.ipynb
+++ b/.archive/Gemma/[Gemma_1]Basics_with_HF.ipynb
@@ -41,7 +41,7 @@
    "This notebook demonstrates how to load, fine-tune, and deploy the Gemma model by utilising Hugging Face.\n",
    "<table align=\"left\">\n",
    "  <td>
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ]
diff --git a/.archive/Gemma/[Gemma_1]Common_use_cases.ipynb b/.archive/Gemma/[Gemma_1]Common_use_cases.ipynb
index ed2537a1..07ed7ba4 100644
--- a/.archive/Gemma/[Gemma_1]Common_use_cases.ipynb
+++ b/.archive/Gemma/[Gemma_1]Common_use_cases.ipynb
@@ -41,7 +41,7 @@
    "This notebook demonstrates the basic tasks that Gemma can solve with the right prompting.\n",
    "\n",
    "<table align=\"left\">\n",
    "  <td>
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_1]Finetune_distributed.ipynb b/.archive/Gemma/[Gemma_1]Finetune_distributed.ipynb index 8ebda9b2..fdd1c190 100644 --- a/.archive/Gemma/[Gemma_1]Finetune_distributed.ipynb +++ b/.archive/Gemma/[Gemma_1]Finetune_distributed.ipynb @@ -43,7 +43,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_1]Finetune_with_LLaMA_Factory.ipynb b/.archive/Gemma/[Gemma_1]Finetune_with_LLaMA_Factory.ipynb index fc341eec..655390e3 100644 --- a/.archive/Gemma/[Gemma_1]Finetune_with_LLaMA_Factory.ipynb +++ b/.archive/Gemma/[Gemma_1]Finetune_with_LLaMA_Factory.ipynb @@ -43,7 +43,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_1]Finetune_with_XTuner.ipynb b/.archive/Gemma/[Gemma_1]Finetune_with_XTuner.ipynb index a7196958..15bf16ee 100644 --- a/.archive/Gemma/[Gemma_1]Finetune_with_XTuner.ipynb +++ b/.archive/Gemma/[Gemma_1]Finetune_with_XTuner.ipynb @@ -43,7 +43,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_1]Inference_on_TPU.ipynb b/.archive/Gemma/[Gemma_1]Inference_on_TPU.ipynb index 774594b3..3ccf80fa 100644 --- a/.archive/Gemma/[Gemma_1]Inference_on_TPU.ipynb +++ b/.archive/Gemma/[Gemma_1]Inference_on_TPU.ipynb @@ -42,7 +42,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_1]Minimal_RAG.ipynb b/.archive/Gemma/[Gemma_1]Minimal_RAG.ipynb index f828de78..ffef0fad 100644 --- a/.archive/Gemma/[Gemma_1]Minimal_RAG.ipynb +++ b/.archive/Gemma/[Gemma_1]Minimal_RAG.ipynb @@ -43,7 +43,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_1]RAG_with_ChromaDB.ipynb b/.archive/Gemma/[Gemma_1]RAG_with_ChromaDB.ipynb index 82eea9ed..47848f90 100644 --- a/.archive/Gemma/[Gemma_1]RAG_with_ChromaDB.ipynb +++ b/.archive/Gemma/[Gemma_1]RAG_with_ChromaDB.ipynb @@ -43,7 +43,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_1]Using_with_Ollama.ipynb b/.archive/Gemma/[Gemma_1]Using_with_Ollama.ipynb index a684bf7b..300711d1 100644 --- a/.archive/Gemma/[Gemma_1]Using_with_Ollama.ipynb +++ b/.archive/Gemma/[Gemma_1]Using_with_Ollama.ipynb @@ -43,7 +43,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_1]Using_with_OneTwo.ipynb b/.archive/Gemma/[Gemma_1]Using_with_OneTwo.ipynb index d7755f88..9cad6977 100644 --- a/.archive/Gemma/[Gemma_1]Using_with_OneTwo.ipynb +++ b/.archive/Gemma/[Gemma_1]Using_with_OneTwo.ipynb @@ -43,7 +43,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_1]data_parallel_inference_in_jax_tpu.ipynb b/.archive/Gemma/[Gemma_1]data_parallel_inference_in_jax_tpu.ipynb index 205780da..f8873853 100644 --- a/.archive/Gemma/[Gemma_1]data_parallel_inference_in_jax_tpu.ipynb +++ b/.archive/Gemma/[Gemma_1]data_parallel_inference_in_jax_tpu.ipynb @@ -40,7 +40,7 @@ "# Unlocking Gemma's Power: Data-Parallel Inference on TPUs with JAX\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_2]Agentic_AI.ipynb b/.archive/Gemma/[Gemma_2]Agentic_AI.ipynb index a28a6c7e..c223f128 100644 --- a/.archive/Gemma/[Gemma_2]Agentic_AI.ipynb +++ b/.archive/Gemma/[Gemma_2]Agentic_AI.ipynb @@ -41,7 +41,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
\n" ] diff --git a/.archive/Gemma/[Gemma_2]Aligning_DPO.ipynb b/.archive/Gemma/[Gemma_2]Aligning_DPO.ipynb index fc87a651..73511b9e 100644 --- a/.archive/Gemma/[Gemma_2]Aligning_DPO.ipynb +++ b/.archive/Gemma/[Gemma_2]Aligning_DPO.ipynb @@ -43,7 +43,7 @@ "This notebook demonstrates how to align a Gemma-2 model using DPO (Direct Preference Optimization).\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
\n" ] diff --git a/.archive/Gemma/[Gemma_2]Constrained_generation.ipynb b/.archive/Gemma/[Gemma_2]Constrained_generation.ipynb index 8cbf4cb6..0a886eb7 100644 --- a/.archive/Gemma/[Gemma_2]Constrained_generation.ipynb +++ b/.archive/Gemma/[Gemma_2]Constrained_generation.ipynb @@ -52,7 +52,7 @@ "\n", "\n", "\n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_2]Custom_Vocabulary.ipynb b/.archive/Gemma/[Gemma_2]Custom_Vocabulary.ipynb index 4ba4c0d9..ea6642e8 100644 --- a/.archive/Gemma/[Gemma_2]Custom_Vocabulary.ipynb +++ b/.archive/Gemma/[Gemma_2]Custom_Vocabulary.ipynb @@ -58,7 +58,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
\n" ] diff --git a/.archive/Gemma/[Gemma_2]DeFi_Protocol_Development.ipynb b/.archive/Gemma/[Gemma_2]DeFi_Protocol_Development.ipynb index 16d99430..c586033f 100644 --- a/.archive/Gemma/[Gemma_2]DeFi_Protocol_Development.ipynb +++ b/.archive/Gemma/[Gemma_2]DeFi_Protocol_Development.ipynb @@ -41,7 +41,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
\n", "
\n", @@ -907,7 +907,7 @@ "- Get a professional security audit before any mainnet deployment\n", "\n", "---\n", - "*For smart contract security analysis, see the companion notebook: [Smart Contract Auditing](https://colab.research.google.com/github/google-gemini/gemma-cookbook/blob/main/Gemma/[Gemma_2]Smart_Contract_Auditing.ipynb)*" + "*For smart contract security analysis, see the companion notebook: [Smart Contract Auditing](https://colab.research.google.com/github/google-gemma/cookbook/blob/main/.archive/Gemma/[Gemma_2]Smart_Contract_Auditing.ipynb)*" ] } ], diff --git a/.archive/Gemma/[Gemma_2]Deploy_in_Vertex_AI.ipynb b/.archive/Gemma/[Gemma_2]Deploy_in_Vertex_AI.ipynb index b724df72..a1c50b32 100644 --- a/.archive/Gemma/[Gemma_2]Deploy_in_Vertex_AI.ipynb +++ b/.archive/Gemma/[Gemma_2]Deploy_in_Vertex_AI.ipynb @@ -43,7 +43,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_2]Deploy_with_vLLM.ipynb b/.archive/Gemma/[Gemma_2]Deploy_with_vLLM.ipynb index 1e669576..4d8c0f25 100644 --- a/.archive/Gemma/[Gemma_2]Deploy_with_vLLM.ipynb +++ b/.archive/Gemma/[Gemma_2]Deploy_with_vLLM.ipynb @@ -43,7 +43,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_2]Finetune_with_Axolotl.ipynb b/.archive/Gemma/[Gemma_2]Finetune_with_Axolotl.ipynb index 8799713b..25da1c5d 100644 --- a/.archive/Gemma/[Gemma_2]Finetune_with_Axolotl.ipynb +++ b/.archive/Gemma/[Gemma_2]Finetune_with_Axolotl.ipynb @@ -44,7 +44,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_2]Finetune_with_CALM.ipynb b/.archive/Gemma/[Gemma_2]Finetune_with_CALM.ipynb index 66710a04..d6af5aea 100644 --- a/.archive/Gemma/[Gemma_2]Finetune_with_CALM.ipynb +++ b/.archive/Gemma/[Gemma_2]Finetune_with_CALM.ipynb @@ -60,7 +60,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_2]Finetune_with_Function_Calling.ipynb b/.archive/Gemma/[Gemma_2]Finetune_with_Function_Calling.ipynb index cac3b9a5..cd3092d4 100644 --- a/.archive/Gemma/[Gemma_2]Finetune_with_Function_Calling.ipynb +++ b/.archive/Gemma/[Gemma_2]Finetune_with_Function_Calling.ipynb @@ -56,7 +56,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
\n", "

\n", diff --git a/.archive/Gemma/[Gemma_2]Finetune_with_JORA.ipynb b/.archive/Gemma/[Gemma_2]Finetune_with_JORA.ipynb index 41f5be0a..199cd3ec 100644 --- a/.archive/Gemma/[Gemma_2]Finetune_with_JORA.ipynb +++ b/.archive/Gemma/[Gemma_2]Finetune_with_JORA.ipynb @@ -51,7 +51,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
\n", "

\n", diff --git a/.archive/Gemma/[Gemma_2]Finetune_with_LORA.ipynb b/.archive/Gemma/[Gemma_2]Finetune_with_LORA.ipynb index 908f582f..11f0bcb3 100644 --- a/.archive/Gemma/[Gemma_2]Finetune_with_LORA.ipynb +++ b/.archive/Gemma/[Gemma_2]Finetune_with_LORA.ipynb @@ -41,7 +41,7 @@ " \n", "\n", " \n", diff --git a/.archive/Gemma/[Gemma_2]Finetune_with_LitGPT.ipynb b/.archive/Gemma/[Gemma_2]Finetune_with_LitGPT.ipynb index 1921cada..e34e8613 100644 --- a/.archive/Gemma/[Gemma_2]Finetune_with_LitGPT.ipynb +++ b/.archive/Gemma/[Gemma_2]Finetune_with_LitGPT.ipynb @@ -48,7 +48,7 @@ "\n", "
\n", - " \n", + " \n", " \"Google
Open in Colab\n", "
\n", "
\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_2]Finetune_with_Torch_XLA.ipynb b/.archive/Gemma/[Gemma_2]Finetune_with_Torch_XLA.ipynb index 74405ba7..11aec8d0 100644 --- a/.archive/Gemma/[Gemma_2]Finetune_with_Torch_XLA.ipynb +++ b/.archive/Gemma/[Gemma_2]Finetune_with_Torch_XLA.ipynb @@ -57,7 +57,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
\n", "

\n", diff --git a/.archive/Gemma/[Gemma_2]Finetune_with_Unsloth.ipynb b/.archive/Gemma/[Gemma_2]Finetune_with_Unsloth.ipynb index c6a139b8..d2d78412 100644 --- a/.archive/Gemma/[Gemma_2]Finetune_with_Unsloth.ipynb +++ b/.archive/Gemma/[Gemma_2]Finetune_with_Unsloth.ipynb @@ -51,7 +51,7 @@ "\n", "\n", "\n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_2]Function_Calling_with_Groq_Langchain.ipynb b/.archive/Gemma/[Gemma_2]Function_Calling_with_Groq_Langchain.ipynb index 9cfcf3f9..89e7ba16 100644 --- a/.archive/Gemma/[Gemma_2]Function_Calling_with_Groq_Langchain.ipynb +++ b/.archive/Gemma/[Gemma_2]Function_Calling_with_Groq_Langchain.ipynb @@ -15,7 +15,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_2]Game_Design_Brainstorming.ipynb b/.archive/Gemma/[Gemma_2]Game_Design_Brainstorming.ipynb index 294c23fb..24377129 100644 --- a/.archive/Gemma/[Gemma_2]Game_Design_Brainstorming.ipynb +++ b/.archive/Gemma/[Gemma_2]Game_Design_Brainstorming.ipynb @@ -43,7 +43,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_2]Gradio_Chatbot.ipynb b/.archive/Gemma/[Gemma_2]Gradio_Chatbot.ipynb index 2969ad02..cc8c04ef 100644 --- a/.archive/Gemma/[Gemma_2]Gradio_Chatbot.ipynb +++ b/.archive/Gemma/[Gemma_2]Gradio_Chatbot.ipynb @@ -41,7 +41,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_2]Guess_the_word.ipynb b/.archive/Gemma/[Gemma_2]Guess_the_word.ipynb index 980c4799..38b235bb 100644 --- a/.archive/Gemma/[Gemma_2]Guess_the_word.ipynb +++ b/.archive/Gemma/[Gemma_2]Guess_the_word.ipynb @@ -43,7 +43,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_2]Keras_Quickstart.ipynb b/.archive/Gemma/[Gemma_2]Keras_Quickstart.ipynb index fce62ddc..9c57ad26 100644 --- a/.archive/Gemma/[Gemma_2]Keras_Quickstart.ipynb +++ b/.archive/Gemma/[Gemma_2]Keras_Quickstart.ipynb @@ -47,7 +47,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_2]Keras_Quickstart_Chat.ipynb b/.archive/Gemma/[Gemma_2]Keras_Quickstart_Chat.ipynb index 8cf62057..a7ff164c 100644 --- a/.archive/Gemma/[Gemma_2]Keras_Quickstart_Chat.ipynb +++ b/.archive/Gemma/[Gemma_2]Keras_Quickstart_Chat.ipynb @@ -47,7 +47,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_2]LangChain_chaining.ipynb b/.archive/Gemma/[Gemma_2]LangChain_chaining.ipynb index afd499e2..d4320bee 100644 --- a/.archive/Gemma/[Gemma_2]LangChain_chaining.ipynb +++ b/.archive/Gemma/[Gemma_2]LangChain_chaining.ipynb @@ -43,7 +43,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
\n" ] diff --git a/.archive/Gemma/[Gemma_2]Prompt_chaining.ipynb b/.archive/Gemma/[Gemma_2]Prompt_chaining.ipynb index 636bd5fb..7e5be984 100644 --- a/.archive/Gemma/[Gemma_2]Prompt_chaining.ipynb +++ b/.archive/Gemma/[Gemma_2]Prompt_chaining.ipynb @@ -41,7 +41,7 @@ "This notebook demonstrates how to use prompt chaining and iterative generation with Gemma through a story writing example.\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ]
diff --git a/.archive/Gemma/[Gemma_2]RAG_LlamaIndex.ipynb b/.archive/Gemma/[Gemma_2]RAG_LlamaIndex.ipynb
index b85b2528..1f383521 100644
--- a/.archive/Gemma/[Gemma_2]RAG_LlamaIndex.ipynb
+++ b/.archive/Gemma/[Gemma_2]RAG_LlamaIndex.ipynb
@@ -42,7 +42,7 @@
    "This notebook demonstrates how to integrate the Gemma model with the [LlamaIndex](https://www.llamaindex.ai/) library to build a basic RAG application.\n",
    "<table align=\"left\">\n",
    "  <td>
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_2]RAG_PDF_Search_in_multiple_documents_on_Colab.ipynb b/.archive/Gemma/[Gemma_2]RAG_PDF_Search_in_multiple_documents_on_Colab.ipynb index 26e0a08a..02dbab09 100644 --- a/.archive/Gemma/[Gemma_2]RAG_PDF_Search_in_multiple_documents_on_Colab.ipynb +++ b/.archive/Gemma/[Gemma_2]RAG_PDF_Search_in_multiple_documents_on_Colab.ipynb @@ -59,7 +59,7 @@ "source": [ "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_2]Smart_Contract_Auditing.ipynb b/.archive/Gemma/[Gemma_2]Smart_Contract_Auditing.ipynb index 118d1071..f7b5b2ac 100644 --- a/.archive/Gemma/[Gemma_2]Smart_Contract_Auditing.ipynb +++ b/.archive/Gemma/[Gemma_2]Smart_Contract_Auditing.ipynb @@ -41,7 +41,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
\n", "
\n", diff --git a/.archive/Gemma/[Gemma_2]Synthetic_data_generation.ipynb b/.archive/Gemma/[Gemma_2]Synthetic_data_generation.ipynb index d47afcf5..325bbed1 100644 --- a/.archive/Gemma/[Gemma_2]Synthetic_data_generation.ipynb +++ b/.archive/Gemma/[Gemma_2]Synthetic_data_generation.ipynb @@ -41,7 +41,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
\n" ] diff --git a/.archive/Gemma/[Gemma_2]Translator_of_Old_Korean_Literature.ipynb b/.archive/Gemma/[Gemma_2]Translator_of_Old_Korean_Literature.ipynb index 24220503..6d13cf5d 100644 --- a/.archive/Gemma/[Gemma_2]Translator_of_Old_Korean_Literature.ipynb +++ b/.archive/Gemma/[Gemma_2]Translator_of_Old_Korean_Literature.ipynb @@ -52,7 +52,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_2]Using_Gemini_and_Gemma_with_RouteLLM.ipynb b/.archive/Gemma/[Gemma_2]Using_Gemini_and_Gemma_with_RouteLLM.ipynb index 47a33313..c824cca8 100644 --- a/.archive/Gemma/[Gemma_2]Using_Gemini_and_Gemma_with_RouteLLM.ipynb +++ b/.archive/Gemma/[Gemma_2]Using_Gemini_and_Gemma_with_RouteLLM.ipynb @@ -50,7 +50,7 @@ "\n", "\n", "\n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_2]Using_with_Elasticsearch_and_LangChain.ipynb b/.archive/Gemma/[Gemma_2]Using_with_Elasticsearch_and_LangChain.ipynb index 4d0069f9..ecc8f720 100644 --- a/.archive/Gemma/[Gemma_2]Using_with_Elasticsearch_and_LangChain.ipynb +++ b/.archive/Gemma/[Gemma_2]Using_with_Elasticsearch_and_LangChain.ipynb @@ -76,7 +76,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_2]Using_with_Firebase_Genkit_and_Ollama.ipynb b/.archive/Gemma/[Gemma_2]Using_with_Firebase_Genkit_and_Ollama.ipynb index 7f23f189..b42c1e35 100644 --- a/.archive/Gemma/[Gemma_2]Using_with_Firebase_Genkit_and_Ollama.ipynb +++ b/.archive/Gemma/[Gemma_2]Using_with_Firebase_Genkit_and_Ollama.ipynb @@ -67,7 +67,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
\n" ] diff --git a/.archive/Gemma/[Gemma_2]Using_with_LLM_Comparator.ipynb b/.archive/Gemma/[Gemma_2]Using_with_LLM_Comparator.ipynb index 5490811b..5e4e7365 100644 --- a/.archive/Gemma/[Gemma_2]Using_with_LLM_Comparator.ipynb +++ b/.archive/Gemma/[Gemma_2]Using_with_LLM_Comparator.ipynb @@ -57,7 +57,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ]
diff --git a/.archive/Gemma/[Gemma_2]Using_with_LangChain.ipynb b/.archive/Gemma/[Gemma_2]Using_with_LangChain.ipynb
index 9342b046..e4a36464 100644
--- a/.archive/Gemma/[Gemma_2]Using_with_LangChain.ipynb
+++ b/.archive/Gemma/[Gemma_2]Using_with_LangChain.ipynb
@@ -41,7 +41,7 @@
    "This notebook demonstrates how to use the Gemma (2B) model with the LangChain library.\n",
    "<table align=\"left\">\n",
    "  <td>
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_2]Using_with_Langfun_and_LlamaCpp.ipynb b/.archive/Gemma/[Gemma_2]Using_with_Langfun_and_LlamaCpp.ipynb index 06f2797c..5aa9ad32 100644 --- a/.archive/Gemma/[Gemma_2]Using_with_Langfun_and_LlamaCpp.ipynb +++ b/.archive/Gemma/[Gemma_2]Using_with_Langfun_and_LlamaCpp.ipynb @@ -54,7 +54,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_2]Using_with_Langfun_and_LlamaCpp_Python_Bindings.ipynb b/.archive/Gemma/[Gemma_2]Using_with_Langfun_and_LlamaCpp_Python_Bindings.ipynb index 09794d43..6b610543 100644 --- a/.archive/Gemma/[Gemma_2]Using_with_Langfun_and_LlamaCpp_Python_Bindings.ipynb +++ b/.archive/Gemma/[Gemma_2]Using_with_Langfun_and_LlamaCpp_Python_Bindings.ipynb @@ -54,7 +54,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
\n" ] diff --git a/.archive/Gemma/[Gemma_2]Using_with_LlamaCpp.ipynb b/.archive/Gemma/[Gemma_2]Using_with_LlamaCpp.ipynb index 3b79173b..3212fccb 100644 --- a/.archive/Gemma/[Gemma_2]Using_with_LlamaCpp.ipynb +++ b/.archive/Gemma/[Gemma_2]Using_with_LlamaCpp.ipynb @@ -51,7 +51,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_2]Using_with_Llamafile.ipynb b/.archive/Gemma/[Gemma_2]Using_with_Llamafile.ipynb index 59e796ed..37f8c2e4 100644 --- a/.archive/Gemma/[Gemma_2]Using_with_Llamafile.ipynb +++ b/.archive/Gemma/[Gemma_2]Using_with_Llamafile.ipynb @@ -49,7 +49,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_2]Using_with_LocalGemma.ipynb b/.archive/Gemma/[Gemma_2]Using_with_LocalGemma.ipynb index 7488953c..44b688b9 100644 --- a/.archive/Gemma/[Gemma_2]Using_with_LocalGemma.ipynb +++ b/.archive/Gemma/[Gemma_2]Using_with_LocalGemma.ipynb @@ -48,7 +48,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_2]Using_with_Mesop.ipynb b/.archive/Gemma/[Gemma_2]Using_with_Mesop.ipynb index a73f9060..a617f766 100644 --- a/.archive/Gemma/[Gemma_2]Using_with_Mesop.ipynb +++ b/.archive/Gemma/[Gemma_2]Using_with_Mesop.ipynb @@ -43,7 +43,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_2]Using_with_Ollama_Python.ipynb b/.archive/Gemma/[Gemma_2]Using_with_Ollama_Python.ipynb index f67f6a8d..ee770e9c 100644 --- a/.archive/Gemma/[Gemma_2]Using_with_Ollama_Python.ipynb +++ b/.archive/Gemma/[Gemma_2]Using_with_Ollama_Python.ipynb @@ -48,7 +48,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ]
diff --git a/.archive/Gemma/[Gemma_2]Using_with_SGLang.ipynb b/.archive/Gemma/[Gemma_2]Using_with_SGLang.ipynb
index a362b835..8a25f665 100644
--- a/.archive/Gemma/[Gemma_2]Using_with_SGLang.ipynb
+++ b/.archive/Gemma/[Gemma_2]Using_with_SGLang.ipynb
@@ -47,7 +47,7 @@
    "In this notebook, you will learn how to prompt the Gemma 2 model in various ways using the **SGLang** HTTP server, backend runtime, and frontend language in a Google Colab environment.\n",
    "<table align=\"left\">\n",
    "  <td>
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_2]Using_with_Xinference.ipynb b/.archive/Gemma/[Gemma_2]Using_with_Xinference.ipynb index 2651a5bb..871b1ec7 100644 --- a/.archive/Gemma/[Gemma_2]Using_with_Xinference.ipynb +++ b/.archive/Gemma/[Gemma_2]Using_with_Xinference.ipynb @@ -61,7 +61,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_2]Using_with_mistral_rs.ipynb b/.archive/Gemma/[Gemma_2]Using_with_mistral_rs.ipynb index a06507c8..b29b76e2 100644 --- a/.archive/Gemma/[Gemma_2]Using_with_mistral_rs.ipynb +++ b/.archive/Gemma/[Gemma_2]Using_with_mistral_rs.ipynb @@ -47,7 +47,7 @@ "In this notebook, you will learn how to prompt the Gemma 2 model in various ways using the **mistral.rs** Python APIs in a Google Colab environment.\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ]
diff --git a/.archive/Gemma/[Gemma_2]evaluation.ipynb b/.archive/Gemma/[Gemma_2]evaluation.ipynb
index 791ea787..43950a9f 100644
--- a/.archive/Gemma/[Gemma_2]evaluation.ipynb
+++ b/.archive/Gemma/[Gemma_2]evaluation.ipynb
@@ -42,7 +42,7 @@
    "This notebook demonstrates using EleutherAI's Language Model Evaluation Harness to run a performance benchmark on Gemma 2 2B, specifically using a subset of MMLU.\n",
    "<table align=\"left\">\n",
    "  <td>
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_2]for_Japan_using_Transformers_and_PyTorch.ipynb b/.archive/Gemma/[Gemma_2]for_Japan_using_Transformers_and_PyTorch.ipynb index 4a799e50..db38111d 100644 --- a/.archive/Gemma/[Gemma_2]for_Japan_using_Transformers_and_PyTorch.ipynb +++ b/.archive/Gemma/[Gemma_2]for_Japan_using_Transformers_and_PyTorch.ipynb @@ -45,7 +45,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_2]on_Groq.ipynb b/.archive/Gemma/[Gemma_2]on_Groq.ipynb index 7abe5f0c..2ff65fa1 100644 --- a/.archive/Gemma/[Gemma_2]on_Groq.ipynb +++ b/.archive/Gemma/[Gemma_2]on_Groq.ipynb @@ -43,7 +43,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_3]Activation_Hacking.ipynb b/.archive/Gemma/[Gemma_3]Activation_Hacking.ipynb index 8f21a54f..73c039b3 100644 --- a/.archive/Gemma/[Gemma_3]Activation_Hacking.ipynb +++ b/.archive/Gemma/[Gemma_3]Activation_Hacking.ipynb @@ -53,7 +53,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_3]Chess.ipynb b/.archive/Gemma/[Gemma_3]Chess.ipynb index b2d6391a..b138afe5 100644 --- a/.archive/Gemma/[Gemma_3]Chess.ipynb +++ b/.archive/Gemma/[Gemma_3]Chess.ipynb @@ -10,7 +10,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_3]Function_Calling_Routing_and_Monitoring_using_Gemma_Google_Genai.ipynb b/.archive/Gemma/[Gemma_3]Function_Calling_Routing_and_Monitoring_using_Gemma_Google_Genai.ipynb index af4749dd..aa74c663 100644 --- a/.archive/Gemma/[Gemma_3]Function_Calling_Routing_and_Monitoring_using_Gemma_Google_Genai.ipynb +++ b/.archive/Gemma/[Gemma_3]Function_Calling_Routing_and_Monitoring_using_Gemma_Google_Genai.ipynb @@ -41,7 +41,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_3]Function_Calling_with_HF.ipynb b/.archive/Gemma/[Gemma_3]Function_Calling_with_HF.ipynb index 4138994e..66d7532e 100644 --- a/.archive/Gemma/[Gemma_3]Function_Calling_with_HF.ipynb +++ b/.archive/Gemma/[Gemma_3]Function_Calling_with_HF.ipynb @@ -43,7 +43,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_3]Function_Calling_with_HF_document_summarizer.ipynb b/.archive/Gemma/[Gemma_3]Function_Calling_with_HF_document_summarizer.ipynb index aa390105..d7d7ef07 100644 --- a/.archive/Gemma/[Gemma_3]Function_Calling_with_HF_document_summarizer.ipynb +++ b/.archive/Gemma/[Gemma_3]Function_Calling_with_HF_document_summarizer.ipynb @@ -43,7 +43,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_3]Gradio_LlamaCpp_Chatbot.ipynb b/.archive/Gemma/[Gemma_3]Gradio_LlamaCpp_Chatbot.ipynb index 4fddce27..082a5c87 100644 --- a/.archive/Gemma/[Gemma_3]Gradio_LlamaCpp_Chatbot.ipynb +++ b/.archive/Gemma/[Gemma_3]Gradio_LlamaCpp_Chatbot.ipynb @@ -48,7 +48,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_3]In-context_Learning.ipynb b/.archive/Gemma/[Gemma_3]In-context_Learning.ipynb index 54301f6d..5e36da07 100644 --- a/.archive/Gemma/[Gemma_3]In-context_Learning.ipynb +++ b/.archive/Gemma/[Gemma_3]In-context_Learning.ipynb @@ -47,11 +47,11 @@ "\n", "The larger context windows in models like Gemma 3 are particularly beneficial for in-context learning, as they allow for more examples and more complex instructions to be included within a single prompt, potentially leading to improved performance on a wider range of tasks without explicit fine-tuning.\n", "\n", - "In this notebook, we'll apply in-context learning to replicate the result of our previous \"[Translator of Old Korean Literature](https://colab.research.google.com/github/google-gemini/gemma-cookbook/blob/main/Gemma/[Gemma_2]Translator_of_Old_Korean_Literature.ipynb)\" fine-tuning example.\n", + "In this notebook, we'll apply in-context learning to replicate the result of our previous \"[Translator of Old Korean Literature](https://colab.research.google.com/github/google-gemma/cookbook/blob/main/.archive/Gemma/[Gemma_2]Translator_of_Old_Korean_Literature.ipynb)\" fine-tuning example.\n", "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
\n" ] diff --git a/.archive/Gemma/[Gemma_3]Inference_images_and_videos.ipynb b/.archive/Gemma/[Gemma_3]Inference_images_and_videos.ipynb index 54c1e275..86214082 100644 --- a/.archive/Gemma/[Gemma_3]Inference_images_and_videos.ipynb +++ b/.archive/Gemma/[Gemma_3]Inference_images_and_videos.ipynb @@ -53,7 +53,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_3]Local_Agentic_RAG.ipynb b/.archive/Gemma/[Gemma_3]Local_Agentic_RAG.ipynb index 3e0344bb..f80307b3 100644 --- a/.archive/Gemma/[Gemma_3]Local_Agentic_RAG.ipynb +++ b/.archive/Gemma/[Gemma_3]Local_Agentic_RAG.ipynb @@ -46,7 +46,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
\n", "\n", diff --git a/.archive/Gemma/[Gemma_3]Meme_Generator.ipynb b/.archive/Gemma/[Gemma_3]Meme_Generator.ipynb index cd378d78..ccf86440 100644 --- a/.archive/Gemma/[Gemma_3]Meme_Generator.ipynb +++ b/.archive/Gemma/[Gemma_3]Meme_Generator.ipynb @@ -41,7 +41,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
\n" ] diff --git a/.archive/Gemma/[Gemma_3]RAG_with_EmbeddingGemma.ipynb b/.archive/Gemma/[Gemma_3]RAG_with_EmbeddingGemma.ipynb index b85eb062..3e409ad4 100644 --- a/.archive/Gemma/[Gemma_3]RAG_with_EmbeddingGemma.ipynb +++ b/.archive/Gemma/[Gemma_3]RAG_with_EmbeddingGemma.ipynb @@ -41,7 +41,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_3]Speculative_Decoding.ipynb b/.archive/Gemma/[Gemma_3]Speculative_Decoding.ipynb index 9ad96a4a..43e35630 100644 --- a/.archive/Gemma/[Gemma_3]Speculative_Decoding.ipynb +++ b/.archive/Gemma/[Gemma_3]Speculative_Decoding.ipynb @@ -49,7 +49,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_3]Using_with_Ollama_Python_Inference_with_Images.ipynb b/.archive/Gemma/[Gemma_3]Using_with_Ollama_Python_Inference_with_Images.ipynb index 898d288d..f70b9199 100644 --- a/.archive/Gemma/[Gemma_3]Using_with_Ollama_Python_Inference_with_Images.ipynb +++ b/.archive/Gemma/[Gemma_3]Using_with_Ollama_Python_Inference_with_Images.ipynb @@ -48,7 +48,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_3]Using_with_Transformersjs.ipynb b/.archive/Gemma/[Gemma_3]Using_with_Transformersjs.ipynb index 3b0eae7b..714dfda1 100644 --- a/.archive/Gemma/[Gemma_3]Using_with_Transformersjs.ipynb +++ b/.archive/Gemma/[Gemma_3]Using_with_Transformersjs.ipynb @@ -48,7 +48,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_3]Visual_Document_Extraction_to_JSON.ipynb b/.archive/Gemma/[Gemma_3]Visual_Document_Extraction_to_JSON.ipynb index 62cf2565..12e34798 100644 --- a/.archive/Gemma/[Gemma_3]Visual_Document_Extraction_to_JSON.ipynb +++ b/.archive/Gemma/[Gemma_3]Visual_Document_Extraction_to_JSON.ipynb @@ -48,7 +48,7 @@ "The code is intended to be readable, extensible, and suitable for open-source\n", "cookbook or reference implementations.\n", "\n", - "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/google-gemini/gemma-cookbook/blob/main/Gemma/[Gemma_3]Visual_Document_Extraction.ipynb)" + "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/google-gemma/cookbook/blob/main/.archive/Gemma/[Gemma_3]Visual_Document_Extraction.ipynb)" ] }, { diff --git a/.archive/Gemma/[Gemma_3n]Audio_understanding_with_HF.ipynb b/.archive/Gemma/[Gemma_3n]Audio_understanding_with_HF.ipynb index c4e9388a..ea798cdc 100644 --- a/.archive/Gemma/[Gemma_3n]Audio_understanding_with_HF.ipynb +++ b/.archive/Gemma/[Gemma_3n]Audio_understanding_with_HF.ipynb @@ -41,7 +41,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_3n]Finetuned_LoRA_Unsloth_on_Mental_Health_dataset.ipynb b/.archive/Gemma/[Gemma_3n]Finetuned_LoRA_Unsloth_on_Mental_Health_dataset.ipynb index 0521f53e..f9adc0ae 100644 --- a/.archive/Gemma/[Gemma_3n]Finetuned_LoRA_Unsloth_on_Mental_Health_dataset.ipynb +++ b/.archive/Gemma/[Gemma_3n]Finetuned_LoRA_Unsloth_on_Mental_Health_dataset.ipynb @@ -39,7 +39,7 @@ "source": [ " \n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
\n" ] diff --git a/.archive/Gemma/[Gemma_3n]MatFormer_Lab.ipynb b/.archive/Gemma/[Gemma_3n]MatFormer_Lab.ipynb index 66a9bc1e..a3ba4ac3 100644 --- a/.archive/Gemma/[Gemma_3n]MatFormer_Lab.ipynb +++ b/.archive/Gemma/[Gemma_3n]MatFormer_Lab.ipynb @@ -50,7 +50,7 @@ "source": [ " \n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
\n" ] diff --git a/.archive/Gemma/[Gemma_3n]Multimodal_understanding_with_HF.ipynb b/.archive/Gemma/[Gemma_3n]Multimodal_understanding_with_HF.ipynb index 6bff096a..d165bb49 100644 --- a/.archive/Gemma/[Gemma_3n]Multimodal_understanding_with_HF.ipynb +++ b/.archive/Gemma/[Gemma_3n]Multimodal_understanding_with_HF.ipynb @@ -41,7 +41,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Gemma/[Gemma_3n]Using_with_Transformersjs.ipynb b/.archive/Gemma/[Gemma_3n]Using_with_Transformersjs.ipynb index b1b7c5e8..3ab988dc 100644 --- a/.archive/Gemma/[Gemma_3n]Using_with_Transformersjs.ipynb +++ b/.archive/Gemma/[Gemma_3n]Using_with_Transformersjs.ipynb @@ -48,7 +48,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/PaliGemma/[PaliGemma_1]Common_use_cases.ipynb b/.archive/PaliGemma/[PaliGemma_1]Common_use_cases.ipynb index 08eb6a31..dacb0157 100644 --- a/.archive/PaliGemma/[PaliGemma_1]Common_use_cases.ipynb +++ b/.archive/PaliGemma/[PaliGemma_1]Common_use_cases.ipynb @@ -41,7 +41,7 @@ "This notebook demonstrates the basic task that Gemma can solve by using the right prompting.\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/PaliGemma/[PaliGemma_1]Finetune_with_Keras.ipynb b/.archive/PaliGemma/[PaliGemma_1]Finetune_with_Keras.ipynb index 7e719b16..6bdaf0ed 100644 --- a/.archive/PaliGemma/[PaliGemma_1]Finetune_with_Keras.ipynb +++ b/.archive/PaliGemma/[PaliGemma_1]Finetune_with_Keras.ipynb @@ -43,7 +43,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/PaliGemma/[PaliGemma_1]Finetune_with_image_captioning.ipynb b/.archive/PaliGemma/[PaliGemma_1]Finetune_with_image_captioning.ipynb index dec95e1e..360ef7dd 100644 --- a/.archive/PaliGemma/[PaliGemma_1]Finetune_with_image_captioning.ipynb +++ b/.archive/PaliGemma/[PaliGemma_1]Finetune_with_image_captioning.ipynb @@ -48,7 +48,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/PaliGemma/[PaliGemma_1]Finetune_with_image_description.ipynb b/.archive/PaliGemma/[PaliGemma_1]Finetune_with_image_description.ipynb index dee4c08e..c450bd7f 100644 --- a/.archive/PaliGemma/[PaliGemma_1]Finetune_with_image_description.ipynb +++ b/.archive/PaliGemma/[PaliGemma_1]Finetune_with_image_description.ipynb @@ -50,7 +50,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/PaliGemma/[PaliGemma_1]Finetune_with_object_detection.ipynb b/.archive/PaliGemma/[PaliGemma_1]Finetune_with_object_detection.ipynb index 14c44bca..f6ca4026 100644 --- a/.archive/PaliGemma/[PaliGemma_1]Finetune_with_object_detection.ipynb +++ b/.archive/PaliGemma/[PaliGemma_1]Finetune_with_object_detection.ipynb @@ -48,7 +48,7 @@ "source": [ "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/PaliGemma/[PaliGemma_1]Image_captioning.ipynb b/.archive/PaliGemma/[PaliGemma_1]Image_captioning.ipynb index 0a30b7ea..ab259c56 100644 --- a/.archive/PaliGemma/[PaliGemma_1]Image_captioning.ipynb +++ b/.archive/PaliGemma/[PaliGemma_1]Image_captioning.ipynb @@ -45,7 +45,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
\n" ] diff --git a/.archive/PaliGemma/[PaliGemma_1]Referring_expression_segmentation_in_images.ipynb b/.archive/PaliGemma/[PaliGemma_1]Referring_expression_segmentation_in_images.ipynb index 4c27c9e7..0c44b4ac 100644 --- a/.archive/PaliGemma/[PaliGemma_1]Referring_expression_segmentation_in_images.ipynb +++ b/.archive/PaliGemma/[PaliGemma_1]Referring_expression_segmentation_in_images.ipynb @@ -64,7 +64,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/PaliGemma/[PaliGemma_1]Referring_expression_segmentation_in_videos.ipynb b/.archive/PaliGemma/[PaliGemma_1]Referring_expression_segmentation_in_videos.ipynb index 8abad508..10bd4913 100644 --- a/.archive/PaliGemma/[PaliGemma_1]Referring_expression_segmentation_in_videos.ipynb +++ b/.archive/PaliGemma/[PaliGemma_1]Referring_expression_segmentation_in_videos.ipynb @@ -64,7 +64,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/PaliGemma/[PaliGemma_1]Using_with_Mesop.ipynb b/.archive/PaliGemma/[PaliGemma_1]Using_with_Mesop.ipynb index 26c345a1..c4114326 100644 --- a/.archive/PaliGemma/[PaliGemma_1]Using_with_Mesop.ipynb +++ b/.archive/PaliGemma/[PaliGemma_1]Using_with_Mesop.ipynb @@ -41,7 +41,7 @@ "This notebook demonstrates how to use [PaliGemma](https://ai.google.dev/gemma/docs/paligemma) models with [Mesop](https://google.github.io/mesop/) to create a simple GUI application.\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/PaliGemma/[PaliGemma_1]Zero_shot_object_detection_in_images.ipynb b/.archive/PaliGemma/[PaliGemma_1]Zero_shot_object_detection_in_images.ipynb index f66c19b8..c28ede79 100644 --- a/.archive/PaliGemma/[PaliGemma_1]Zero_shot_object_detection_in_images.ipynb +++ b/.archive/PaliGemma/[PaliGemma_1]Zero_shot_object_detection_in_images.ipynb @@ -64,7 +64,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/PaliGemma/[PaliGemma_1]Zero_shot_object_detection_in_videos.ipynb b/.archive/PaliGemma/[PaliGemma_1]Zero_shot_object_detection_in_videos.ipynb index 20f83246..803d992c 100644 --- a/.archive/PaliGemma/[PaliGemma_1]Zero_shot_object_detection_in_videos.ipynb +++ b/.archive/PaliGemma/[PaliGemma_1]Zero_shot_object_detection_in_videos.ipynb @@ -64,7 +64,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/PaliGemma/[PaliGemma_2]Convert_PaliGemma2_to_ONNX.ipynb b/.archive/PaliGemma/[PaliGemma_2]Convert_PaliGemma2_to_ONNX.ipynb index 1d4c844c..75e0d2db 100644 --- a/.archive/PaliGemma/[PaliGemma_2]Convert_PaliGemma2_to_ONNX.ipynb +++ b/.archive/PaliGemma/[PaliGemma_2]Convert_PaliGemma2_to_ONNX.ipynb @@ -66,7 +66,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
\n" ] diff --git a/.archive/PaliGemma/[PaliGemma_2]Finetune_with_JAX.ipynb b/.archive/PaliGemma/[PaliGemma_2]Finetune_with_JAX.ipynb index d7239dff..8acfb823 100644 --- a/.archive/PaliGemma/[PaliGemma_2]Finetune_with_JAX.ipynb +++ b/.archive/PaliGemma/[PaliGemma_2]Finetune_with_JAX.ipynb @@ -44,7 +44,7 @@ "\n", "\n", "\n", "
\n", - "Run in Google Colab\n", + "Run in Google Colab\n", "\n", "View source on GitHub\n", diff --git a/.archive/PaliGemma/[PaliGemma_2]Finetune_with_Keras.ipynb b/.archive/PaliGemma/[PaliGemma_2]Finetune_with_Keras.ipynb index 5c581157..c61e7b95 100644 --- a/.archive/PaliGemma/[PaliGemma_2]Finetune_with_Keras.ipynb +++ b/.archive/PaliGemma/[PaliGemma_2]Finetune_with_Keras.ipynb @@ -43,7 +43,7 @@ "\n", "\n", "\n", "
\n", - "Run in Google Colab\n", + "Run in Google Colab\n", "\n", "View source on GitHub\n", diff --git a/.archive/PaliGemma/[PaliGemma_2]Inference_PaliGemma2_with_Transformers_js.ipynb b/.archive/PaliGemma/[PaliGemma_2]Inference_PaliGemma2_with_Transformers_js.ipynb index c86c9b98..3dac7c6c 100644 --- a/.archive/PaliGemma/[PaliGemma_2]Inference_PaliGemma2_with_Transformers_js.ipynb +++ b/.archive/PaliGemma/[PaliGemma_2]Inference_PaliGemma2_with_Transformers_js.ipynb @@ -57,7 +57,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
\n" ] diff --git a/.archive/PaliGemma/[PaliGemma_2]Keras_Quickstart.ipynb b/.archive/PaliGemma/[PaliGemma_2]Keras_Quickstart.ipynb index fa49bdd2..6262406c 100644 --- a/.archive/PaliGemma/[PaliGemma_2]Keras_Quickstart.ipynb +++ b/.archive/PaliGemma/[PaliGemma_2]Keras_Quickstart.ipynb @@ -45,7 +45,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", " \n", " View source on GitHub\n", diff --git a/.archive/PaliGemma/[PaliGemma_2]Using_with_Transformersjs.ipynb b/.archive/PaliGemma/[PaliGemma_2]Using_with_Transformersjs.ipynb index c6c176f3..bae31a6b 100644 --- a/.archive/PaliGemma/[PaliGemma_2]Using_with_Transformersjs.ipynb +++ b/.archive/PaliGemma/[PaliGemma_2]Using_with_Transformersjs.ipynb @@ -48,7 +48,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Workshops/Workshop_How_to_Fine_tuning_Gemma.ipynb b/.archive/Workshops/Workshop_How_to_Fine_tuning_Gemma.ipynb index b6dbe5b4..7fed1f19 100644 --- a/.archive/Workshops/Workshop_How_to_Fine_tuning_Gemma.ipynb +++ b/.archive/Workshops/Workshop_How_to_Fine_tuning_Gemma.ipynb @@ -43,7 +43,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Workshops/Workshop_How_to_Fine_tuning_Gemma_Transformers_Edition.ipynb b/.archive/Workshops/Workshop_How_to_Fine_tuning_Gemma_Transformers_Edition.ipynb index ca44701a..d9420dba 100644 --- a/.archive/Workshops/Workshop_How_to_Fine_tuning_Gemma_Transformers_Edition.ipynb +++ b/.archive/Workshops/Workshop_How_to_Fine_tuning_Gemma_Transformers_Edition.ipynb @@ -43,7 +43,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Workshops/[Gemma_1]Self_extend.ipynb b/.archive/Workshops/[Gemma_1]Self_extend.ipynb index 9e817b73..cc9c493d 100644 --- a/.archive/Workshops/[Gemma_1]Self_extend.ipynb +++ b/.archive/Workshops/[Gemma_1]Self_extend.ipynb @@ -43,7 +43,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ] diff --git a/.archive/Workshops/[Gemma_2]control_vectors.ipynb b/.archive/Workshops/[Gemma_2]control_vectors.ipynb index 15355bbe..e6091b35 100644 --- a/.archive/Workshops/[Gemma_2]control_vectors.ipynb +++ b/.archive/Workshops/[Gemma_2]control_vectors.ipynb @@ -46,7 +46,7 @@ "\n", "\n", " \n", "
\n", - " Run in Google Colab\n", + " Run in Google Colab\n", "
" ]