.openai.azure.com\"\n",
+ "\n",
+ "chat_model = os.getenv(\"CHAT_MODEL_NAME\")\n",
+ "\n",
+ "client = AzureOpenAI(\n",
+ " azure_endpoint=RESOURCE_ENDPOINT,\n",
+ " azure_ad_token_provider=token_provider,\n",
+ " api_version=os.getenv(\"OPENAI_API_VERSION\")\n",
+ ")\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "source": [
+ "#### 1.1.2 Parameter Explanation\n",
+ "We specify the hyperparameters for the Azure OpenAI models within the helper functions. Users can tune the parameters according to different needs.\n",
+ "\n",
+ "\n",
+ "###### **Temperature**\n",
+ "Temperature ranges from 0 to 2.\n",
+ "Here is a quick breakdown of how it works:\n",
+ "- Low temperature (0 to 0.3): More focused, coherent, and conservative outputs.\n",
+ "- Medium temperature (0.3 to 0.7): Balanced creativity and coherence.\n",
+ "- High temperature (> 0.7): Highly creative and diverse, but potentially less coherent.\n",
+ "\n",
+ "###### **Top_p**\n",
+ "Sets the probability mass cutoff for token sampling, affecting the breadth of options the AI considers. \n",
+ "Higher values lead to more randomness, while lower values result in more focused outputs.\n",
+ "\n",
+ "The \u201ctop_p\u201d parameter acts as a filter on how many candidate tokens the model samples from when predicting the next word. It is a probability-mass cutoff, not a fixed word count: with top_p set to 0.5, the model samples only from the smallest set of tokens whose cumulative probability reaches 50%; with top_p set to 0.9, that set expands to cover 90% of the probability mass, admitting more varied choices.\n",
+ "\n",
+ "Because \"top_p\" and \"temperature\" serve a similar purpose as sampling hyperparameters, it is usually best to tune only one of them rather than both.\n",
+ "\n",
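+ "For illustration, here is a sketch of how these sampling parameters are passed to the API (assuming the `client` and `chat_model` created in the setup cell above):\n",
+ "\n",
+ "```python\n",
+ "response = client.chat.completions.create(\n",
+ "    model=chat_model,\n",
+ "    messages=[{\"role\": \"user\", \"content\": \"tell me a joke\"}],\n",
+ "    temperature=0.5,  # medium: balanced creativity and coherence\n",
+ "    top_p=1.0         # keep top_p at its default while tuning temperature\n",
+ ")\n",
+ "```\n",
+ "\n",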
+ "###### **Max_tokens**\n",
+ "Max_tokens determines the maximum length of the generated text. By setting a limit, you control how much text the model returns and keep answers from running too long.\n",
+ "\n",
+ "###### **Frequency_penalty**\n",
+ "Frequency_penalty keeps the generated text varied by penalizing tokens that have already appeared in the response.\n",
+ "\n",
+ "It ranges from -2.0 to 2.0, with higher values resulting in more diverse output.\n",
+ "\n",
+ "Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "source": [
+ "#### 1.1.3 Helper Function\n",
+ "For this hack, make sure you use the latest model as instructed by your coach. If you are doing this independently, feel free to use one of the latest Azure OpenAI GPT models.\n",
+ "\n",
+ "This helper function will make it easier to use prompts and look at the generated outputs.\n",
+ "\n",
+ "**get_chat_completion** helps create the OpenAI response using the chat model of your choice.\n",
+ "\n",
+ "**get_completion_from_messages** helps create the OpenAI response using the chat model of your choice, enabling chat history.\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "gather": {
+ "logged": 1686938673045
+ },
+ "jupyter": {
+ "outputs_hidden": false,
+ "source_hidden": false
+ },
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "outputs": [],
+ "source": [
+ "def get_chat_completion(prompt, model=chat_model):\n",
+ "    messages = [{\"role\": \"user\", \"content\": prompt}]\n",
+ "    response = client.chat.completions.create(\n",
+ "        model=model,\n",
+ "        messages=messages,\n",
+ "        temperature=0,  # this is the degree of randomness of the model's output\n",
+ "        max_tokens=200,\n",
+ "        top_p=1.0\n",
+ "    )\n",
+ "    return response.choices[0].message.content"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "gather": {
+ "logged": 1686938550664
+ },
+ "jupyter": {
+ "outputs_hidden": false,
+ "source_hidden": false
+ },
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "outputs": [],
+ "source": [
+ "def get_completion_from_messages(messages, model=chat_model, temperature=0):\n",
+ "    response = client.chat.completions.create(\n",
+ "        model=model,\n",
+ "        messages=messages,\n",
+ "        temperature=temperature  # this is the degree of randomness of the model's output\n",
+ "    )\n",
+ "\n",
+ "    return response.choices[0].message.content\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "source": [
+ "#### Try out helper functions"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "gather": {
+ "logged": 1686938676516
+ },
+ "jupyter": {
+ "outputs_hidden": false,
+ "source_hidden": false
+ },
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "outputs": [],
+ "source": [
+ "prompt = \"\"\"\n",
+ "tell me a joke.\n",
+ "\"\"\"\n",
+ "response = get_chat_completion(prompt)\n",
+ "print(response)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "gather": {
+ "logged": 1686938564787
+ },
+ "jupyter": {
+ "outputs_hidden": false,
+ "source_hidden": false
+ },
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "outputs": [],
+ "source": [
+ "messages = [ \n",
+ " {'role':'user', 'content':'tell me a joke'}, \n",
+ " {'role':'assistant', 'content':'Why did the chicken cross the road'}, \n",
+ " {'role':'user', 'content':'I don\\'t know'}\n",
+ "]\n",
+ "response = get_completion_from_messages(messages, temperature=1)\n",
+ "print(response)"
+ ]
+ },
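+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As a pattern for the tasks below, a variant helper only needs to change the parameter it passes to the API. For example, a sketch that exposes `top_p` (assuming the `client` and `chat_model` defined earlier; the function name is just a suggestion):\n",
+ "\n",
+ "```python\n",
+ "def get_chat_completion_top_p(prompt, top_p=0.5):\n",
+ "    messages = [{\"role\": \"user\", \"content\": prompt}]\n",
+ "    response = client.chat.completions.create(\n",
+ "        model=chat_model,\n",
+ "        messages=messages,\n",
+ "        temperature=1,\n",
+ "        top_p=top_p\n",
+ "    )\n",
+ "    return response.choices[0].message.content\n",
+ "```"
+ ]
+ },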
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "source": [
+ "#### Student Tasks (open questions):\n",
+ "1. Create a completion function with a higher temperature between (0, 1).\n",
+ "2. Create a completion function with a lower max_tokens for a shorter response.\n",
+ "3. Create completion functions with 2 different frequency_penalty values between (0, 2).\n",
+ "\n",
+ "Try the completion functions you create on the previous example and compare the results you get."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "jupyter": {
+ "outputs_hidden": false,
+ "source_hidden": false
+ },
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "outputs": [],
+ "source": [
+ "# Try out a completion function for a higher temperature between (0,1)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "jupyter": {
+ "outputs_hidden": false,
+ "source_hidden": false
+ },
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "outputs": [],
+ "source": [
+ "# Try out a completion function for a lower max_token for shorter response"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "jupyter": {
+ "outputs_hidden": false,
+ "source_hidden": false
+ },
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "outputs": [],
+ "source": [
+ "# Try out completion functions with 2 different frequency_penalty values between (0,2)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "source": [
+ "### 1.2 System Message Engineering\n",
+ "Users can shape the tone of the model's response by adjusting the system message."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "source": [
+ "#### 1.2.1 Change of Tone"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "gather": {
+ "logged": 1685476487849
+ },
+ "jupyter": {
+ "outputs_hidden": false,
+ "source_hidden": false
+ },
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "outputs": [],
+ "source": [
+ "messages = [ \n",
+ " {'role':'assistant', 'content':'How can I help you?'},\n",
+ " {'role':'user', 'content':'tell me a joke'}\n",
+ "]\n",
+ "response = get_completion_from_messages(messages, temperature=1)\n",
+ "print(response)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "gather": {
+ "logged": 1685059077359
+ },
+ "jupyter": {
+ "outputs_hidden": false,
+ "source_hidden": false
+ },
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "outputs": [],
+ "source": [
+ "messages = [ \n",
+ " {'role':'system', 'content':'You are an assistant that speaks like Shakespeare.'}, \n",
+ " {'role':'assistant', 'content':'How can I help you?'},\n",
+ " {'role':'user', 'content':'tell me a joke'}\n",
+ "]\n",
+ "response = get_completion_from_messages(messages, temperature=1)\n",
+ "print(response)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "source": [
+ "#### Student Task (open question):\n",
+ "Make the assistant tell the joke in the tone of your favorite character by editing the system message.\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "gather": {
+ "logged": 1686939630607
+ },
+ "jupyter": {
+ "outputs_hidden": false,
+ "source_hidden": false
+ },
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "outputs": [],
+ "source": [
+ "# Make the assistant tell the joke in the tone of your favorite character by editing the system message."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "source": [
+ "#### 1.2.2 Remind the Company Name"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "gather": {
+ "logged": 1685059130793
+ },
+ "jupyter": {
+ "outputs_hidden": false,
+ "source_hidden": false
+ },
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "outputs": [],
+ "source": [
+ "messages = [ \n",
+ "{'role':'system', 'content':'You are a friendly chatbot.'}, \n",
+ "{'role':'user', 'content':'Hi, my name is Mel.'},\n",
+ "{'role':'assistant', 'content': \"Hi! It's nice to meet you. \\\n",
+ "Is there anything I can help you with today?\"}, \n",
+ "{'role':'user', 'content':'Yes, can you remind me which company I work for?'} ]\n",
+ "response = get_completion_from_messages(messages, temperature=1)\n",
+ "print(response)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "source": [
+ "#### Student Task: \n",
+ "Make the assistant give a reliable reminder of the company name by giving context in the system message."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "gather": {
+ "logged": 1686939642577
+ },
+ "jupyter": {
+ "outputs_hidden": false,
+ "source_hidden": false
+ },
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "outputs": [],
+ "source": [
+ "# Make the assistant give a reliable reminder of the company name by giving context in the system message."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "source": [
+ "## 2. Iterative Prompting Principles\n",
+ "It is generally good practice to prompt iteratively so that the model can generate the most appropriate response for the user's specification. \n",
+ "- **Principle 1: Write clear and specific instructions**\n",
+ "- **Principle 2: Give the model time to \u201cthink\u201d**\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "source": [
+ "### 2.1 Write clear and specific instructions"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "source": [
+ "#### Tactic 1: Delimiters"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "source": [
+ "#### Student Task:\n",
+ "Use delimiters to clearly indicate distinct parts of the input\n",
+ "\n",
+ "Delimiters can be anything like: triple backticks, triple quotes (\"\"\"), angle brackets (< >), XML tags (<tag></tag>), or a colon (:)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "gather": {
+ "logged": 1685081594233
+ },
+ "jupyter": {
+ "outputs_hidden": false,
+ "source_hidden": false
+ },
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "outputs": [],
+ "source": [
+ "text = f\"\"\"\n",
+ "The 2020 Summer Olympics, officially the Games of the XXXII Olympiad and also known as Tokyo 2020, was an international multi-sport event held from 23 July to 8 August 2021 in Tokyo, Japan, with some preliminary events that began on 21 July 2021. Tokyo was selected as the host city during the 125th IOC Session in Buenos Aires, Argentina, on 7 September 2013. Originally scheduled to take place from 24 July to 9 August 2020, the event was postponed to 2021 on 24 March 2020 due to the global COVID-19 pandemic, the first such instance in the history of the Olympic Games (previous games had been cancelled but not rescheduled). However, the event retained the Tokyo 2020 branding for marketing purposes. It was largely held behind closed doors with no public spectators permitted due to the declaration of a state of emergency in the Greater Tokyo Area in response to the pandemic, the first and only Olympic Games to be held without official spectators. The Games were the most expensive ever, with total spending of over $20 billion. The Games were the fourth Olympic Games to be held in Japan, following the 1964 Summer Olympics (Tokyo), 1972 Winter Olympics (Sapporo), and 1998 Winter Olympics (Nagano). Tokyo became the first city in Asia to hold the Summer Olympic Games twice. The 2020 Games were the second of three consecutive Olympics to be held in East Asia, following the 2018 Winter Olympics in Pyeongchang, South Korea and preceding the 2022 Winter Olympics in Beijing, China. Due to the one-year postponement, Tokyo 2020 was the first and only Olympic Games to have been held in an odd-numbered year and the first Summer Olympics since 1900 to be held in a non-leap year.\n",
+ "New events were introduced in existing sports, including 3x3 basketball, freestyle BMX and mixed gender team events in a number of existing sports, as well as the return of madison cycling for men and an introduction of the same event for women. New IOC policies also allowed the host organizing committee to add new sports to the Olympic program for just one Games. The disciplines added by the Japanese Olympic Committee were baseball and softball, karate, sport climbing, surfing and skateboarding, the last four of which made their Olympic debuts, and the last three of which will remain on the Olympic program. The United States topped the medal count by both total golds (39) and total medals (113), with China finishing second by both respects (38 and 89). Host nation Japan finished third, setting a record for the most gold medals and total medals ever won by their delegation at an Olympic Games with 27 and 58. Great Britain finished fourth, with a total of 22 gold and 64 medals. The Russian delegation competing as the ROC finished fifth with 20 gold medals and third in the overall medal count, with 71 medals. Bermuda, the Philippines and Qatar won their first-ever Olympic gold medals. Burkina Faso, San Marino and Turkmenistan also won their first-ever Olympic medals.\n",
+ "\"\"\""
+ ]
+ },
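+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "One possible shape for a delimiter prompt (a sketch, not the only answer) wraps the `text` variable above in an explicit marker, so the model knows exactly where the input starts and ends:\n",
+ "\n",
+ "```python\n",
+ "prompt = f\"\"\"\n",
+ "Summarize the text delimited by <article></article> tags into a single sentence.\n",
+ "<article>{text}</article>\n",
+ "\"\"\"\n",
+ "print(get_chat_completion(prompt))\n",
+ "```"
+ ]
+ },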
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "jupyter": {
+ "outputs_hidden": false,
+ "source_hidden": false
+ },
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "outputs": [],
+ "source": [
+ "# Use delimiters to clearly indicate distinct parts of the input, and ask the model to summarize the text."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "source": [
+ "#### Tactic 2: Summarization: specify word counts, extract information"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "source": [
+ "#### Text to summarize"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "gather": {
+ "logged": 1685059771050
+ },
+ "jupyter": {
+ "outputs_hidden": false,
+ "source_hidden": false
+ },
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "outputs": [],
+ "source": [
+ "game_review = \"\"\"\n",
+ "The 2020 Summer Olympics, officially the Games of the XXXII Olympiad and also known as Tokyo 2020, \\\n",
+ "was an international multi-sport event held from 23 July to 8 August 2021 in Tokyo, Japan, \\\n",
+ "with some preliminary events that began on 21 July 2021. Tokyo was selected as the host city during the 125th IOC Session in Buenos Aires, Argentina, \\\n",
+ "on 7 September 2013. Originally scheduled to take place from 24 July to 9 August 2020, the event was postponed to 2021 on 24 March 2020 due to the global COVID-19 pandemic, \\\n",
+ "the first such instance in the history of the Olympic Games (previous games had been cancelled but not rescheduled). \\\n",
+ "However, the event retained the Tokyo 2020 branding for marketing purposes. \\\n",
+ "It was largely held behind closed doors with no public spectators permitted due to the declaration of a state of emergency in the Greater Tokyo Area in response to the pandemic, the first and only Olympic Games to be held without official spectators. \\\n",
+ "The Games were the most expensive ever, with total spending of over $20 billion. \\\n",
+ "The Games were the fourth Olympic Games to be held in Japan, following the 1964 Summer Olympics (Tokyo), 1972 Winter Olympics (Sapporo), \\\n",
+ "and 1998 Winter Olympics (Nagano). Tokyo became the first city in Asia to hold the Summer Olympic Games twice. \\\n",
+ "The 2020 Games were the second of three consecutive Olympics to be held in East Asia, following the 2018 Winter Olympics in Pyeongchang, \\\n",
+ "South Korea and preceding the 2022 Winter Olympics in Beijing, China. \\\n",
+ "Due to the one-year postponement, Tokyo 2020 was the first and only Olympic Games to have been held in an odd-numbered year and the first Summer Olympics since 1900 to be held in a non-leap year.\n",
+ "New events were introduced in existing sports, including 3x3 basketball, \\\n",
+ "freestyle BMX and mixed gender team events in a number of existing sports, as well as the return of madison cycling for men and an introduction of the same event for women. \\\n",
+ "New IOC policies also allowed the host organizing committee to add new sports to the Olympic program for just one Games. \\\n",
+ "The disciplines added by the Japanese Olympic Committee were baseball and softball, karate, \\\n",
+ "sport climbing, surfing and skateboarding, the last four of which made their Olympic debuts, and the last three of which will remain on the Olympic program. \\\n",
+ "The United States topped the medal count by both total golds (39) and total medals (113), with China finishing second by both respects (38 and 89). \\\n",
+ "Host nation Japan finished third, setting a record for the most gold medals and total medals ever won by their delegation at an Olympic Games with 27 and 58. \\\n",
+ "Great Britain finished fourth, with a total of 22 gold and 64 medals. \\\n",
+ "The Russian delegation competing as the ROC finished fifth with 20 gold medals and third in the overall medal count, with 71 medals. \\\n",
+ "Bermuda, the Philippines and Qatar won their first-ever Olympic gold medals. Burkina Faso, San Marino and Turkmenistan also won their first-ever Olympic medals.\n",
+ "\"\"\""
+ ]
+ },
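+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "A word-limited summary prompt can look like this sketch over the `game_review` variable above (adjust the focus and the limit to taste):\n",
+ "\n",
+ "```python\n",
+ "prompt = f\"\"\"\n",
+ "Summarize the text delimited by <article></article> tags in at most 30 words,\n",
+ "focusing on what made the 2020 Summer Olympics unique.\n",
+ "<article>{game_review}</article>\n",
+ "\"\"\"\n",
+ "print(get_chat_completion(prompt))\n",
+ "```"
+ ]
+ },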
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "source": [
+ "#### Student Task:\n",
+ "Summarize with a focus on the special parts of the 2020 Summer Olympics and with a word limit"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "jupyter": {
+ "outputs_hidden": false,
+ "source_hidden": false
+ },
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "outputs": [],
+ "source": [
+ "# Summarize with a focus on the special parts of the 2020 Summer Olympics and with a word limit"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "source": [
+ "#### Student Task:\n",
+ "Try \"extract\" instead of \"summarize\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "jupyter": {
+ "outputs_hidden": false,
+ "source_hidden": false
+ },
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "outputs": [],
+ "source": [
+ "# Try \"extract\" instead of \"summarize\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "source": [
+ "#### Tactic 3: Inferring: ask for emotions, sentiment, or topics "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "source": [
+ "#### Student Task:\n",
+ "Identify types of emotions and sentiment (positive/negative) of the review below\n",
+ "\n",
+ "Format in a JSON object"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "jupyter": {
+ "outputs_hidden": false,
+ "source_hidden": false
+ },
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "outputs": [],
+ "source": [
+ "review = \"\"\"\n",
+ "Philip Barker of Inside the Games opined that for many athletes and supporters, \\\n",
+ "the tone of the ceremony was dignified and appropriate. Hashimoto stated in a press interview that the flame would \"quietly go out\", \\\n",
+ "which he felt that \"It was an apt description of a dignified and low key Ceremony which conveyed a sense of gratitude that the Games had been able to take place at all.\"\\\n",
+ "Dominic Patten of Deadline Hollywood argued that the ceremony was an \"uneven mixtape\" of contrasts, \\\n",
+ "comparing the low-key \"celebration of the culture of the Asian power and brow moping acknowledgement of the pandemic\" to the jubilant Paris segment, \\\n",
+ "as well as the clich\u00e9-filled speech of Thomas Bach. Alan Tyres of The Daily Telegraph discussed the IOC's updated motto as a sign of things to come. \\\n",
+ "He stated, \"The updated Olympic motto of 'faster, higher, \\\n",
+ "stronger \u2013 together' fits with how sport is covered and contextualised at this moment in history: \\\n",
+ "inclusion, diversity, justice and a duty of care to the athletes must be taken into consideration as much as performance.\" \\\n",
+ "He also discussed the strangeness of the ceremony, as it was performed without a stadium audience.\n",
+ "\"\"\""
+ ]
+ },
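+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "A sketch of an inferring prompt that requests JSON output (the field names are just a suggestion) over the `review` variable defined in the next cell, once it has been run:\n",
+ "\n",
+ "```python\n",
+ "prompt = f\"\"\"\n",
+ "Identify the emotions expressed in the review below and its overall\n",
+ "sentiment (positive or negative). Answer as a JSON object with keys\n",
+ "\"emotions\" (a list of strings) and \"sentiment\".\n",
+ "<review>{review}</review>\n",
+ "\"\"\"\n",
+ "print(get_chat_completion(prompt))\n",
+ "```"
+ ]
+ },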
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "jupyter": {
+ "outputs_hidden": false,
+ "source_hidden": false
+ },
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "outputs": [],
+ "source": [
+ "# Identify types of emotions and sentiment (positive/negative) of the review above. Format in a JSON object."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "source": [
+ "#### Student Challenge:\n",
+ "Infer 3 topics of the story below"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "jupyter": {
+ "outputs_hidden": false,
+ "source_hidden": false
+ },
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "outputs": [],
+ "source": [
+ "story = \"\"\"\n",
+ "The 2020 Summer Olympics, officially the Games of the XXXII Olympiad and also known as Tokyo 2020, \\\n",
+ "was an international multi-sport event held from 23 July to 8 August 2021 in Tokyo, Japan, \\\n",
+ "with some preliminary events that began on 21 July 2021. Tokyo was selected as the host city during the 125th IOC Session in Buenos Aires, Argentina, \\\n",
+ "on 7 September 2013. Originally scheduled to take place from 24 July to 9 August 2020, the event was postponed to 2021 on 24 March 2020 due to the global COVID-19 pandemic, \\\n",
+ "the first such instance in the history of the Olympic Games (previous games had been cancelled but not rescheduled). \\\n",
+ "However, the event retained the Tokyo 2020 branding for marketing purposes. \\\n",
+ "It was largely held behind closed doors with no public spectators permitted due to the declaration of a state of emergency in the Greater Tokyo Area in response to the pandemic, the first and only Olympic Games to be held without official spectators. \\\n",
+ "The Games were the most expensive ever, with total spending of over $20 billion. \\\n",
+ "The Games were the fourth Olympic Games to be held in Japan, following the 1964 Summer Olympics (Tokyo), 1972 Winter Olympics (Sapporo), \\\n",
+ "and 1998 Winter Olympics (Nagano). Tokyo became the first city in Asia to hold the Summer Olympic Games twice. \\\n",
+ "The 2020 Games were the second of three consecutive Olympics to be held in East Asia, following the 2018 Winter Olympics in Pyeongchang, \\\n",
+ "South Korea and preceding the 2022 Winter Olympics in Beijing, China. \\\n",
+ "Due to the one-year postponement, Tokyo 2020 was the first and only Olympic Games to have been held in an odd-numbered year and the first Summer Olympics since 1900 to be held in a non-leap year.\n",
+ "New events were introduced in existing sports, including 3x3 basketball, \\\n",
+ "freestyle BMX and mixed gender team events in a number of existing sports, as well as the return of madison cycling for men and an introduction of the same event for women. \\\n",
+ "New IOC policies also allowed the host organizing committee to add new sports to the Olympic program for just one Games. \\\n",
+ "The disciplines added by the Japanese Olympic Committee were baseball and softball, karate, \\\n",
+ "sport climbing, surfing and skateboarding, the last four of which made their Olympic debuts, and the last three of which will remain on the Olympic program. \\\n",
+ "The United States topped the medal count by both total golds (39) and total medals (113), with China finishing second by both respects (38 and 89). \\\n",
+ "Host nation Japan finished third, setting a record for the most gold medals and total medals ever won by their delegation at an Olympic Games with 27 and 58. \\\n",
+ "Great Britain finished fourth, with a total of 22 gold and 64 medals. \\\n",
+ "The Russian delegation competing as the ROC finished fifth with 20 gold medals and third in the overall medal count, with 71 medals. \\\n",
+ "Bermuda, the Philippines and Qatar won their first-ever Olympic gold medals. Burkina Faso, San Marino and Turkmenistan also won their first-ever Olympic medals.\n",
+ "\"\"\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "jupyter": {
+ "outputs_hidden": false,
+ "source_hidden": false
+ },
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "outputs": [],
+ "source": [
+ "# Infer 3 topics of the story above. "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "source": [
+ "#### Tactic 4: Transforming: specify target language and writing style, and ask for grammar check"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "source": [
+ "#### Student Task:\n",
+ "Universal Translator\n",
+ "\n",
+ "People all over the world want to know the Olympic Games news in their native language. In this case, the news needs to be translated into different languages. Translate each news item below into both Korean and English."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "jupyter": {
+ "outputs_hidden": false,
+ "source_hidden": false
+ },
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "outputs": [],
+ "source": [
+ "news = [\n",
+ "\"Palestine received a universality invitation from FINA to send two top-ranked swimmers (one per gender) in their respective individual events to the Olympics, \\\n",
+ "based on the FINA Points System of June 28, 2021.\",\n",
+ "\"\u6bd4\u8d5b\u5305\u62ec\u4e24\u8f6e\uff1a\u9884\u8d5b\u548c\u51b3\u8d5b\u3002\u9884\u8d5b\u6210\u7ee9\u6700\u597d\u76848\u6b21\u63a5\u529b\u961f\u664b\u7ea7\u51b3\u8d5b\u3002\u5fc5\u8981\u65f6\u4f7f\u7528\u6e38\u6cf3\u6bd4\u8d5b\u6765\u6253\u7834\u5e73\u5c40\u4ee5\u664b\u7ea7\u4e0b\u4e00\u8f6e\u3002\"]"
+ ]
+ },
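+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "A translation prompt can simply name the target languages. One sketch, looping over the `news` list above:\n",
+ "\n",
+ "```python\n",
+ "for item in news:\n",
+ "    prompt = f\"Translate the following news item into Korean and English: {item}\"\n",
+ "    print(get_chat_completion(prompt))\n",
+ "```"
+ ]
+ },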
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "jupyter": {
+ "outputs_hidden": false,
+ "source_hidden": false
+ },
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "outputs": [],
+ "source": [
+ "# Write code here for student task"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "source": [
+ "#### Student Task:\n",
+ "Tone Transformation\n",
+ "\n",
+ "Writing can vary based on the intended audience. ChatGPT can produce different tones. Transform the following message into a business letter."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "gather": {
+ "logged": 1685082786641
+ },
+ "jupyter": {
+ "outputs_hidden": false,
+ "source_hidden": false
+ },
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "outputs": [],
+ "source": [
+ "original_message = \"David, it's John! OMG, the Olympic game is so crazy\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "jupyter": {
+ "outputs_hidden": false,
+ "source_hidden": false
+ },
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "outputs": [],
+ "source": [
+ "# Task: write code here for student task"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "source": [
+ "#### Student Task:\n",
+ "Format Conversion\n",
+ "\n",
+ "ChatGPT can translate between formats. The prompt should describe the input and output formats. Convert the following JSON data into HTML format."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "gather": {
+ "logged": 1685083768996
+ },
+ "jupyter": {
+ "outputs_hidden": false,
+ "source_hidden": false
+ },
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "outputs": [],
+ "source": [
+ "data_json = { \"The 2020 Summer Olympics Opening Ceremony audience name list\" :[ \n",
+ " {\"name\":\"Shyam\", \"email\":\"shyamjaiswal@gmail.com\"},\n",
+ " {\"name\":\"Bob\", \"email\":\"bob32@gmail.com\"},\n",
+ " {\"name\":\"Jai\", \"email\":\"jai87@gmail.com\"}\n",
+ "]}"
+ ]
+ },
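+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "A conversion prompt should name both the input and the output format. A sketch over the `data_json` dictionary above:\n",
+ "\n",
+ "```python\n",
+ "prompt = f\"\"\"\n",
+ "Convert the following python dictionary of JSON data into an HTML\n",
+ "table with column headers and a title.\n",
+ "{data_json}\n",
+ "\"\"\"\n",
+ "print(get_chat_completion(prompt))\n",
+ "```"
+ ]
+ },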
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "jupyter": {
+ "outputs_hidden": false,
+ "source_hidden": false
+ },
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "outputs": [],
+ "source": [
+ "# Write code here for student task"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "source": [
+ "#### Student Task:\n",
+ "Spell-check and grammar-check the following text.\n",
+ "\n",
+ "To signal to the LLM that you want it to proofread your text, you instruct the model to 'proofread' or 'proofread and correct'."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "gather": {
+ "logged": 1685084954682
+ },
+ "jupyter": {
+ "outputs_hidden": false,
+ "source_hidden": false
+ },
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "outputs": [],
+ "source": [
+ "check_text = [ \n",
+ " \"Alongside the main Emblem blue, the five other colors use in the branding of the 2020 Games is : Kurenai red, Ai blue, Sakula pink, Fuji purple, and Matsuba green.\",\n",
+ " \"The competition have three round: heats, semifinals, and a final.\"\n",
+ "]"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "jupyter": {
+ "outputs_hidden": false,
+ "source_hidden": false
+ },
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "outputs": [],
+ "source": [
+ "# Write code here for student task"
+ ]
+ },
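+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# One possible sketch (assumes the get_chat_completion helper from earlier cells)\n",
+ "for t in check_text:\n",
+ "    prompt = f\"Proofread and correct the following text, then output only the corrected version: {t}\"\n",
+ "    print(get_chat_completion(prompt))"
+ ]
+ },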
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "source": [
+ "#### Tactic 5: Expanding: customize the automated reply"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "source": [
+ "#### Student Task:\n",
+ "Customize the automated reply to the following customer email.\n",
+ "\n",
+ "The customer faced an issue while buying an Olympic Games ticket."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "gather": {
+ "logged": 1685463156048
+ },
+ "jupyter": {
+ "outputs_hidden": false,
+ "source_hidden": false
+ },
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "outputs": [],
+ "source": [
+ "# given the sentiment from the tactic on \"inferring\",\n",
+ "# and the original customer message, customize the email\n",
+ "sentiment = \"negative\"\n",
+ "\n",
+ "# review for a ticket transaction\n",
+ "review = f\"\"\"\n",
+ "I bought a ticket for the \"Men's 100 metre freestyle\" swimming event last week.\\\n",
+ "The transaction went through successfully. However, I still have not received the ticket.\\\n",
+ "Over one week has passed.\\\n",
+ "\"\"\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "jupyter": {
+ "outputs_hidden": false,
+ "source_hidden": false
+ },
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "outputs": [],
+ "source": [
+ "# Write code here for student task"
+ ]
+ },
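+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# One possible sketch (assumes the get_chat_completion helper from earlier cells)\n",
+ "prompt = f\"\"\"\n",
+ "You are a customer service AI assistant. Write a reply to the customer review\n",
+ "delimited by triple backticks. The sentiment of the review is {sentiment}, so\n",
+ "apologize, address the missing ticket, and suggest a way to follow up.\n",
+ "Review: ```{review}```\n",
+ "\"\"\"\n",
+ "get_chat_completion(prompt)"
+ ]
+ },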
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "source": [
+ "#### Tactic 6: Chatbot: personalize conversations for specific tasks or behaviors"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "source": [
+ "#### Student Task:\n",
+ "Create a conversation with the chatbot to find out where the 2020 Summer Olympics was held."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "jupyter": {
+ "outputs_hidden": false,
+ "source_hidden": false
+ },
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "outputs": [],
+ "source": [
+ "# Write code here for student task"
+ ]
+ },
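+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# One possible sketch (assumes the get_completion_from_messages helper used elsewhere in this notebook)\n",
+ "messages = [\n",
+ "    {\"role\": \"system\", \"content\": \"You are a friendly Olympics trivia chatbot.\"},\n",
+ "    {\"role\": \"user\", \"content\": \"Where was the 2020 Summer Olympics held?\"}\n",
+ "]\n",
+ "print(get_completion_from_messages(messages))"
+ ]
+ },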
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "source": [
+ "### 2.2 Give the model time to \u201cthink\u201d "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "source": [
+ "#### Tactic 1: Specify the steps required to complete a task\n",
+ "Sometimes you can help the model \"slow down\" and give more robust, detailed answers by specifying the steps it should take.\n",
+ "\n",
+ "Let's ask for output in multiple specified formats."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "gather": {
+ "logged": 1685051679218
+ },
+ "jupyter": {
+ "outputs_hidden": false,
+ "source_hidden": false
+ },
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "outputs": [],
+ "source": [
+ "prompt = f\"\"\"\n",
+ "Your task is to help a journalist summarize information from the article for publication.\n",
+ "\n",
+ "Write a title based on the information provided in the context delimited by triple backticks. \n",
+ "The title should be short, catchy, and reflective of the article's narrative.\n",
+ "\n",
+ "After the title, generate five keywords from the context.\n",
+ "\n",
+ "After the keywords, include a table to organize the information. \n",
+ "The table should have two columns. The first column should contain the title.\n",
+ "The second column should contain the keywords as a list.\n",
+ "\n",
+ "Give the table the title 'Article Publishing Information'.\n",
+ "\n",
+ "Format everything as HTML that can be used in a website.\n",
+ "Place the title in a heading element.\n",
+ "\n",
+ "Context: ```{text}```\n",
+ "\n",
+ "\"\"\" \n",
+ "\n",
+ "get_chat_completion(prompt)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "source": [
+ "#### Tactic 2: Instruct the model to work out its own solution\n",
+ "\n",
+ "There are two main methods we will demonstrate in this section to get the model to work through a problem instead of rushing to a conclusion: chain-of-thought prompting and chaining. These strategies can lead to increased accuracy, detail, and the ability to work through complex challenges.\n",
+ "\n",
+ "\n",
+ "2.2.1 - Chain-of-thought prompting\n",
+ "- Ask the model to reason\n",
+ "- One-shot example\n",
+ "- Chatbot reasoning\n",
+ "\n",
+ "2.2.2 - Chaining\n",
+ "\n",
+ "Let's continue working with the Olympics dataset."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "source": [
+ "#### 2.2.1 Chain-of-Thought Prompting\n",
+ "\n",
+ "Let's do a bit of math. GPT models often struggle with direct math problems, so let's walk the model through the problem.\n",
+ "\n",
+ "Let's break down tasks into smaller pieces.\n",
+ "\n",
+ "Read more about these methods and the supporting research here: https://github.com/openai/openai-cookbook/blob/main/techniques_to_improve_reliability.md"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "source": [
+ "#### Tactic 1: You can start by simply asking the model to think step-by-step."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "gather": {
+ "logged": 1685051978623
+ },
+ "jupyter": {
+ "outputs_hidden": false,
+ "source_hidden": false
+ },
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "outputs": [],
+ "source": [
+ "text = f\"\"\"\n",
+ "    The 2020 Summer Olympics, officially the Games of the XXXII Olympiad and also known as Tokyo 2020, was an international multi-sport event held from 23 July to 8 August 2021 in Tokyo, Japan, with some preliminary events that began on 21 July 2021. Tokyo was selected as the host city during the 125th IOC Session in Buenos Aires, Argentina, on 7 September 2013. Originally scheduled to take place from 24 July to 9 August 2020, the event was postponed to 2021 on 24 March 2020 due to the global COVID-19 pandemic, the first such instance in the history of the Olympic Games (previous games had been cancelled but not rescheduled). However, the event retained the Tokyo 2020 branding for marketing purposes. It was largely held behind closed doors with no public spectators permitted due to the declaration of a state of emergency in the Greater Tokyo Area in response to the pandemic, the first and only Olympic Games to be held without official spectators. The Games were the most expensive ever, with total spending of over $20 billion. The Games were the fourth Olympic Games to be held in Japan, following the 1964 Summer Olympics (Tokyo), 1972 Winter Olympics (Sapporo), and 1998 Winter Olympics (Nagano). Tokyo became the first city in Asia to hold the Summer Olympic Games twice. The 2020 Games were the second of three consecutive Olympics to be held in East Asia, following the 2018 Winter Olympics in Pyeongchang, South Korea and preceding the 2022 Winter Olympics in Beijing, China. Due to the one-year postponement, Tokyo 2020 was the first and only Olympic Games to have been held in an odd-numbered year and the first Summer Olympics since 1900 to be held in a non-leap year.\\nNew events were introduced in existing sports, including 3x3 basketball, freestyle BMX and mixed gender team events in a number of existing sports, as well as the return of madison cycling for men and an introduction of the same event for women. New IOC policies also allowed the host organizing committee to add new sports to the Olympic program for just one Games. The disciplines added by the Japanese Olympic Committee were baseball and softball, karate, sport climbing, surfing and skateboarding, the last four of which made their Olympic debuts, and the last three of which will remain on the Olympic program. The United States topped the medal count by both total golds (39) and total medals (113), with China finishing second in both respects (38 and 89). Host nation Japan finished third, setting a record for the most gold medals and total medals ever won by their delegation at an Olympic Games with 27 and 58. Great Britain finished fourth, with a total of 22 gold and 64 medals. The Russian delegation competing as the ROC finished fifth with 20 gold medals and third in the overall medal count, with 71 medals. Bermuda, the Philippines and Qatar won their first-ever Olympic gold medals. Burkina Faso, San Marino and Turkmenistan also won their first-ever Olympic medals.\n",
+ "\"\"\"\n",
+ "\n",
+ "# From Azure documentation\n",
+ "prompt = \"Who was the most decorated (maximum medals) individual athlete in the Olympic games that were held at Sydney? Take a step-by-step approach in your response, cite sources, and give reasoning before sharing the final answer in the following format: ANSWER is: \"\n",
+ "get_chat_completion(prompt)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "jupyter": {
+ "outputs_hidden": false,
+ "source_hidden": false
+ },
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "outputs": [],
+ "source": [
+ "# Another example\n",
+ "prompt = f\"\"\"\n",
+ "What is the largest time zone difference between the top two countries that\n",
+ "won the most gold medals in the 2020 Tokyo Olympics?\n",
+ "\n",
+ "Use the context below and think aloud as you solve the problem, step-by-step.\n",
+ "\n",
+ "Context: {text}\n",
+ "\"\"\"\n",
+ "get_chat_completion(prompt)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "source": [
+ "#### Tactic 2: One-shot example \n",
+ "Another common tactic is to provide one example of a query and an ideal response. The model will learn from that example and apply the patterns to a new question."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "gather": {
+ "logged": 1685053144682
+ },
+ "jupyter": {
+ "outputs_hidden": false,
+ "source_hidden": false
+ },
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "outputs": [],
+ "source": [
+ "# Notice how this response may not be ideal, or the most accurate.\n",
+ "prompt = f\"\"\"\n",
+ " The United States has 113 total medals, 39 of which are gold medals. \n",
+ " \n",
+ " Great Britain has 64 medals and 22 gold medals. \n",
+ " \n",
+ " How many more silver and bronze medals does the United States have over Great Britain?\n",
+ "\"\"\"\n",
+ "\n",
+ "get_chat_completion(prompt)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "jupyter": {
+ "outputs_hidden": false,
+ "source_hidden": false
+ },
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "outputs": [],
+ "source": [
+ "# Give the model a one-shot example to solve the question more thoroughly\n",
+ "\n",
+ "prompt = f\"\"\"\n",
+ "Question: The United States has 113 total medals, 39 of which are gold medals. How many medals are silver or bronze?\n",
+ "Answer:\n",
+ "[Step 1] - There are three types of medals: gold, silver, and bronze\n",
+ "[Step 2] - We know the gold and total counts of medals, so the number of silver and bronze medals is the difference between the gold (39) and total (113) counts. \n",
+ " 113-39=74, so there are 74 silver and bronze medals combined. The answer is 74.\n",
+ "\n",
+ "===\n",
+ "\n",
+ "Answer the following question using similar steps above.\n",
+ "\n",
+ "Question: China has 89 total medals, 38 of which are gold medals. How many silver and bronze medals do they have?\n",
+ "Answer:\n",
+ "\"\"\"\n",
+ "\n",
+ "get_chat_completion(prompt)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "source": [
+ "#### Tactic 3: Chatbot chain-of-thought reasoning\n",
+ "You can build in logic using variables so the chatbot can dynamically demonstrate specific ways of thinking about a problem.\n",
+ "\n",
+ "**The input box is at the top of the screen; you should see a popup. Type 'quit' if you want to exit.**"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "jupyter": {
+ "outputs_hidden": false,
+ "source_hidden": false
+ },
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "outputs": [],
+ "source": [
+ "# Ask the bot to help you make a decision such as deciding whether to take a job or choose between restaurants.\n",
+ "# If the model does not immediately respond to your query, wait 1-5 seconds and retype it.\n",
+ "# If it is not allowing you to give an input, restart the kernel in the navigation bar.\n",
+ "# Type \"quit\" to end the session\n",
+ "\n",
+ "context = '''\n",
+ " You are a decision bot. Your job is to help the user come to a decision by asking a series of questions one at a time and reaching a reasonable decision based on the information provided.\n",
+ "\n",
+ " You will use the following format to help create the series of questions.\n",
+ "\n",
+ " Template: \n",
+ " [Problem/Scenario/Question]: [Provide a brief description of the problem, scenario, or question.]\n",
+ "\n",
+ " Chain of thought:\n",
+ "\n",
+ " [Step 1]: Identify the [key element/variable] in the [problem/scenario/question].\n",
+ " [Step 2]: Understand the [relationship/connection] between [element A] and [element B].\n",
+ " [Step 3]: [Analyze/Evaluate/Consider] the [context/implication] of the [relationship/connection] between [element A] and [element B].\n",
+ " [Step 4]: [Conclude/Decide/Determine] the [outcome/solution] based on the [analysis/evaluation/consideration] of [element A], [element B], and their [relationship/connection].\n",
+ " [Answer/Conclusion/Recommendation]: [Provide a coherent and logical response based on the chain of thought.]\n",
+ "\n",
+ " You will guide the user through a series of questions one at a time. The first question is broad, and the subsequent questions become more specific. \n",
+ "\n",
+ " Begin by introducing yourself and asking the first question (step 1) only and nothing else, in a simple and easy way.\n",
+ " '''\n",
+ "\n",
+ "conversation=[{\"role\": \"system\", \"content\": context}]\n",
+ "\n",
+ "while(True):\n",
+ " if len(conversation) == 1:\n",
+ " response = get_completion_from_messages(conversation)\n",
+ " conversation.append({\"role\": \"assistant\", \"content\": response})\n",
+ " print(\"\\n\" + response + \"\\n\")\n",
+ " \n",
+ " user_input = input('Enter your response: ')\n",
+ " if user_input.lower() == \"quit\":\n",
+ " break \n",
+ " conversation.append({\"role\": \"user\", \"content\": user_input})\n",
+ " \n",
+ " response = get_completion_from_messages(conversation)\n",
+ " conversation.append({\"role\": \"assistant\", \"content\": response})\n",
+ " print(\"\\n\" + response + \"\\n\")\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "source": [
+ "#### 2.2.2 - Chaining\n",
+ "Similar to some earlier examples, you can feed model outputs from previous queries into subsequent queries. We will show you later in the Hack how to do this at scale."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "gather": {
+ "logged": 1685051679330
+ },
+ "jupyter": {
+ "outputs_hidden": false,
+ "source_hidden": false
+ },
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "outputs": [],
+ "source": [
+ "# Extract medal counts for each country from the news article\n",
+ "# Write an article about the summarized information\n",
+ "# Provide a title for the summary\n",
+ "\n",
+ "text = f\"\"\"\n",
+ "    The 2020 Summer Olympics, officially the Games of the XXXII Olympiad and also known as Tokyo 2020, was an international multi-sport event held from 23 July to 8 August 2021 in Tokyo, Japan, with some preliminary events that began on 21 July 2021. Tokyo was selected as the host city during the 125th IOC Session in Buenos Aires, Argentina, on 7 September 2013. Originally scheduled to take place from 24 July to 9 August 2020, the event was postponed to 2021 on 24 March 2020 due to the global COVID-19 pandemic, the first such instance in the history of the Olympic Games (previous games had been cancelled but not rescheduled). However, the event retained the Tokyo 2020 branding for marketing purposes. It was largely held behind closed doors with no public spectators permitted due to the declaration of a state of emergency in the Greater Tokyo Area in response to the pandemic, the first and only Olympic Games to be held without official spectators. The Games were the most expensive ever, with total spending of over $20 billion. The Games were the fourth Olympic Games to be held in Japan, following the 1964 Summer Olympics (Tokyo), 1972 Winter Olympics (Sapporo), and 1998 Winter Olympics (Nagano). Tokyo became the first city in Asia to hold the Summer Olympic Games twice. The 2020 Games were the second of three consecutive Olympics to be held in East Asia, following the 2018 Winter Olympics in Pyeongchang, South Korea and preceding the 2022 Winter Olympics in Beijing, China. Due to the one-year postponement, Tokyo 2020 was the first and only Olympic Games to have been held in an odd-numbered year and the first Summer Olympics since 1900 to be held in a non-leap year.\\nNew events were introduced in existing sports, including 3x3 basketball, freestyle BMX and mixed gender team events in a number of existing sports, as well as the return of madison cycling for men and an introduction of the same event for women. New IOC policies also allowed the host organizing committee to add new sports to the Olympic program for just one Games. The disciplines added by the Japanese Olympic Committee were baseball and softball, karate, sport climbing, surfing and skateboarding, the last four of which made their Olympic debuts, and the last three of which will remain on the Olympic program. The United States topped the medal count by both total golds (39) and total medals (113), with China finishing second in both respects (38 and 89). Host nation Japan finished third, setting a record for the most gold medals and total medals ever won by their delegation at an Olympic Games with 27 and 58. Great Britain finished fourth, with a total of 22 gold and 64 medals. The Russian delegation competing as the ROC finished fifth with 20 gold medals and third in the overall medal count, with 71 medals. Bermuda, the Philippines and Qatar won their first-ever Olympic gold medals. Burkina Faso, San Marino and Turkmenistan also won their first-ever Olympic medals.\n",
+ "\"\"\"\n",
+ "\n",
+ "prompt = f\"\"\"\n",
+ " Based on the context below, write a JSON object that contains the number of gold and total medals for each country.\n",
+ " Context: {text}\n",
+ "\"\"\"\n",
+ "num_medals_dict = get_chat_completion(prompt)\n",
+ "\n",
+ "prompt = f\"\"\"\n",
+ " Write a brief article about the winners and losers of the Olympics based on medal count:\n",
+ " {num_medals_dict}\n",
+ "\"\"\"\n",
+ "summary = get_chat_completion(prompt)\n",
+ "print(summary)\n",
+ "\n",
+ "prompt = f\"\"\"\n",
+ " Give the summary a title in 5 words:\n",
+ " {summary}\n",
+ "\"\"\"\n",
+ "title = get_chat_completion(prompt)\n",
+ "print(title)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "source": [
+ "#### Student Task (Chaining): \n",
+ "\n",
+ "Your job is now to write code that will determine the country that won the most silver and bronze medals combined.\n",
+ "\n",
+ "We can see that the model performs poorly when answering the question directly."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "gather": {
+ "logged": 1685053948700
+ },
+ "jupyter": {
+ "outputs_hidden": false,
+ "source_hidden": false
+ },
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "outputs": [],
+ "source": [
+ "prompt = f\"\"\"\n",
+ " Based on the context, which country had the most silver and bronze medals?\n",
+ " Context: {text}\n",
+ "\"\"\"\n",
+ "\n",
+ "get_chat_completion(prompt)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "source": [
+ "Write prompts in the cell below that will help the model answer the question by breaking down the tasks into different steps. Make sure it only answers with the information it was given. This concept of grounding will be further introduced in Challenge 3.\n",
+ "\n",
+ "You should be able to get the model to answer the question in 2-3 steps."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "jupyter": {
+ "outputs_hidden": false,
+ "source_hidden": false
+ },
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "outputs": [],
+ "source": [
+ "# Write code here for student task"
+ ]
+ },
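+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# One possible 2-step sketch (assumes get_chat_completion and the `text` variable defined above)\n",
+ "prompt = f\"\"\"\n",
+ "Based only on the context, write a JSON object listing each country's gold and total medal counts.\n",
+ "Context: {text}\n",
+ "\"\"\"\n",
+ "medals = get_chat_completion(prompt)\n",
+ "\n",
+ "prompt = f\"\"\"\n",
+ "For each country below, compute total minus gold to get the combined silver and bronze count,\n",
+ "then state which country has the most silver and bronze medals combined.\n",
+ "{medals}\n",
+ "\"\"\"\n",
+ "get_chat_completion(prompt)"
+ ]
+ },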
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Conclusion\n",
+ "\n",
+ "In this first challenge, we covered how to interact with Azure OpenAI for different goals. Hopefully you were able to see the dynamic versatility of the models and how they can be used to solve a variety of problems using different techniques.\n",
+ "\n",
+ "We gave the API short pieces of text using fixed variables. In the next set of challenges, you will see how to use the API with larger datasets."
+ ]
+ }
+ ],
+ "metadata": {
+ "kernel_info": {
+ "name": "python38-azureml"
+ },
+ "kernelspec": {
+ "display_name": ".venv (3.13.11)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.13.11"
+ },
+ "microsoft": {
+ "host": {
+ "AzureML": {
+ "notebookHasBeenCompleted": true
+ }
+ },
+ "ms_spell_check": {
+ "ms_spell_check_language": "en"
+ }
+ },
+ "nteract": {
+ "version": "nteract-front-end@1.0.0"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
\ No newline at end of file
diff --git a/066-OpenAIFundamentals/Student/Resources/notebooks/CH-03-A-Grounding.ipynb b/066-OpenAIFundamentals/Student/Resources/notebooks/CH-03-A-Grounding.ipynb
index 195c83072e..0c1310f5d9 100644
--- a/066-OpenAIFundamentals/Student/Resources/notebooks/CH-03-A-Grounding.ipynb
+++ b/066-OpenAIFundamentals/Student/Resources/notebooks/CH-03-A-Grounding.ipynb
@@ -44,7 +44,9 @@
"import json\n",
"\n",
"from dotenv import load_dotenv, find_dotenv\n",
- "load_dotenv(find_dotenv())"
+ "load_dotenv(find_dotenv())\n",
+ "from openai import AzureOpenAI\n",
+ "from azure.identity import DefaultAzureCredential, get_bearer_token_provider"
]
},
{
@@ -62,19 +64,26 @@
"metadata": {},
"outputs": [],
"source": [
- "API_KEY = os.getenv(\"OPENAI_API_KEY\")\n",
- "assert API_KEY, \"ERROR: Azure OpenAI Key is missing\"\n",
- "openai.api_key = API_KEY\n",
+ "token_provider = get_bearer_token_provider(\n",
+ " DefaultAzureCredential(),\n",
+ " \"https://cognitiveservices.azure.com/.default\"\n",
+ ")\n",
"\n",
"RESOURCE_ENDPOINT = os.getenv(\"OPENAI_API_BASE\",\"\").strip()\n",
"assert RESOURCE_ENDPOINT, \"ERROR: Azure OpenAI Endpoint is missing\"\n",
"assert \"openai.azure.com\" in RESOURCE_ENDPOINT.lower(), \"ERROR: Azure OpenAI Endpoint should be in the form: \\n\\n\\t.openai.azure.com\"\n",
- "openai.api_base = RESOURCE_ENDPOINT\n",
"\n",
+ "openai.api_base = RESOURCE_ENDPOINT\n",
"openai.api_type = os.getenv(\"OPENAI_API_TYPE\")\n",
"openai.api_version = os.getenv(\"OPENAI_API_VERSION\")\n",
- "\n",
- "model=os.getenv(\"CHAT_MODEL_NAME\")"
+ "openai.azure_ad_token_provider = token_provider\n",
+ "chat_model=os.getenv(\"CHAT_MODEL_NAME\")\n",
+ "\n",
+ "client = AzureOpenAI(\n",
+ " azure_endpoint=RESOURCE_ENDPOINT,\n",
+ " azure_ad_token_provider=token_provider,\n",
+ " api_version=os.getenv(\"OPENAI_API_VERSION\")\n",
+ ")"
]
},
{
@@ -92,16 +101,16 @@
"metadata": {},
"outputs": [],
"source": [
- "def get_chat_completion(prompt, model=model):\n",
+ "def get_chat_completion(prompt, model=chat_model):\n",
" messages = [{\"role\": \"user\", \"content\": prompt}]\n",
- " response = openai.ChatCompletion.create(\n",
- " engine=model,\n",
+ " response = client.chat.completions.create(\n",
+ " model=chat_model,\n",
" messages=messages,\n",
" temperature=0, # this is the degree of randomness of the model's output\n",
" max_tokens = 200,\n",
" top_p = 1.0\n",
" )\n",
- " return response.choices[0].message[\"content\"]"
+ " return response.choices[0].message.content"
]
},
{
@@ -130,7 +139,7 @@
"Enter Question Here\n",
"\"\"\"\n",
"\n",
- "model_response = get_chat_completion(prompt, model=model)\n",
+ "model_response = get_chat_completion(prompt, model=chat_model)\n",
"print(f\"Response: {model_response}\\n\")\n"
]
},
@@ -156,7 +165,7 @@
"Enter Question Here\n",
"\"\"\"\n",
"\n",
- "model_response = get_chat_completion(prompt, model=model)\n",
+ "model_response = get_chat_completion(prompt, model=chat_model)\n",
"print(f\"Response: {model_response}\\n\")"
]
},
@@ -183,7 +192,7 @@
],
"metadata": {
"kernelspec": {
- "display_name": "Python 3 (ipykernel)",
+ "display_name": ".venv (3.13.11)",
"language": "python",
"name": "python3"
},
@@ -197,10 +206,10 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.10.13"
+ "version": "3.13.11"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
-}
+}
\ No newline at end of file
diff --git a/066-OpenAIFundamentals/Student/Resources/notebooks/CH-03-B-Chunking.ipynb b/066-OpenAIFundamentals/Student/Resources/notebooks/CH-03-B-Chunking.ipynb
index b5e623f23b..2092d3053b 100644
--- a/066-OpenAIFundamentals/Student/Resources/notebooks/CH-03-B-Chunking.ipynb
+++ b/066-OpenAIFundamentals/Student/Resources/notebooks/CH-03-B-Chunking.ipynb
@@ -44,31 +44,36 @@
"metadata": {},
"outputs": [],
"source": [
+ "%pip install langchain langchain-text-splitters\n",
+ "\n",
"import openai\n",
"import PyPDF3\n",
"import os\n",
"import json\n",
"import tiktoken\n",
"import spacy\n",
- "from openai.error import InvalidRequestError\n",
"\n",
"from dotenv import load_dotenv, find_dotenv\n",
"load_dotenv(find_dotenv())\n",
- "\n",
+ "from openai import AzureOpenAI\n",
+ "from azure.identity import DefaultAzureCredential, get_bearer_token_provider\n",
+ "token_provider = get_bearer_token_provider(\n",
+ " DefaultAzureCredential(),\n",
+ " \"https://cognitiveservices.azure.com/.default\"\n",
+ ")\n",
"from spacy.lang.en import English \n",
"nlp = spacy.load(\"en_core_web_sm\")\n",
"\n",
"import langchain\n",
- "from langchain.text_splitter import RecursiveCharacterTextSplitter"
+ "from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
+ "from openai import BadRequestError"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "Set up your environment to access your Azure OpenAI keys. Refer to your Azure OpenAI resource in the Azure Portal to retrieve information regarding your Azure OpenAI endpoint and keys. \n",
- "\n",
- "For security purposes, store your sensitive information in an .env file."
+ "This cell sets up your Python environment to access your Azure OpenAI endpoint and configures the OpenAI settings from your .env file."
]
},
{
@@ -77,19 +82,26 @@
"metadata": {},
"outputs": [],
"source": [
- "# Load your OpenAI credentials\n",
- "API_KEY = os.getenv(\"OPENAI_API_KEY\")\n",
- "assert API_KEY, \"ERROR: Azure OpenAI Key is missing\"\n",
- "openai.api_key = API_KEY\n",
+ "token_provider = get_bearer_token_provider(\n",
+ " DefaultAzureCredential(),\n",
+ " \"https://cognitiveservices.azure.com/.default\"\n",
+ ")\n",
"\n",
"RESOURCE_ENDPOINT = os.getenv(\"OPENAI_API_BASE\",\"\").strip()\n",
"assert RESOURCE_ENDPOINT, \"ERROR: Azure OpenAI Endpoint is missing\"\n",
"assert \"openai.azure.com\" in RESOURCE_ENDPOINT.lower(), \"ERROR: Azure OpenAI Endpoint should be in the form: \\n\\n\\t.openai.azure.com\"\n",
- "openai.api_base = RESOURCE_ENDPOINT\n",
"\n",
+ "openai.api_base = RESOURCE_ENDPOINT\n",
"openai.api_type = os.getenv(\"OPENAI_API_TYPE\")\n",
"openai.api_version = os.getenv(\"OPENAI_API_VERSION\")\n",
- "model=os.getenv(\"CHAT_MODEL_NAME\")\n"
+ "openai.azure_ad_token_provider = token_provider\n",
+ "chat_model=os.getenv(\"CHAT_MODEL_NAME\")\n",
+ "\n",
+ "client = AzureOpenAI(\n",
+ " azure_endpoint=RESOURCE_ENDPOINT,\n",
+ " azure_ad_token_provider=token_provider,\n",
+ " api_version=os.getenv(\"OPENAI_API_VERSION\")\n",
+ ")"
]
},
{
@@ -164,6 +176,7 @@
"outputs": [],
"source": [
"document = open(r'Insert PDF file path', 'rb') \n",
+ "\n",
"doc_helper = PyPDF3.PdfFileReader(document)"
]
},
@@ -194,12 +207,16 @@
"\n",
"try:\n",
" final_prompt = prompt + q\n",
- " response = openai.ChatCompletion.create(engine=model, messages=final_prompt, max_tokens=50)\n",
- " answer = response.choices[0].text.strip()\n",
+ " response = client.chat.completions.create(\n",
+ " model=chat_model, \n",
+ " messages=[{\"role\": \"user\", \"content\": final_prompt}], \n",
+ " max_tokens=50\n",
+ " )\n",
+ " answer = response.choices[0].message.content.strip()\n",
" print(f\"{q}\\n{answer}\\n\")\n",
"\n",
- "except InvalidRequestError as e:\n",
- " print(e.error)\n",
+ "except BadRequestError as e:\n",
+ " print(e)\n",
"\n"
]
},
@@ -387,7 +404,7 @@
],
"metadata": {
"kernelspec": {
- "display_name": "Python 3 (ipykernel)",
+ "display_name": ".venv (3.13.11)",
"language": "python",
"name": "python3"
},
@@ -401,10 +418,10 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.10.13"
+ "version": "3.13.11"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
-}
+}
\ No newline at end of file
diff --git a/066-OpenAIFundamentals/Student/Resources/notebooks/CH-03-C-Embeddings.ipynb b/066-OpenAIFundamentals/Student/Resources/notebooks/CH-03-C-Embeddings.ipynb
index 9e88ed1da1..ada7fd48bc 100644
--- a/066-OpenAIFundamentals/Student/Resources/notebooks/CH-03-C-Embeddings.ipynb
+++ b/066-OpenAIFundamentals/Student/Resources/notebooks/CH-03-C-Embeddings.ipynb
@@ -52,7 +52,6 @@
"source": [
"! pip install num2words\n",
"! pip install plotly\n",
- "! pip install \"openai==0.28.1\" \n",
"! pip install nptyping"
]
},
@@ -62,7 +61,6 @@
"metadata": {},
"outputs": [],
"source": [
- "import openai\n",
"import os\n",
"import re \n",
"import requests\n",
@@ -70,11 +68,36 @@
"from num2words import num2words \n",
"import pandas as pd \n",
"import numpy as np\n",
- "from openai.embeddings_utils import get_embedding, cosine_similarity \n",
"import tiktoken\n",
"from dotenv import load_dotenv\n",
"from tenacity import retry, wait_random_exponential, stop_after_attempt\n",
- "load_dotenv() "
+ "from sklearn.metrics.pairwise import cosine_similarity as sklearn_cosine_similarity\n",
+ "from openai import AzureOpenAI\n",
+ "from azure.identity import DefaultAzureCredential, get_bearer_token_provider\n",
+ "\n",
+ "load_dotenv()\n",
+ "\n",
+ "token_provider = get_bearer_token_provider(\n",
+ " DefaultAzureCredential(),\n",
+ " \"https://cognitiveservices.azure.com/.default\"\n",
+ ")\n",
+ "\n",
+ "# Initialize the Azure OpenAI client\n",
+ "client = AzureOpenAI(\n",
+ " azure_endpoint=os.getenv(\"OPENAI_API_BASE\"),\n",
+ " azure_ad_token_provider=token_provider,\n",
+ " api_version=os.getenv(\"OPENAI_API_VERSION\")\n",
+ ")\n",
+ "\n",
+ "# Define helper functions using the OpenAI 1.x API\n",
+ "@retry(wait=wait_random_exponential(min=1, max=20), stop=stop_after_attempt(6))\n",
+ "def get_embedding(text: str, engine: str) -> list:\n",
+ "    text = text.replace(\"\\n\", \" \")\n",
+ "    response = client.embeddings.create(input=[text], model=engine)\n",
+ "    return response.data[0].embedding\n",
+ "\n",
+ "def cosine_similarity(a, b):\n",
+ "    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))"
]
},
{
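The `cosine_similarity` helper added above is just the normalized dot product; a quick standalone check of its behavior (a sketch with hand-made vectors, not part of the notebook diff):

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: dot product divided by
    # the product of their Euclidean norms.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

a = np.array([1.0, 0.0, 2.0])
b = np.array([2.0, 0.0, 4.0])   # same direction, different magnitude
c = np.array([0.0, 3.0, 0.0])   # orthogonal to a

print(round(float(cosine_similarity(a, b)), 6))  # parallel vectors -> 1.0
print(round(float(cosine_similarity(a, c)), 6))  # orthogonal vectors -> 0.0
```

Because the norms cancel out magnitude, parallel vectors of different lengths still score 1.0, which is why embeddings are compared this way rather than with raw dot products.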
@@ -92,11 +115,8 @@
"metadata": {},
"outputs": [],
"source": [
- "openai.api_type = os.getenv(\"OPENAI_API_TYPE\")\n",
- "openai.api_key = os.environ.get(\"OPENAI_API_KEY\")\n",
- "openai.api_base = os.environ.get(\"OPENAI_API_BASE\")\n",
- "openai.api_version = os.getenv(\"OPENAI_API_VERSION\")\n",
- "embedding_model=os.getenv(\"EMBEDDING_MODEL_NAME\")"
+ "# Get the embedding model name from environment\n",
+ "embedding_model = os.getenv(\"EMBEDDING_MODEL_NAME\")"
]
},
{
@@ -119,7 +139,7 @@
"\n",
"input=\"I would like to order a pizza\"\n",
"\n",
- "# Add code here "
+ "# Add code here: Create embedding using the helper function\n"
]
},
{
@@ -127,7 +147,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "The openai.Embedding.create() method will take a list of text - here we have a single sentence - and then will return a list containing a single embedding. You can use these embeddings when searching, providing recommendations, classification, and more."
+ "The client.embeddings.create() method takes a list of texts - here a single sentence - and returns one embedding per input. You can use these embeddings for search, recommendations, classification, and more."
]
},
{
@@ -148,6 +168,7 @@
"outputs": [],
"source": [
"df=pd.read_csv(os.path.join(os.getcwd(),r'Enter path here'))\n",
+ "\n",
"df"
]
},
@@ -234,7 +255,7 @@
],
"metadata": {
"kernelspec": {
- "display_name": "Python 3 (ipykernel)",
+ "display_name": ".venv (3.13.11)",
"language": "python",
"name": "python3"
},
@@ -248,10 +269,10 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.10.13"
+ "version": "3.13.11"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
-}
+}
\ No newline at end of file
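The `@retry(wait=wait_random_exponential(min=1, max=20), stop=stop_after_attempt(6))` decorator applied to `get_embedding` above handles transient API failures. A pure-stdlib sketch of the same backoff pattern (the flaky function is hypothetical, standing in for a throttled embeddings call; the sleep is injectable so the demo runs instantly):

```python
import random

def retry_with_backoff(fn, max_attempts=6, base=1.0, cap=20.0, sleep=lambda s: None):
    """Retry fn on exception with randomized exponential backoff.

    Conceptually what tenacity's wait_random_exponential(min=1, max=20)
    combined with stop_after_attempt(6) does in the notebook.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts:
                raise
            # Random delay in [0, min(cap, base * 2^(attempt-1))].
            sleep(random.uniform(0, min(cap, base * 2 ** (attempt - 1))))

attempts = {"n": 0}
def flaky_embedding_call():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient failure")  # simulated throttling
    return [0.1, 0.2, 0.3]  # stand-in embedding vector

print(retry_with_backoff(flaky_embedding_call), attempts["n"])
```

Randomizing the delay (jitter) prevents many clients from retrying in lockstep after a shared outage.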
diff --git a/066-OpenAIFundamentals/Student/Resources/notebooks/CH-04-A-RAG_for_structured_data.ipynb b/066-OpenAIFundamentals/Student/Resources/notebooks/CH-04-A-RAG_for_structured_data.ipynb
index c35a148f3e..c045d7b144 100644
--- a/066-OpenAIFundamentals/Student/Resources/notebooks/CH-04-A-RAG_for_structured_data.ipynb
+++ b/066-OpenAIFundamentals/Student/Resources/notebooks/CH-04-A-RAG_for_structured_data.ipynb
@@ -119,9 +119,8 @@
"import pandas as pd\n",
"import numpy as np\n",
"from sklearn.metrics.pairwise import cosine_similarity\n",
- "\n",
+ "from azure.identity import DefaultAzureCredential, get_bearer_token_provider\n",
"# Azure Cognitive Search imports\n",
- "from azure.core.credentials import AzureKeyCredential\n",
"from azure.search.documents.indexes import SearchIndexClient \n",
"from azure.search.documents import SearchClient\n",
"from azure.search.documents.indexes.models import (\n",
@@ -143,7 +142,12 @@
"from semantic_kernel.connectors.ai.open_ai import AzureChatPromptExecutionSettings\n",
"\n",
"from dotenv import load_dotenv\n",
- "load_dotenv()"
+ "load_dotenv()\n",
+ "\n",
+ "token_provider = get_bearer_token_provider(\n",
+ " DefaultAzureCredential(),\n",
+ " \"https://cognitiveservices.azure.com/.default\"\n",
+ ")\n"
]
},
{
@@ -160,19 +164,19 @@
"# Initialize Semantic Kernel\n",
"kernel = sk.Kernel()\n",
"\n",
- "# Add Azure OpenAI Chat Completion service\n",
+ "# Add Azure OpenAI Chat Completion service with Entra ID authentication\n",
"chat_service = AzureChatCompletion(\n",
" deployment_name=chat_model,\n",
" endpoint=os.environ['OPENAI_API_BASE'],\n",
- " api_key=os.environ['OPENAI_API_KEY']\n",
+ " ad_token_provider=token_provider\n",
")\n",
"kernel.add_service(chat_service)\n",
"\n",
- "# Add Azure OpenAI Text Embedding service \n",
+ "# Add Azure OpenAI Text Embedding service with Entra ID authentication\n",
"embedding_service = AzureTextEmbedding(\n",
" deployment_name=embedding_model,\n",
" endpoint=os.environ['OPENAI_API_BASE'],\n",
- " api_key=os.environ['OPENAI_API_KEY']\n",
+ " ad_token_provider=token_provider\n",
")\n",
"kernel.add_service(embedding_service)\n",
"\n",
@@ -206,10 +210,13 @@
"metadata": {},
"outputs": [],
"source": [
- "# Create a Cognitive Search Index client\n",
+ "# Create a Cognitive Search Index client with Entra ID authentication\n",
+ "from azure.identity import AzureCliCredential\n",
+ "\n",
"service_endpoint = os.getenv(\"AZURE_AI_SEARCH_ENDPOINT\") \n",
- "key = os.getenv(\"AZURE_AI_SEARCH_KEY\")\n",
- "credential = AzureKeyCredential(key)\n",
+ "\n",
+ "# Use AzureCliCredential for local development (avoids DefaultAzureCredential probing multiple credential sources)\n",
+ "credential = AzureCliCredential()\n",
"\n",
"index_name = \"news-index\"\n",
"\n",
@@ -742,7 +749,7 @@
"## Section 3: Text Summarization\n",
"\n",
"This section will cover the end-to-end flow of using the GPT-3 and ChatGPT models for summarization tasks. \n",
- "The model used by the Azure OpenAI service is a generative completion call which uses natural language instructions to identify the task being asked and skill required – aka Prompt Engineering. Using this approach, the first part of the prompt includes natural language instructions and/or examples of the specific task desired. The model then completes the task by predicting the most probable next text. This technique is known as \"in-context\" learning. \n",
+ "The model used by the Azure OpenAI service is a generative completion call which uses natural language instructions to identify the task being asked and the skill required \u2013 a practice known as prompt engineering. Using this approach, the first part of the prompt includes natural language instructions and/or examples of the specific task desired. The model then completes the task by predicting the most probable next text. This technique is known as \"in-context\" learning. \n",
"\n",
"There are three main approaches for in-context learning: Zero-shot, Few-shot and Fine tuning. These approaches vary based on the amount of task-specific data that is given to the model: \n",
"\n",
@@ -751,7 +758,7 @@
"**Few-shot**: In this case, a user includes several examples in the call prompt that demonstrate the expected answer format and content. \n",
"\n",
"**Fine-Tuning**: Fine Tuning lets you tailor models to your personal datasets. This customization step will let you get more out of the service by providing: \n",
- "-\tWith lots of data (at least 500 and above) traditional optimization techniques are used with Back Propagation to re-adjust the weights of the model – this enables higher quality results than mere zero-shot or few-shot. \n",
+ "-\tWith a large dataset (at least 500 examples), traditional optimization techniques such as backpropagation are used to re-adjust the weights of the model \u2013 this enables higher-quality results than zero-shot or few-shot alone. \n",
"-\tA customized model improves the few-shot learning approach by training the model weights on your specific prompts and structure. This lets you achieve better results on a wider number of tasks without needing to provide examples in the prompt. The result is less text sent and fewer tokens \n"
]
},
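The zero-shot vs. few-shot distinction described above comes down to how many worked examples are packed into the prompt. A sketch of assembling a few-shot summarization prompt in the chat-messages format (the example texts and system instruction are illustrative, not from the notebook):

```python
def build_few_shot_prompt(examples, new_text):
    """Build a chat-format few-shot prompt: each (text, summary) pair
    becomes a user/assistant turn, then the new text is appended."""
    messages = [{"role": "system",
                 "content": "Summarize the user's text in one sentence."}]
    for text, summary in examples:
        messages.append({"role": "user", "content": text})
        messages.append({"role": "assistant", "content": summary})
    messages.append({"role": "user", "content": new_text})
    return messages

examples = [
    ("Azure Functions is a serverless compute service that runs code on demand.",
     "Azure Functions runs event-driven code without managing servers."),
]
messages = build_few_shot_prompt(examples, "Azure Blob Storage stores unstructured data at scale.")
print(len(messages))  # system + 2 per example + final user question = 4
```

A zero-shot prompt is the degenerate case with an empty `examples` list, leaving only the instruction and the new input.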
@@ -819,7 +826,7 @@
"name": "python3"
},
"kernelspec": {
- "display_name": "Python 3",
+ "display_name": ".venv (3.13.11)",
"language": "python",
"name": "python3"
},
@@ -833,7 +840,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.11.13"
+ "version": "3.13.11"
},
"microsoft": {
"host": {
@@ -848,4 +855,4 @@
},
"nbformat": 4,
"nbformat_minor": 5
-}
+}
\ No newline at end of file
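The `get_bearer_token_provider(DefaultAzureCredential(), scope)` pattern introduced throughout this diff wraps a credential into a zero-argument callable that returns a valid access token on each call. A minimal stand-in showing the shape of that contract (the token values here are fake; the real provider exchanges them with Entra ID):

```python
import time

def make_token_provider(lifetime_seconds=3600):
    """Return a callable mimicking get_bearer_token_provider: caches a
    token and mints a new one only when the cached one nears expiry."""
    state = {"token": None, "expires": 0.0}

    def provider():
        now = time.time()
        # Refresh with a 5-minute safety margin before expiry.
        if state["token"] is None or now >= state["expires"] - 300:
            state["token"] = f"fake-token-{int(now)}"  # placeholder value
            state["expires"] = now + lifetime_seconds
        return state["token"]

    return provider

token_provider = make_token_provider()
print(token_provider() == token_provider())  # cached within its lifetime -> True
```

Because the SDK calls the provider whenever it needs a token, long-running clients pick up refreshed tokens transparently, which is what makes this pattern preferable to pasting a static API key.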
diff --git a/066-OpenAIFundamentals/Student/Resources/notebooks/CH-04-B-RAG_for_unstructured_data.ipynb b/066-OpenAIFundamentals/Student/Resources/notebooks/CH-04-B-RAG_for_unstructured_data.ipynb
index e4ca2b4acd..ca4f23f2ea 100644
--- a/066-OpenAIFundamentals/Student/Resources/notebooks/CH-04-B-RAG_for_unstructured_data.ipynb
+++ b/066-OpenAIFundamentals/Student/Resources/notebooks/CH-04-B-RAG_for_unstructured_data.ipynb
@@ -49,6 +49,7 @@
"from azure.core.credentials import AzureKeyCredential\n",
"from azure.search.documents.indexes import SearchIndexClient \n",
"from azure.search.documents import SearchClient\n",
+ "from azure.identity import DefaultAzureCredential, get_bearer_token_provider\n",
"from azure.search.documents.indexes.models import (\n",
" SearchIndex,\n",
" SearchField,\n",
@@ -66,7 +67,12 @@
"import numpy as np\n",
"\n",
"from dotenv import load_dotenv\n",
- "load_dotenv()"
+ "load_dotenv()\n",
+ "\n",
+ "token_provider = get_bearer_token_provider(\n",
+ " DefaultAzureCredential(),\n",
+ " \"https://cognitiveservices.azure.com/.default\"\n",
+ ")"
]
},
{
@@ -80,10 +86,11 @@
"# Initialize the Azure OpenAI client for the latest version\n",
"from openai import AzureOpenAI\n",
"\n",
+ "# Initialize the Azure OpenAI client\n",
"client = AzureOpenAI(\n",
- " api_key=os.environ['OPENAI_API_KEY'],\n",
- " api_version=os.environ['OPENAI_API_VERSION'],\n",
- " azure_endpoint=os.environ['OPENAI_API_BASE']\n",
+ " azure_endpoint=os.getenv(\"OPENAI_API_BASE\"),\n",
+ " azure_ad_token_provider=token_provider,\n",
+ " api_version=os.getenv(\"OPENAI_API_VERSION\")\n",
")\n",
"\n",
"chat_model = os.environ['CHAT_MODEL_NAME']\n",
@@ -115,14 +122,15 @@
"metadata": {},
"outputs": [],
"source": [
- "from azure.core.credentials import AzureKeyCredential\n",
"from azure.ai.formrecognizer import DocumentAnalysisClient\n",
"\n",
- "endpoint = os.environ[\"AZURE_DOC_INTELLIGENCE_ENDPOINT\"]\n",
- "key = os.environ[\"AZURE_DOC_INTELLIGENCE_KEY\"]\n",
+ "endpoint = os.environ[\"DOCUMENT_INTELLIGENCE_ENDPOINT\"]\n",
+ "\n",
+ "# Use Entra ID authentication instead of API key\n",
+ "credential = DefaultAzureCredential()\n",
"\n",
"document_analysis_client = DocumentAnalysisClient(\n",
- " endpoint=endpoint, credential=AzureKeyCredential(key)\n",
+ " endpoint=endpoint, credential=credential\n",
")"
]
},
@@ -263,8 +271,7 @@
"source": [
"# Create an SDK client\n",
"service_endpoint = os.getenv(\"AZURE_AI_SEARCH_ENDPOINT\") \n",
- "key = os.getenv(\"AZURE_AI_SEARCH_KEY\")\n",
- "credential = AzureKeyCredential(key)\n",
+ "credential = DefaultAzureCredential()\n",
"\n",
"index_name = \"research-paper-index\"\n",
"\n",
@@ -545,18 +552,11 @@
"answer = query_search(\"what is prompt tuning?\", 10)\n",
"print(answer)"
]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": []
}
],
"metadata": {
"kernelspec": {
- "display_name": "Python 3",
+ "display_name": ".venv (3.13.11)",
"language": "python",
"name": "python3"
},
@@ -570,10 +570,10 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.11.13"
+ "version": "3.13.11"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
-}
+}
\ No newline at end of file
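Under the hood, the retrieval step this notebook builds (`query_search` over the `research-paper-index`) ranks documents by cosine similarity between the query embedding and each document embedding. A toy version with fabricated two-dimensional vectors (real ones come from the embedding model):

```python
import numpy as np

def rank_by_similarity(query_vec, doc_vecs):
    """Return document indices sorted by descending cosine similarity
    to the query vector."""
    q = np.asarray(query_vec, dtype=float)
    sims = []
    for d in doc_vecs:
        d = np.asarray(d, dtype=float)
        sims.append(np.dot(q, d) / (np.linalg.norm(q) * np.linalg.norm(d)))
    return sorted(range(len(doc_vecs)), key=lambda i: sims[i], reverse=True)

query = [1.0, 0.0]
docs = [[0.0, 1.0],    # orthogonal -> similarity 0
        [1.0, 0.1],    # nearly parallel -> highest score
        [-1.0, 0.0]]   # opposite direction -> -1
print(rank_by_similarity(query, docs))  # [1, 0, 2]
```

Production vector search (e.g., Azure AI Search) uses approximate-nearest-neighbor indexes instead of this brute-force scan, but the ranking criterion is the same.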
diff --git a/066-OpenAIFundamentals/Student/Resources/notebooks/CH-06-AgenticAI.ipynb b/066-OpenAIFundamentals/Student/Resources/notebooks/CH-06-AgenticAI.ipynb
new file mode 100644
index 0000000000..a7107af7da
--- /dev/null
+++ b/066-OpenAIFundamentals/Student/Resources/notebooks/CH-06-AgenticAI.ipynb
@@ -0,0 +1,411 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "243b51a2",
+ "metadata": {},
+ "source": [
+ "# Challenge 06 - Agentic AI\n",
+ "\n",
+ "In this notebook, you will build a **Research Assistant Agent** using the Microsoft Agent Framework. This agent leverages **Model Context Protocol (MCP)** to connect to live data sources like Microsoft Learn documentation."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "96883d46",
+ "metadata": {},
+ "source": [
+ "Quick tip! To view the Table of Contents for this Notebook in VS Code or within Codespaces, open the \"Explorer\" tab and expand the \"Outline\" section."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "4a5d9005",
+ "metadata": {},
+ "source": [
+ "## 6.1. Setting Up Your Environment\n",
+ "\n",
+ "First, install the Microsoft Agent Framework. The `--pre` flag is required while the Agent Framework is in preview."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "c0865873",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "%pip install agent-framework-azure-ai --pre"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "3d6e6900",
+ "metadata": {},
+ "source": [
+ "### 6.1.1 Load Environment Variables\n",
+ "\n",
+ "Load your Microsoft Foundry project endpoint and model deployment name from the `.env` file.\n",
+ "\n",
+ "**NOTE:** These values in your .env file are required to ensure the notebook runs seamlessly. They should already be there if you deployed using the deployment script in Challenge 0.\n",
+ "* AZURE_AI_PROJECT_ENDPOINT must equal your Microsoft Foundry project endpoint\n",
+ "* CHAT_MODEL_NAME must equal the deployed model's name (e.g., `gpt-4o`)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "79e84127",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import os\n",
+ "\n",
+ "from dotenv import load_dotenv, find_dotenv\n",
+ "load_dotenv(find_dotenv())\n",
+ "\n",
+ "# Note: We use the async version of DefaultAzureCredential for the Agent Framework\n",
+ "from azure.identity.aio import DefaultAzureCredential"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "2d5eec04",
+ "metadata": {},
+ "source": [
+ "## 6.2. Creating the Research Assistant Agent\n",
+ "\n",
+ "### 6.2.1 Import Required Libraries\n",
+ "\n",
+ "Import the Agent Framework components and Azure Identity for authentication."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "4a994927",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from agent_framework.azure import AzureAIClient\n",
+ "from agent_framework import MCPStreamableHTTPTool"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b3394b76",
+ "metadata": {},
+ "source": [
+ "### 6.2.2 Define the MCP Tool\n",
+ "\n",
+ "Create a function that returns the MCP tool configuration for Microsoft Learn documentation. This allows your agent to query live, up-to-date documentation."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "9bd08b8d",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def create_mcp_tools():\n",
+ " \"\"\"Create MCP tools for the Research Assistant agent.\"\"\"\n",
+ " return [\n",
+ " MCPStreamableHTTPTool(\n",
+ " name=\"Microsoft Learn MCP\",\n",
+ " description=\"Provides trusted, up-to-date information from Microsoft's official documentation\",\n",
+ " url=\"https://learn.microsoft.com/api/mcp\",\n",
+ " )\n",
+ " ]"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "ce749f50",
+ "metadata": {},
+ "source": [
+ "### 6.2.3 Define the Agent Instructions\n",
+ "\n",
+ "Create the system instructions that define how the Research Assistant should behave."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "b42304d3",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "AGENT_INSTRUCTIONS = \"\"\"\n",
+ "You are a helpful research assistant that specializes in Azure and Microsoft technologies. \n",
+ "\n",
+ "Your responsibilities:\n",
+ "1. Use the Microsoft Learn MCP tool to find accurate, up-to-date documentation when answering questions\n",
+ "2. Always cite your sources by providing links to the documentation\n",
+ "3. If you're unsure about something, acknowledge it and suggest where the user might find more information\n",
+ "4. Provide clear, concise explanations suitable for developers of varying experience levels\n",
+ "\n",
+ "When responding:\n",
+ "- Start with a direct answer to the question\n",
+ "- Provide relevant code examples when appropriate\n",
+ "- Include links to official documentation for further reading\n",
+ "\"\"\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "8b414374",
+ "metadata": {},
+ "source": [
+ "### 6.2.4 Set Up Environment Variables\n",
+ "\n",
+ "Load the project endpoint and model deployment from your `.env` file."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "ea960cc0",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "PROJECT_ENDPOINT = os.getenv(\"AZURE_AI_PROJECT_ENDPOINT\", \"\").strip()\n",
+ "assert PROJECT_ENDPOINT, \"ERROR: AZURE_AI_PROJECT_ENDPOINT is missing\"\n",
+ "\n",
+ "MODEL_DEPLOYMENT = os.getenv(\"CHAT_MODEL_NAME\", \"\").strip()\n",
+ "assert MODEL_DEPLOYMENT, \"ERROR: CHAT_MODEL_NAME is missing\"\n",
+ "\n",
+ "print(f\"Project Endpoint: {PROJECT_ENDPOINT}\")\n",
+ "print(f\"Model Deployment: {MODEL_DEPLOYMENT}\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "968fcd8f",
+ "metadata": {},
+ "source": [
+ "## 6.3. Testing the Research Assistant\n",
+ "\n",
+ "### 6.3.1 Single Query Test\n",
+ "\n",
+ "Let's test the agent with a single question about Azure services."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "eeee9085",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "async def ask_agent(question: str):\n",
+ " \"\"\"Send a single question to the Research Assistant agent.\"\"\"\n",
+ " async with (\n",
+ " DefaultAzureCredential() as credential,\n",
+ " AzureAIClient(\n",
+ " project_endpoint=PROJECT_ENDPOINT,\n",
+ " model_deployment_name=MODEL_DEPLOYMENT,\n",
+ " credential=credential,\n",
+ " ).as_agent(\n",
+ " name=\"ResearchAssistant\",\n",
+ " instructions=AGENT_INSTRUCTIONS,\n",
+ " tools=create_mcp_tools(),\n",
+ " ) as agent,\n",
+ " ):\n",
+ " print(f\"Question: {question}\\n\")\n",
+ " print(\"Assistant: \", end=\"\", flush=True)\n",
+ " \n",
+ " async for chunk in agent.run_stream(question):\n",
+ " if chunk.text:\n",
+ " print(chunk.text, end=\"\", flush=True)\n",
+ " print(\"\\n\")\n",
+ "\n",
+ "# Test with a sample question\n",
+ "await ask_agent(\"What is Azure Kubernetes Service and when should I use it?\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "94073cb2",
+ "metadata": {},
+ "source": [
+ "### 6.3.2 Multi-Turn Conversation with Thread\n",
+ "\n",
+ "One of the powerful features of the Agent Framework is thread persistence, which maintains context across multiple conversation turns."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "a8710a29",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "async def multi_turn_conversation(questions: list):\n",
+ " \"\"\"Demonstrate multi-turn conversation with context retention.\"\"\"\n",
+ " async with (\n",
+ " DefaultAzureCredential() as credential,\n",
+ " AzureAIClient(\n",
+ " project_endpoint=PROJECT_ENDPOINT,\n",
+ " model_deployment_name=MODEL_DEPLOYMENT,\n",
+ " credential=credential,\n",
+ " ).as_agent(\n",
+ " name=\"ResearchAssistant\",\n",
+ " instructions=AGENT_INSTRUCTIONS,\n",
+ " tools=create_mcp_tools(),\n",
+ " ) as agent,\n",
+ " ):\n",
+ " # Create a thread for multi-turn conversation\n",
+ " thread = agent.get_new_thread()\n",
+ " \n",
+ " for i, question in enumerate(questions, 1):\n",
+ " print(f\"--- Turn {i} ---\")\n",
+ " print(f\"You: {question}\\n\")\n",
+ " print(\"Assistant: \", end=\"\", flush=True)\n",
+ " \n",
+ " async for chunk in agent.run_stream(question, thread=thread):\n",
+ " if chunk.text:\n",
+ " print(chunk.text, end=\"\", flush=True)\n",
+ " print(\"\\n\")\n",
+ "\n",
+ "# Test multi-turn conversation\n",
+ "questions = [\n",
+ " \"How do I set up managed identity for an Azure Function?\",\n",
+ " \"Can you show me a code example for that?\",\n",
+ " \"What are the security benefits of using managed identity instead of connection strings?\"\n",
+ "]\n",
+ "\n",
+ "await multi_turn_conversation(questions)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b65362b9",
+ "metadata": {},
+ "source": [
+ "## 6.4. Exploring Agent Capabilities\n",
+ "\n",
+ "### 6.4.1 Adding Custom Tools\n",
+ "\n",
+ "In addition to MCP tools, you can create custom Python functions as tools. Here's an example of adding a simple calculation tool."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "47556caa",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from typing import Annotated\n",
+ "\n",
+ "def calculate_azure_storage_cost(\n",
+ " storage_gb: Annotated[float, \"Amount of storage in GB\"],\n",
+ " tier: Annotated[str, \"Storage tier: 'hot', 'cool', or 'archive'\"] = \"hot\"\n",
+ ") -> str:\n",
+ " \"\"\"Calculate estimated monthly cost for Azure Blob Storage.\"\"\"\n",
+ " # Simplified pricing (actual prices vary by region)\n",
+ " prices = {\n",
+ " \"hot\": 0.0184,\n",
+ " \"cool\": 0.01,\n",
+ " \"archive\": 0.00099\n",
+ " }\n",
+ " price_per_gb = prices.get(tier.lower(), prices[\"hot\"])\n",
+ " monthly_cost = storage_gb * price_per_gb\n",
+ " return f\"Estimated monthly cost for {storage_gb} GB on {tier} tier: ${monthly_cost:.2f}\"\n",
+ "\n",
+ "# You can add this tool to your agent like this:\n",
+ "# tools=[create_mcp_tools()[0], calculate_azure_storage_cost]"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "42ab6430",
+ "metadata": {},
+ "source": [
+ "### 6.4.2 Try It Yourself!\n",
+ "\n",
+ "Use the cell below to ask your own questions to the Research Assistant. Modify the question and run the cell to see the response."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "56a14bd3",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Try your own question!\n",
+ "your_question = \"What are the best practices for Azure OpenAI prompt engineering?\"\n",
+ "\n",
+ "await ask_agent(your_question)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "5e733c45",
+ "metadata": {},
+ "source": [
+ "### 6.4.3 Try this in the new Foundry Portal (optional)\n",
+ "\n",
+ "The Microsoft Foundry portal also provides a no-code experience for creating and testing agents. If you'd like to explore the portal-based approach:\n",
+ "\n",
+ "1. Navigate to [Microsoft Foundry](https://ai.azure.com) and open your project using the New Foundry portal \n",
+ "2. Click **Build** in the top right\n",
+ "3. If you completed the steps above, you should already see a ResearchAssistant agent; select it. Otherwise, create a new agent and give it a name like \"ResearchAssistant\"\n",
+ "4. Add instructions similar to what we defined in `AGENT_INSTRUCTIONS` above\n",
+ "5. Under **Tools**, add the Microsoft Learn MCP tool to give your agent access to documentation\n",
+ "6. Use the **Playground** to test your agent with the same questions you tried in this notebook\n",
+ "\n",
+ "Compare the portal experience with the code-first approach you used here. Consider:\n",
+ "- When would you prefer the portal vs. code?\n",
+ "- How might you use both together in a development workflow?"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "ebf0f473",
+ "metadata": {},
+ "source": [
+ "## 6.5. Summary\n",
+ "\n",
+ "In this notebook, you learned how to:\n",
+ "\n",
+ "1. **Set up the Microsoft Agent Framework** with the `agent-framework-azure-ai` package\n",
+ "2. **Create MCP tools** to connect your agent to live data sources (Microsoft Learn)\n",
+ "3. **Build a Research Assistant agent** with custom instructions\n",
+ "4. **Use thread persistence** for multi-turn conversations\n",
+ "5. **Extend agents with custom tools** using Python functions\n",
+ "\n",
+ "### Next Steps\n",
+ "\n",
+ "Consider exploring:\n",
+ "- Adding more MCP tools (e.g., GitHub, databases)\n",
+ "- Creating multi-agent systems for complex workflows\n",
+ "- Implementing agent handoffs for specialized tasks\n",
+ "- Adding memory and state management for long-running agents"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": ".venv (3.13.11)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.13.11"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
\ No newline at end of file
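The pricing arithmetic in the `calculate_azure_storage_cost` custom tool above is simple enough to verify by hand (the per-GB prices are the simplified placeholders from the notebook cell, not real regional rates):

```python
def calculate_azure_storage_cost(storage_gb, tier="hot"):
    # Simplified per-GB monthly prices copied from the notebook cell;
    # actual Azure Blob Storage prices vary by region.
    prices = {"hot": 0.0184, "cool": 0.01, "archive": 0.00099}
    price_per_gb = prices.get(tier.lower(), prices["hot"])
    monthly_cost = storage_gb * price_per_gb
    return f"Estimated monthly cost for {storage_gb} GB on {tier} tier: ${monthly_cost:.2f}"

print(calculate_azure_storage_cost(100, "hot"))       # 100 * 0.0184  -> $1.84
print(calculate_azure_storage_cost(1000, "archive"))  # 1000 * 0.00099 -> $0.99
```

Note the graceful fallback: an unknown tier silently uses the hot price, which is exactly the kind of behavior worth documenting in a tool description so the agent does not misreport costs.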
diff --git a/066-OpenAIFundamentals/Student/Resources/notebooks/CH-5.7-RedTeaming.ipynb b/066-OpenAIFundamentals/Student/Resources/notebooks/CH-5.7-RedTeaming.ipynb
index 0bf538deef..02edcd59e6 100644
--- a/066-OpenAIFundamentals/Student/Resources/notebooks/CH-5.7-RedTeaming.ipynb
+++ b/066-OpenAIFundamentals/Student/Resources/notebooks/CH-5.7-RedTeaming.ipynb
@@ -199,7 +199,7 @@
],
"metadata": {
"kernelspec": {
- "display_name": "Python 3",
+ "display_name": ".venv (3.13.11)",
"language": "python",
"name": "python3"
},
@@ -213,9 +213,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.11.13"
+ "version": "3.13.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
-}
+}
\ No newline at end of file