Resolves inconsistency between two different implementations of
num_tokens_from_messages in the cookbook notebooks by creating a
unified utility function that supports all current OpenAI models.
- Created shared token_counting_utils.py module in examples/utils/
- Consolidated logic from both notebook versions into single function
- Added support for all current models including gpt-4o variants
- Updated both notebooks to import from shared utility module
- Maintains backward compatibility with existing code
This ensures consistent token counting across all cookbook examples
and makes it easier to maintain model support in one location.
Co-Authored-By: Claude <noreply@anthropic.com>
-        " f\"\"\"num_tokens_from_messages() is not implemented for model {model}.\"\"\"\n",
-        " )\n",
-        " num_tokens = 0\n",
-        " for message in messages:\n",
-        " num_tokens += tokens_per_message\n",
-        " for key, value in message.items():\n",
-        " num_tokens += len(encoding.encode(value))\n",
-        " if key == \"name\":\n",
-        " num_tokens += tokens_per_name\n",
-        " num_tokens += 3 # every reply is primed with <|start|>assistant<|message|>\n",
-        " return num_tokens\n"
-    ]
+    "source": "# Import the unified token counting function\nimport sys\nimport os\n# Add the utils directory to the path so we can import our utility\nsys.path.append(os.path.join(os.path.dirname(os.path.abspath('.')), 'utils'))\n\nfrom utils.token_counting_utils import num_tokens_from_messages\n\n# The num_tokens_from_messages function is now imported from the shared utility module\n# It supports all current OpenAI models including:\n# - gpt-3.5-turbo variants\n# - gpt-4 variants \n# - gpt-4o and gpt-4o-mini variants"
-        " f\"\"\"num_tokens_from_messages() is not implemented for model {model}.\"\"\"\n",
-        " )\n",
-        " num_tokens = 0\n",
-        " for message in messages:\n",
-        " num_tokens += tokens_per_message\n",
-        " for key, value in message.items():\n",
-        " num_tokens += len(encoding.encode(value))\n",
-        " if key == \"name\":\n",
-        " num_tokens += tokens_per_name\n",
-        " num_tokens += 3 # every reply is primed with <|start|>assistant<|message|>\n",
-        " return num_tokens\n"
-    ]
+    "source": "# Import the unified token counting function\nimport sys\nimport os\n# Add the utils directory to the path so we can import our utility\nsys.path.append(os.path.join(os.path.dirname(os.path.abspath('.')), 'utils'))\n\nfrom utils.token_counting_utils import num_tokens_from_messages\n\n# The num_tokens_from_messages function is now imported from the shared utility module\n# It supports all current OpenAI models including:\n# - gpt-3.5-turbo variants\n# - gpt-4 variants \n# - gpt-4o and gpt-4o-mini variants"