client.summarize(...)
-
-
-
This endpoint analyzes videos and generates summaries, chapters, or highlights. Optionally, you can provide a prompt to customize the output.
This endpoint is rate-limited. For details, see the [Rate limits](/v1.3/docs/get-started/rate-limits) page.
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.summarize(
    video_id="6298d673f1090f1100476d4c",
    type="summary",
    prompt="Generate a summary of this video for a social media post, up to two sentences.",
    temperature=0.2,
)
```
-
-
-
video_id:
str - The unique identifier of the video that you want to summarize.
-
type:
str - Specifies the type of summary. Use one of the following values:
- `summary`: A brief that encapsulates the key points of a video, presenting the most important information clearly and concisely.
- `chapter`: A chronological list of all the chapters in a video, providing a granular breakdown of its content. For each chapter, the platform returns its start and end times, measured in seconds from the beginning of the video clip, a descriptive headline that offers a brief overview of the events or activities within that part of the video, and an accompanying summary that elaborates on the headline.
- `highlight`: A chronologically ordered list of the most important events within a video. Unlike chapters, highlights capture only the key moments, providing a snapshot of the video's main topics. For each highlight, the platform returns its start and end times, measured in seconds from the beginning of the video, a title, and a brief description that captures the essence of this part of the video.
-
prompt:
typing.Optional[str] - Use this field to provide context for the summarization task, such as the target audience, style, tone of voice, and purpose.
- Your prompts can be instructive or descriptive, or you can phrase them as questions.
- The maximum length of a prompt is 2,000 tokens.
Example: "Generate a summary of this video for a social media post, up to two sentences."
-
temperature:
typing.Optional[float] - Controls the randomness of the text output generated by the model. A higher value generates more creative text, while a lower value produces more deterministic text output.
Default: 0.2. Min: 0. Max: 1.
-
response_format:
typing.Optional[ResponseFormat] - Use this parameter to specify the format of the response. This parameter is only valid when the `type` parameter is set to `summary`. If you omit this parameter, the platform returns unstructured text.
-
max_tokens:
typing.Optional[int] - The maximum number of tokens to generate.
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.gist(...)
-
-
-
This endpoint analyzes videos and generates titles, topics, and hashtags.
This endpoint is rate-limited. For details, see the [Rate limits](/v1.3/docs/get-started/rate-limits) page.
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.gist(
    video_id="6298d673f1090f1100476d4c",
    types=["title", "topic"],
)
```
-
-
-
video_id:
str - The unique identifier of the video that you want to generate a gist for.
-
types:
typing.Sequence[GistRequestTypesItem] - Specifies the types of gist. Use one or more of the following values:
- `title`: A title succinctly captures a video's main theme, such as "From Consumerism to Minimalism: A Journey Toward Sustainable Living," guiding viewers to its content and themes.
- `topic`: A topic is the central theme of a video, such as "Shopping Vlog Lifestyle," summarizing its content for efficient categorization and reference.
- `hashtag`: A hashtag, like "#BlackFriday," represents key themes in a video, enhancing its discoverability and categorization on social media platforms.
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.generate(...)
-
-
-
This endpoint is deprecated. Use the [`/analyze`](/v1.3/api-reference/analyze-videos/analyze) endpoint instead, which provides identical functionality.
This endpoint generates open-ended texts based on your videos, including but not limited to tables of content, action items, memos, and detailed analyses.
- This endpoint is rate-limited. For details, see the [Rate limits](/v1.3/docs/get-started/rate-limits) page.
- This endpoint supports streaming responses. For details on integrating this feature into your application, refer to the [Open-ended analysis](/v1.3/docs/guides/analyze-videos/open-ended-analysis#streaming-responses) guide.
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.generate(
    video_id="6298d673f1090f1100476d4c",
    prompt="I want to generate a description for my video with the following format - Title of the video, followed by a summary in 2-3 sentences, highlighting the main topic, key events, and concluding remarks.",
    temperature=0.2,
    stream=True,
)
```
-
-
-
video_id:
str - The unique identifier of the video for which you wish to generate a text.
-
prompt:
str - A prompt that guides the model on the desired format or content.
- Even though the model behind this endpoint is trained to a high degree of accuracy, the preciseness of the generated text may vary based on the nature and quality of the video and the clarity of the prompt.
- Your prompts can be instructive or descriptive, or you can phrase them as questions.
- The maximum length of a prompt is 2,000 tokens.
Examples:
- "Based on this video, I want to generate five keywords for SEO (Search Engine Optimization)."
- "I want to generate a description for my video with the following format: Title of the video, followed by a summary in 2-3 sentences, highlighting the main topic, key events, and concluding remarks."
-
temperature:
typing.Optional[float] - Controls the randomness of the text output generated by the model. A higher value generates more creative text, while a lower value produces more deterministic text output.
Default: 0.2. Min: 0. Max: 1.
-
stream:
typing.Optional[bool] - Set this parameter to `true` to enable streaming responses in the NDJSON format.
Default: `true`.
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.analyze_stream(...)
-
-
-
This endpoint analyzes your videos and creates fully customizable text based on your prompts, including but not limited to tables of content, action items, memos, and detailed analyses.
- This endpoint is rate-limited. For details, see the [Rate limits](/v1.3/docs/get-started/rate-limits) page.
- This endpoint supports streaming responses. For details on integrating this feature into your application, refer to the [Open-ended analysis](/v1.3/docs/guides/analyze-videos/open-ended-analysis#streaming-responses) guide.
-
-
-
```python
from twelvelabs import ResponseFormat, TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
response = client.analyze_stream(
    video_id="6298d673f1090f1100476d4c",
    prompt="I want to generate a description for my video with the following format - Title of the video, followed by a summary in 2-3 sentences, highlighting the main topic, key events, and concluding remarks.",
    temperature=0.2,
    response_format=ResponseFormat(
        json_schema={
            "type": "object",
            "properties": {
                "title": {"type": "string"},
                "summary": {"type": "string"},
                "keywords": {"type": "array", "items": {"type": "string"}},
            },
        },
    ),
    max_tokens=2000,
)
for chunk in response.data:
    print(chunk)
```
-
-
-
video_id:
str - The unique identifier of the video for which you wish to generate a text.
-
prompt:
str - A prompt that guides the model on the desired format or content.
- Even though the model behind this endpoint is trained to a high degree of accuracy, the preciseness of the generated text may vary based on the nature and quality of the video and the clarity of the prompt.
- Your prompts can be instructive or descriptive, or you can phrase them as questions.
- The maximum length of a prompt is 2,000 tokens.
Examples:
- "Based on this video, I want to generate five keywords for SEO (Search Engine Optimization)."
- "I want to generate a description for my video with the following format: Title of the video, followed by a summary in 2-3 sentences, highlighting the main topic, key events, and concluding remarks."
-
temperature:
typing.Optional[float] - Controls the randomness of the text output generated by the model. A higher value generates more creative text, while a lower value produces more deterministic text output.
Default: 0.2. Min: 0. Max: 1.
-
response_format:
typing.Optional[ResponseFormat] - Use this parameter to specify the format of the response.
-
max_tokens:
typing.Optional[int] - The maximum number of tokens to generate.
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.analyze(...)
-
-
-
This endpoint analyzes your videos and creates fully customizable text based on your prompts, including but not limited to tables of content, action items, memos, and detailed analyses.
- This endpoint is rate-limited. For details, see the [Rate limits](/v1.3/docs/get-started/rate-limits) page.
- This endpoint supports streaming responses. For details on integrating this feature into your application, refer to the [Open-ended analysis](/v1.3/docs/guides/analyze-videos/open-ended-analysis#streaming-responses) guide.
-
-
-
```python
from twelvelabs import ResponseFormat, TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.analyze(
    video_id="6298d673f1090f1100476d4c",
    prompt="I want to generate a description for my video with the following format - Title of the video, followed by a summary in 2-3 sentences, highlighting the main topic, key events, and concluding remarks.",
    temperature=0.2,
    response_format=ResponseFormat(
        json_schema={
            "type": "object",
            "properties": {
                "title": {"type": "string"},
                "summary": {"type": "string"},
                "keywords": {"type": "array", "items": {"type": "string"}},
            },
        },
    ),
    max_tokens=2000,
)
```
-
-
-
video_id:
str - The unique identifier of the video for which you wish to generate a text.
-
prompt:
str - A prompt that guides the model on the desired format or content.
- Even though the model behind this endpoint is trained to a high degree of accuracy, the preciseness of the generated text may vary based on the nature and quality of the video and the clarity of the prompt.
- Your prompts can be instructive or descriptive, or you can phrase them as questions.
- The maximum length of a prompt is 2,000 tokens.
Examples:
- "Based on this video, I want to generate five keywords for SEO (Search Engine Optimization)."
- "I want to generate a description for my video with the following format: Title of the video, followed by a summary in 2-3 sentences, highlighting the main topic, key events, and concluding remarks."
-
temperature:
typing.Optional[float] - Controls the randomness of the text output generated by the model. A higher value generates more creative text, while a lower value produces more deterministic text output.
Default: 0.2. Min: 0. Max: 1.
-
response_format:
typing.Optional[ResponseFormat] - Use this parameter to specify the format of the response.
-
max_tokens:
typing.Optional[int] - The maximum number of tokens to generate.
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.tasks.list(...)
-
-
-
This method returns a list of the video indexing tasks in your account. The platform returns your video indexing tasks sorted by creation date, with the newest at the top of the list.
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
response = client.tasks.list(
    page=1,
    page_limit=10,
    sort_by="created_at",
    sort_option="desc",
    index_id="630aff993fcee0532cb809d0",
    filename="01.mp4",
    duration=531.998133,
    width=640,
    height=360,
    created_at="2024-03-01T00:00:00Z",
    updated_at="2024-03-01T00:00:00Z",
)
for item in response:
    print(item)

# Alternatively, you can paginate page by page.
for page in response.iter_pages():
    print(page)
```
-
-
-
page:
typing.Optional[int] - A number that identifies the page to retrieve.
Default: 1.
-
page_limit:
typing.Optional[int] - The number of items to return on each page.
Default: 10. Max: 50.
-
sort_by:
typing.Optional[str] - The field to sort on. The following options are available:
- `updated_at`: Sorts by the time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"), when the item was updated.
- `created_at`: Sorts by the time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"), when the item was created.
Default: `created_at`.
-
sort_option:
typing.Optional[str] - The sorting direction. The following options are available:
- `asc`
- `desc`
Default: `desc`.
-
index_id:
typing.Optional[str] - Filter by the unique identifier of an index.
-
status:
typing.Optional[typing.Union[TasksListRequestStatusItem, typing.Sequence[TasksListRequestStatusItem]]] - Filter by one or more video indexing task statuses. The following options are available:
- `ready`: The video has been successfully uploaded and indexed.
- `uploading`: The video is being uploaded.
- `validating`: The video is being validated against the prerequisites.
- `pending`: The video is pending.
- `queued`: The video is queued.
- `indexing`: The video is being indexed.
- `failed`: The video indexing task failed.
To filter by multiple statuses, specify the `status` parameter once for each value: `status=ready&status=validating`.
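The `status=ready&status=validating` form above is standard repeated-query-parameter encoding. With Python's standard library you can reproduce it like so; when you pass a sequence to the SDK, it performs the equivalent encoding for you:

```python
from urllib.parse import urlencode

# Encode a multi-valued status filter the way the API expects it:
# one status key per value.
params = urlencode({"status": ["ready", "validating"]}, doseq=True)
print(params)  # → status=ready&status=validating
```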
-
filename:
typing.Optional[str] - Filter by filename.
-
duration:
typing.Optional[float] - Filter by duration, expressed in seconds.
-
width:
typing.Optional[int] - Filter by width.
-
height:
typing.Optional[int] - Filter by height.
-
created_at:
typing.Optional[str] - Filter video indexing tasks by the creation date and time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"). The platform returns the video indexing tasks that were created on the specified date at or after the given time.
-
updated_at:
typing.Optional[str] - Filter video indexing tasks by the last update date and time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"). The platform returns the video indexing tasks that were updated on the specified date at or after the given time.
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.tasks.create(...)
-
-
-
This method creates a video indexing task that uploads and indexes a video in a single operation.
This endpoint bundles two operations (upload and indexing) together. In the next major API release, this endpoint will be removed in favor of a separated workflow:
1. Upload your video using the [`POST /assets`](/v1.3/api-reference/upload-content/direct-uploads/create) endpoint.
2. Index the uploaded video using the [`POST /indexes/{index-id}/indexed-assets`](/v1.3/api-reference/index-content/create) endpoint.
This separation provides better control, reusability of assets, and improved error handling. New implementations should use the new workflow.
Upload options:
- Local file: Use the `video_file` parameter.
- Publicly accessible URL: Use the `video_url` parameter.
Your video files must meet requirements based on your workflow:
- Search: Marengo requirements.
- Video analysis: Pegasus requirements.
- If you want to both search and analyze your videos, the most restrictive requirements apply.
- This method allows you to upload files up to 2 GB in size. To upload larger files, use the Multipart Upload API.
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.tasks.create(
    index_id="index_id",
)
```
-
-
-
index_id:
str - The unique identifier of the index to which the video is being uploaded.
-
video_file:
typing.Optional[core.File] - See core.File for more documentation.
-
video_url:
typing.Optional[str] - Specify this parameter to upload a video from a publicly accessible URL.
-
enable_video_stream:
typing.Optional[bool] - This parameter indicates whether the platform stores the video for streaming. When set to `true`, the platform stores the video, and you can retrieve its URL by calling the `GET` method of the `/indexes/{index-id}/videos/{video-id}` endpoint. You can then use this URL to access the stream over the HLS protocol.
-
user_metadata:
typing.Optional[str] - Metadata that helps you categorize your videos. You can specify a list of keys and values. Keys must be of type `string`, and values can be of the following types: `string`, `integer`, `float`, or `boolean`.
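Because `user_metadata` is a string, key-value pairs are typically serialized to JSON before being passed. A minimal sketch; the keys and values below are illustrative, not required names:

```python
import json

# Hypothetical metadata for a video. Keys are strings; values are
# strings, integers, floats, or booleans, as the parameter requires.
metadata = {
    "category": "recentlyAdded",
    "batchNumber": 5,
    "rating": 9.3,
    "needsReview": True,
}

# Serialize to a JSON string suitable for the user_metadata parameter.
user_metadata = json.dumps(metadata)

# The serialized value round-trips back to the original keys and values.
assert json.loads(user_metadata) == metadata
```

You would then pass `user_metadata=user_metadata` alongside the other arguments to `client.tasks.create(...)`.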
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.tasks.retrieve(...)
-
-
-
This method retrieves a video indexing task.
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.tasks.retrieve(
    task_id="6298d673f1090f1100476d4c",
)
```
-
-
-
task_id:
str - The unique identifier of the video indexing task to retrieve.
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.tasks.delete(...)
-
-
-
This action cannot be undone. Note the following about deleting a video indexing task:
- You can only delete video indexing tasks for which the status is `ready` or `failed`.
- If the status of your video indexing task is `ready`, you must first delete the video vector associated with your video indexing task by calling the `DELETE` method of the `/indexes/videos` endpoint.
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.tasks.delete(
    task_id="6298d673f1090f1100476d4c",
)
```
-
-
-
task_id:
str - The unique identifier of the video indexing task you want to delete.
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.indexes.list(...)
-
-
-
This method returns a list of the indexes in your account. The platform returns indexes sorted by creation date, with the oldest indexes at the top of the list.
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
response = client.indexes.list(
    page=1,
    page_limit=10,
    sort_by="created_at",
    sort_option="desc",
    index_name="myIndex",
    model_options="visual,audio",
    model_family="marengo",
    created_at="2024-08-16T16:53:59Z",
    updated_at="2024-08-16T16:55:59Z",
)
for item in response:
    print(item)

# Alternatively, you can paginate page by page.
for page in response.iter_pages():
    print(page)
```
-
-
-
page:
typing.Optional[int] - A number that identifies the page to retrieve.
Default: 1.
-
page_limit:
typing.Optional[int] - The number of items to return on each page.
Default: 10. Max: 50.
-
sort_by:
typing.Optional[str] - The field to sort on. The following options are available:
- `updated_at`: Sorts by the time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"), when the item was updated.
- `created_at`: Sorts by the time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"), when the item was created.
Default: `created_at`.
-
sort_option:
typing.Optional[str] - The sorting direction. The following options are available:
- `asc`
- `desc`
Default: `desc`.
-
index_name:
typing.Optional[str] - Filter by the name of an index.
-
model_options:
typing.Optional[str] - Filter by the model options. When filtering by multiple model options, the values must be comma-separated.
-
model_family:
typing.Optional[str] - Filter by the model family. This parameter can take one of the following values: `marengo` or `pegasus`. You can specify a single value.
-
created_at:
typing.Optional[str] - Filter indexes by the creation date and time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"). The platform returns the indexes that were created on the specified date at or after the given time.
-
updated_at:
typing.Optional[str] - Filter indexes by the last update date and time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"). The platform returns the indexes that were last updated on the specified date at or after the given time.
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.indexes.create(...)
-
-
-
This method creates an index.
-
-
-
```python
from twelvelabs import TwelveLabs
from twelvelabs.indexes import IndexesCreateRequestModelsItem

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.indexes.create(
    index_name="myIndex",
    models=[
        IndexesCreateRequestModelsItem(
            model_name="marengo3.0",
            model_options=["visual", "audio"],
        ),
        IndexesCreateRequestModelsItem(
            model_name="pegasus1.2",
            model_options=["visual", "audio"],
        ),
    ],
    addons=["thumbnail"],
)
```
-
-
-
index_name:
str - The name of the index. Make sure you use a succinct and descriptive name.
-
models:
typing.Sequence[IndexesCreateRequestModelsItem] - An array that specifies the video understanding models and the model options to be enabled for this index. Models determine what tasks you can perform with your videos. Model options determine which modalities the platform analyzes.
-
addons:
typing.Optional[typing.Sequence[str]] - An array specifying which add-ons should be enabled. Each entry in the array is an add-on, and the following values are supported:
- `thumbnail`: Enables thumbnail generation.
If you don't provide this parameter, no add-ons will be enabled.
- You can only enable add-ons when using the Marengo video understanding model.
- You cannot disable an add-on once the index has been created.
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.indexes.retrieve(...)
-
-
-
This method retrieves details about the specified index.
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.indexes.retrieve(
    index_id="6298d673f1090f1100476d4c",
)
```
-
-
-
index_id:
str - Unique identifier of the index to retrieve.
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.indexes.update(...)
-
-
-
This method updates the name of the specified index.
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.indexes.update(
    index_id="6298d673f1090f1100476d4c",
    index_name="myIndex",
)
```
-
-
-
index_id:
str - Unique identifier of the index to update.
-
index_name:
str - The name of the index.
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.indexes.delete(...)
-
-
-
This method deletes the specified index and all the videos within it. This action cannot be undone.
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.indexes.delete(
    index_id="6298d673f1090f1100476d4c",
)
```
-
-
-
index_id:
str - Unique identifier of the index to delete.
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.assets.list(...)
-
-
-
This method returns a list of assets in your account.
- The platform returns your assets sorted by creation date, with the newest at the top of the list.
- The platform automatically deletes assets that are not associated with any entity after 72 hours.
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
response = client.assets.list(
    page=1,
    page_limit=10,
)
for item in response:
    print(item)

# Alternatively, you can paginate page by page.
for page in response.iter_pages():
    print(page)
```
-
-
-
page:
typing.Optional[int] - A number that identifies the page to retrieve.
Default: 1.
-
page_limit:
typing.Optional[int] - The number of items to return on each page.
Default: 10. Max: 50.
-
asset_ids:
typing.Optional[typing.Union[str, typing.Sequence[str]]] - Filters the response to include only assets with the specified IDs. Provide one or more asset IDs. When you specify multiple IDs, the platform returns all matching assets.
-
asset_types:
typing.Optional[typing.Union[AssetsListRequestAssetTypesItem, typing.Sequence[AssetsListRequestAssetTypesItem]]] - Filters the response to include only assets of the specified types. Provide one or more asset types. When you specify multiple types, the platform returns all matching assets.
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.assets.create(...)
-
-
-
This method creates an asset by uploading a file to the platform. Assets are files (such as images, audio, or video) that you can use in downstream workflows, including indexing, analyzing video content, and creating entities.
Supported content: Video, audio, and images.
Upload methods:
- Local file: Set the `method` parameter to `direct` and use the `file` parameter to specify the file.
- Publicly accessible URL: Set the `method` parameter to `url` and use the `url` parameter to specify the URL of your file.
File size: 200 MB maximum for local file uploads, 4 GB maximum for URL uploads.
Additional requirements depend on your workflow:
- Search: Marengo requirements
- Video analysis: Pegasus requirements
- Entity search: Marengo image requirements
- Create embeddings: Marengo requirements
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.assets.create(
    method="direct",
)
```
-
-
-
method:
AssetsCreateRequestMethod - Specifies the upload method for the asset. Use `direct` to upload a local file or `url` for a publicly accessible URL.
-
file:
typing.Optional[core.File] - See core.File for more documentation.
-
url:
typing.Optional[str] - Specify this parameter to upload a file from a publicly accessible URL. This parameter is required when `method` is set to `url`. URL uploads are limited to 4 GB.
-
filename:
typing.Optional[str] - The optional filename of the asset. If not provided, the platform will determine the filename from the file or URL.
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.assets.retrieve(...)
-
-
-
This method retrieves details about the specified asset.
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.assets.retrieve(
    asset_id="6298d673f1090f1100476d4c",
)
```
-
-
-
asset_id:
str - The unique identifier of the asset to retrieve.
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.assets.delete(...)
-
-
-
This method deletes the specified asset. This action cannot be undone.
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.assets.delete(
    asset_id="6298d673f1090f1100476d4c",
)
```
-
-
-
asset_id:
str - The unique identifier of the asset to delete.
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.multipart_upload.list_incomplete_uploads(...)
-
-
-
This method returns a list of all incomplete multipart upload sessions in your account.
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
response = client.multipart_upload.list_incomplete_uploads(
    page=1,
    page_limit=10,
)
for item in response:
    print(item)

# Alternatively, you can paginate page by page.
for page in response.iter_pages():
    print(page)
```
-
-
-
page:
typing.Optional[int] - A number that identifies the page to retrieve.
Default: 1.
-
page_limit:
typing.Optional[int] - The number of items to return on each page.
Default: 10. Max: 50.
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.multipart_upload.create(...)
-
-
-
This method creates a multipart upload session.
Supported content: Video and audio
File size: 4GB maximum.
Additional requirements depend on your workflow:
- Search: Marengo requirements
- Video analysis: Pegasus requirements
- Create embeddings: Marengo requirements
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.multipart_upload.create(
    filename="my-video.mp4",
    total_size=104857600,
)
```
-
-
-
filename:
str - Original filename of the asset.
-
total_size:
int - The total size of the file in bytes. The platform uses this value to:
- Calculate the optimal chunk size.
- Determine the total number of chunks required.
- Generate the initial set of presigned URLs.
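The relationship between `total_size`, the chunk size, and the number of chunks is simple ceiling division. A sketch of the arithmetic; the 5 MiB chunk size is an illustrative assumption, since the platform determines the actual chunk size for your session:

```python
import math

def chunk_count(total_size: int, chunk_size: int) -> int:
    """Number of chunks needed to cover a file of total_size bytes."""
    return math.ceil(total_size / chunk_size)

# Illustrative values: the 100 MiB file from the example above,
# split into assumed 5 MiB chunks.
total_size = 104857600          # 100 MiB
chunk_size = 5 * 1024 * 1024    # 5 MiB (assumption; the platform decides)

print(chunk_count(total_size, chunk_size))  # → 20
```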
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.multipart_upload.get_status(...)
-
-
-
This method provides information about an upload session, including its current status, chunk-level progress, and completion state.
Use this endpoint to:
- Verify upload completion (`status=completed`).
- Identify any failed chunks that require a retry.
- Monitor the upload progress by comparing `uploaded_size` with `total_size`.
- Determine if the session has expired.
- Retrieve the status information for each chunk.
You must call this method after reporting chunk completion to confirm the upload has transitioned to the `completed` status before using the asset.
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
response = client.multipart_upload.get_status(
    upload_id="507f1f77bcf86cd799439011",
    page=1,
    page_limit=10,
)
for item in response:
    print(item)

# Alternatively, you can paginate page by page.
for page in response.iter_pages():
    print(page)
```
-
-
-
upload_id:
str - The unique identifier of the upload session.
-
page:
typing.Optional[int] - A number that identifies the page to retrieve.
Default: 1.
-
page_limit:
typing.Optional[int] - The number of items to return on each page.
Default: 10. Max: 50.
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.multipart_upload.report_chunk_batch(...)
-
-
-
This method notifies the platform which chunks have been successfully uploaded. When all chunks are reported, the platform finalizes the upload.
For optimal performance, report chunks in batches and in any order.
-
-
-
```python
from twelvelabs import CompletedChunk, TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.multipart_upload.report_chunk_batch(
    upload_id="507f1f77bcf86cd799439011",
    completed_chunks=[
        CompletedChunk(
            chunk_index=1,
            proof="d41d8cd98f00b204e9800998ecf8427e",
            chunk_size=5242880,
        )
    ],
)
```
-
-
-
upload_id:
str - The unique identifier of the upload session.
-
completed_chunks:
typing.Sequence[CompletedChunk] - The list of successfully uploaded chunks that you're reporting to the platform. Report a chunk only after receiving an ETag.
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.multipart_upload.get_additional_presigned_urls(...)
-
-
-
This method generates new presigned URLs for specific chunks that require uploading. Use this endpoint in the following situations:
- Your initial URLs have expired (URLs expire after one hour).
- The initial set of presigned URLs does not include URLs for all chunks.
- You need to retry failed chunk uploads with new URLs.
To specify which chunks need URLs, use the `start` and `count` parameters. For example, to generate URLs for chunks 21 to 30, use `start=21` and `count=10`. The response provides new URLs, each with a fresh expiration time of one hour.
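Because a single call returns at most 50 URLs, covering a large chunk range means batching the `start` and `count` values across several calls. A small helper illustrating the arithmetic (the helper itself is not part of the SDK):

```python
def url_batches(first_chunk: int, last_chunk: int, max_per_call: int = 50):
    """Yield (start, count) pairs covering chunks first_chunk..last_chunk,
    requesting at most max_per_call presigned URLs per API call."""
    start = first_chunk
    while start <= last_chunk:
        count = min(max_per_call, last_chunk - start + 1)
        yield start, count
        start += count

# Chunks 21-30 fit in a single call; chunks 1-120 need three calls.
print(list(url_batches(21, 30)))   # → [(21, 10)]
print(list(url_batches(1, 120)))   # → [(1, 50), (51, 50), (101, 20)]
```

Each `(start, count)` pair maps directly onto one `get_additional_presigned_urls` call.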
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.multipart_upload.get_additional_presigned_urls(
    upload_id="507f1f77bcf86cd799439011",
    start=1,
    count=10,
)
```
-
-
-
upload_id:
str - The unique identifier of the upload session.
-
start:
int - The index of the first chunk to generate URLs for. Chunks are numbered from 1.
-
count:
int - The number of presigned URLs to generate, starting from the index. You can request a maximum of 50 URLs in a single API call.
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.entity_collections.list(...)
-
-
-
This method returns a list of the entity collections in your account.
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
response = client.entity_collections.list(
    page=1,
    page_limit=10,
    name="My entity collection",
    sort_by="created_at",
    sort_option="desc",
)
for item in response:
    print(item)

# Alternatively, you can paginate page by page.
for page in response.iter_pages():
    print(page)
```
-
-
-
page:
typing.Optional[int]A number that identifies the page to retrieve.
Default:
1.
-
page_limit:
typing.Optional[int]The number of items to return on each page.
Default:
10. Max:50.
-
name:
typing.Optional[str]β Filter entity collections by name.
-
sort_by:
typing.Optional[EntityCollectionsListRequestSortBy]The field to sort on. The following options are available:
- created_at: Sorts by the time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"), when the entity collection was created.
- updated_at: Sorts by the time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"), when the entity collection was updated.
- name: Sorts by the name.
-
sort_option:
typing.Optional[str]The sorting direction. The following options are available:
asc, desc
Default:
desc.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.entity_collections.create(...)
-
-
-
This method creates an entity collection.
-
-
-
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.entity_collections.create(
    name="My entity collection",
)
-
-
-
name:
strβ The name of the entity collection. Make sure you use a succinct and descriptive name.
-
description:
typing.Optional[str]β Optional description of the entity collection.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.entity_collections.retrieve(...)
-
-
-
This method retrieves details about the specified entity collection.
-
-
-
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.entity_collections.retrieve(
    entity_collection_id="6298d673f1090f1100476d4c",
)
-
-
-
entity_collection_id:
strβ The unique identifier of the entity collection to retrieve.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.entity_collections.delete(...)
-
-
-
This method deletes the specified entity collection. This action cannot be undone.
-
-
-
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.entity_collections.delete(
    entity_collection_id="6298d673f1090f1100476d4c",
)
-
-
-
entity_collection_id:
strβ The unique identifier of the entity collection to delete.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.entity_collections.update(...)
-
-
-
This method updates the specified entity collection.
-
-
-
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.entity_collections.update(
    entity_collection_id="6298d673f1090f1100476d4c",
)
-
-
-
entity_collection_id:
strβ The unique identifier of the entity collection to update.
-
name:
typing.Optional[str]β The updated name of the entity collection.
-
description:
typing.Optional[str]β The updated description of the entity collection.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.manage_entities.list_all_entities(...)
-
-
-
This method returns a list of entities from all entity collections. This is an internal API primarily used by the search interface.
-
-
-
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.manage_entities.list_all_entities(
    page=1,
    page_limit=10,
    name="foo",
    status="processing",
    sort_by="created_at",
    sort_option="desc",
)
-
-
-
page:
typing.Optional[int]A number that identifies the page to retrieve.
Default:
1.
-
page_limit:
typing.Optional[int]The number of items to return on each page.
Default:
10. Max: 50.
-
name:
typing.Optional[str]β Filter entities by name.
-
status:
typing.Optional[ListAllEntitiesRequestStatus]β Filter entities by status.
-
sort_by:
typing.Optional[ListAllEntitiesRequestSortBy]β Field to sort by.
-
sort_option:
typing.Optional[str]The sorting direction. The following options are available:
asc, desc
Default:
desc.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.embed.create(...)
-
-
-
This endpoint will be deprecated in a future version. Migrate to the [Embed API v2](/v1.3/api-reference/create-embeddings-v2) for continued support and access to new features.
This method creates embeddings for text, image, and audio content.
Ensure your media files meet the platform's format requirements.
Parameters for embeddings:
- Common parameters:
model_name: The video understanding model you want to use. Example: "marengo3.0".
- Text embeddings:
text: Text for which to create an embedding.
- Image embeddings. Provide one of the following:
image_url: Publicly accessible URL of your image file.
image_file: Local image file.
- Audio embeddings. Provide one of the following:
audio_url: Publicly accessible URL of your audio file.
audio_file: Local audio file.
-
-
-
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.embed.create(
    model_name="model_name",
)
-
-
-
model_name:
strThe name of the model you want to use. The following models are available:
- marengo3.0: Enhanced model with sports intelligence and extended content support. For a list of the new features, see the New in Marengo 3.0 section.
- Marengo-retrieval-2.7: Video embedding model for multimodal search.
-
text:
typing.Optional[str]The text for which you wish to create an embedding.
Example: "Man with a dog crossing the street"
-
text_truncate:
typing.Optional[str]Specifies how the platform handles text that exceeds token limits.
Available options by model version:
Marengo 3.0: This parameter is deprecated. The platform automatically truncates text exceeding 500 tokens from the end.
Marengo 2.7: Specifies truncation method for text exceeding 77 tokens:
- start: Removes tokens from the beginning.
- end: Removes tokens from the end (default).
- none: Returns an error if the text is longer than the maximum token limit.
Default:
end
-
image_url:
typing.Optional[str]β The publicly accessible URL of the image for which you wish to create an embedding. This parameter is required for image embeddings if image_file is not provided.
-
image_file:
typing.Optional[core.File]β See core.File for more documentation.
-
audio_url:
typing.Optional[str]β The publicly accessible URL of the audio file for which you wish to create an embedding. This parameter is required for audio embeddings if audio_file is not provided.
-
audio_file:
typing.Optional[core.File]β See core.File for more documentation.
-
audio_start_offset_sec:
typing.Optional[float]Specifies the start time, in seconds, from which the platform generates the audio embeddings. This parameter allows you to skip the initial portion of the audio during processing. Default:
0.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.search.create(...)
-
-
-
Use this endpoint to search for relevant matches in an index using text, media, or a combination of both as your query.
Text queries:
- Use the query_text parameter to specify your query.
Media queries:
- Set the query_media_type parameter to the corresponding media type (example: image).
- Specify either one of the following parameters:
query_media_url: Publicly accessible URL of your media file.
query_media_file: Local media file.
If both query_media_url and query_media_file are specified in the same request, query_media_url takes precedence.
Composed text and media queries (Marengo 3.0 only):
- Use the query_text parameter for your text query.
- Set query_media_type to image.
- Specify the image using either the query_media_url or the query_media_file parameter.
Example: Provide an image of a car and include "red color" in your query to find red instances of that car model.
Entity search (Marengo 3.0 only and in beta):
- To find a specific person in your videos, enclose the unique identifier of the entity between the <@ and > markers in the query_text parameter.
-
-
-
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.search.create(
    index_id="index_id",
    search_options=["visual"],
)
-
-
-
index_id:
strβ The unique identifier of the index to search.
-
search_options:
typing.List[SearchCreateRequestSearchOptionsItem]Specifies the modalities the video understanding model uses to find relevant information.
Available options:
- visual: Searches visual content.
- audio: Searches non-speech audio (Marengo 3.0) or all audio (Marengo 2.7).
- transcription: Searches spoken words (Marengo 3.0 only).
For detailed guidance and version-specific behavior, see the Search options section.
-
query_media_type:
typing.Optional[typing.Literal["image"]]β The type of media you wish to use. This parameter is required for media queries. For example, to perform an image-based search, set this parameter to image. Use query_text together with this parameter when you want to perform a composed image+text search.
-
query_media_url:
typing.Optional[str]β The publicly accessible URL of the media file you wish to use. This parameter is required for media queries if query_media_file is not provided.
-
query_media_file:
typing.Optional[core.File]β See core.File for more documentation.
-
query_text:
typing.Optional[str]The text query to search for. This parameter is required for text queries. Note that the platform supports full natural language-based search. You can use this parameter together with query_media_type and query_media_url or query_media_file to perform a composed image+text search.
If you're using the Entity Search feature to search for specific persons in your video content, you must enclose the unique identifier of your entity between the <@ and > markers. For example, to search for an entity with the ID entity123, use <@entity123> is walking as your query.
The maximum query length varies by model. Marengo 3.0 supports up to 500 tokens per query, while Marengo 2.7 supports up to 77 tokens per query.
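The entity query format above amounts to simple string construction; a small illustrative helper (the entity ID, action text, and index ID below are placeholders, and `search_options` follows this reference):

```python
def entity_query(entity_id, action):
    """Build a query_text string that references an entity by its ID.

    Entity IDs must be enclosed between the <@ and > markers.
    """
    return f"<@{entity_id}> {action}"

query = entity_query("entity123", "is walking")
print(query)  # <@entity123> is walking

# Sketch of the search call:
# client.search.create(
#     index_id="your-index-id",
#     search_options=["visual"],
#     query_text=query,
# )
```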
-
transcription_options:
typing.Optional[typing.List[SearchCreateRequestTranscriptionOptionsItem]]Specifies how the platform matches your text query with the words spoken in the video. This parameter applies only when using Marengo 3.0 with the search_options parameter containing the transcription value.
Available options:
- lexical: Exact word matching
- semantic: Meaning-based matching
For details on when to use each option, see the Transcription options section.
Default:
["lexical", "semantic"].
-
adjust_confidence_level:
typing.Optional[float]β This parameter is deprecated in Marengo 3.0 and newer versions. Use the [`rank`](/v1.3/api-reference/any-to-video-search/make-search-request#response.body.data.rank) field in the response instead, which indicates the relevance ranking assigned by the model.
This parameter specifies the strictness of the thresholds for assigning the high, medium, or low confidence levels to search results. If you use a lower value, the thresholds become more relaxed, and more search results will be classified as having high, medium, or low confidence levels. You can use this parameter to include a broader range of potentially relevant video clips, even if some results might be less precise.
Min: 0 Max: 1 Default: 0.5
-
group_by:
typing.Optional[SearchCreateRequestGroupBy]Use this parameter to group or ungroup items in a response. It can take one of the following values:
- video: The platform will group the matching video clips in the response by video.
- clip: The matching video clips in the response will not be grouped.
Default:
clip
-
threshold:
typing.Optional[ThresholdSearch]
-
sort_option:
typing.Optional[SearchCreateRequestSortOption]Use this parameter to specify the sort order for the response.
This parameter is deprecated in Marengo 3.0 and newer versions. Use the [`rank`](/v1.3/api-reference/any-to-video-search/make-search-request#response.body.data.rank) field in the response instead, which indicates the relevance ranking assigned by the model.
When performing a search, the platform assigns a relevance ranking to each video clip that matches your search terms. By default, the search results are sorted by relevance ranking in ascending order, with 1 being the most relevant result.
If you set this parameter to score and group_by is set to video, the platform will determine the highest relevance ranking (lowest number) for each video and sort the videos in the response by this ranking. For each video, the matching video clips will be sorted by relevance ranking in ascending order.
If you set this parameter to clip_count and group_by is set to video, the platform will sort the videos in the response by the number of clips. For each video, the matching video clips will be sorted by relevance ranking in ascending order. You can use clip_count only when the matching video clips are grouped by video.
Default: score
-
operator:
typing.Optional[SearchCreateRequestOperator]Combines multiple search options using or or and. Use and to find segments matching all search options. Use or to find segments matching any search option. For detailed guidance on using this parameter, see the Combine multiple modalities section.
Default: or.
-
page_limit:
typing.Optional[int]The number of items to return on each page. When grouping by video, this parameter represents the number of videos per page. Otherwise, it represents the maximum number of video clips per page.
Max:
50.
-
filter:
typing.Optional[str]Specifies a stringified JSON object to filter your search results. Supports both system-generated metadata (example: video ID, duration) and user-defined metadata.
Syntax for filtering
The following table describes the supported data types, operators, and filter syntax:
| Data type | Operator | Description | Syntax |
|---|---|---|---|
| String | = | Matches results equal to the specified value. | {"field": "value"} |
| Array of strings | = | Matches results with any value in the specified array. Supported only for id. | {"id": ["value1", "value2"]} |
| Numeric (integer, float) | =, lte, gte | Matches results equal to or within a range of the specified value. | {"field": number} or {"field": {"gte": number, "lte": number}} |
| Boolean | = | Matches results equal to the specified boolean value. | {"field": true} or {"field": false} |
**System-generated metadata**
The table below describes the system-generated metadata available for filtering your search results:

| Field name | Description | Type | Example |
|---|---|---|---|
| id | Filters by specific video IDs. | Array of strings | {"id": ["67cec9caf45d9b64a58340fc", "67cec9baf45d9b64a58340fa"]} |
| duration | Filters based on the duration of the video containing the segment that matches your query. | Number or object with gte and lte | {"duration": 600} or {"duration": {"gte": 600, "lte": 800}} |
| width | Filters by video width (in pixels). | Number or object with gte and lte | {"width": 1920} or {"width": {"gte": 1280, "lte": 1920}} |
| height | Filters by video height (in pixels). | Number or object with gte and lte | {"height": 1080} or {"height": {"gte": 720, "lte": 1080}} |
| size | Filters by video size (in bytes). | Number or object with gte and lte | {"size": 1048576} or {"size": {"gte": 1048576, "lte": 5242880}} |
| filename | Filters by the exact file name. | String | {"filename": "Animal Encounters part 1"} |
**User-defined metadata**
To filter by user-defined metadata:
- Add metadata to your video by calling the PUT method of the /indexes/:index-id/videos/:video-id endpoint.
- Reference the custom field in your filter object. For example, to filter videos where a custom boolean field named needsReview is true, use {"needsReview": true}.
For more details and examples, see the Filter search results page.
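Since the `filter` parameter expects a stringified JSON object, the filter can be built as a regular dict and serialized before the call; a minimal sketch (the IDs and values below are placeholders):

```python
import json

# Hypothetical filter: specific video IDs and a duration range.
search_filter = {
    "id": ["67cec9caf45d9b64a58340fc"],
    "duration": {"gte": 600, "lte": 800},
}

# The filter parameter expects a stringified JSON object.
filter_str = json.dumps(search_filter)
print(filter_str)
```

The resulting string would be passed as `filter=filter_str` in the `client.search.create` call.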
-
include_user_metadata:
typing.Optional[bool]β Specifies whether to include user-defined metadata in the search results.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.search.retrieve(...)
-
-
-
Use this endpoint to retrieve a specific page of search results.
When you use pagination, you will not be charged for retrieving subsequent pages of results.
-
-
-
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.search.retrieve(
    page_token="1234567890",
    include_user_metadata=True,
)
-
-
-
page_token:
strβ A token that identifies the page to retrieve.
-
include_user_metadata:
typing.Optional[bool]β Specifies whether to include user-defined metadata in the search results.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.embed.tasks.list(...)
-
-
-
This method will be deprecated in a future version. Migrate to the [Embed API v2](/v1.3/api-reference/create-embeddings-v2) for continued support and access to new features.
This method returns a list of the video embedding tasks in your account. The platform returns your video embedding tasks sorted by creation date, with the newest at the top of the list.
- Video embeddings are stored for seven days.
- When you invoke this method without specifying the `started_at` and `ended_at` parameters, the platform returns all the video embedding tasks created within the last seven days.
-
-
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
response = client.embed.tasks.list(
    started_at="2024-03-01T00:00:00Z",
    ended_at="2024-03-01T00:00:00Z",
    status="processing",
    page=1,
    page_limit=10,
)
for item in response:
    yield item
# alternatively, you can paginate page-by-page
for page in response.iter_pages():
    yield page
-
-
-
started_at:
typing.Optional[str]β Retrieve the embedding tasks that were created after the given date and time, expressed in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ").
-
ended_at:
typing.Optional[str]β Retrieve the embedding tasks that were created before the given date and time, expressed in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ").
-
status:
typing.Optional[str]Filter the embedding tasks by their current status.
Values:
processing, ready, or failed.
-
page:
typing.Optional[int]A number that identifies the page to retrieve.
Default:
1.
-
page_limit:
typing.Optional[int]The number of items to return on each page.
Default:
10. Max: 50.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.embed.tasks.create(...)
-
-
-
This endpoint will be deprecated in a future version. Migrate to the [Embed API v2](/v1.3/api-reference/create-embeddings-v2) for continued support and access to new features.
This method creates a new video embedding task that uploads a video to the platform and creates one or multiple video embeddings.
Upload options:
- Local file: Use the video_file parameter.
- Publicly accessible URL: Use the video_url parameter.
Specify at least one option. If both are provided, video_url takes precedence.
Your video files must meet the format requirements. This endpoint allows you to upload files up to 2 GB in size. To upload larger files, use the Multipart Upload API.
- The Marengo video understanding model generates embeddings for all modalities in the same latent space. This shared space enables any-to-any searches across different types of content.
- Video embeddings are stored for seven days.
-
-
-
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.embed.tasks.create(
    model_name="model_name",
)
-
-
-
model_name:
strThe name of the model you want to use. The following models are available:
- marengo3.0: Enhanced model with sports intelligence and extended content support. For a list of the new features, see the New in Marengo 3.0 section.
- Marengo-retrieval-2.7: Video embedding model for multimodal search.
-
video_file:
typing.Optional[core.File]β See core.File for more documentation.
-
video_url:
typing.Optional[str]β Specify this parameter to upload a video from a publicly accessible URL.
-
video_start_offset_sec:
typing.Optional[float]The start offset in seconds from the beginning of the video where processing should begin. Specifying 0 means starting from the beginning of the video.
Default: 0 Min: 0 Max: Duration of the video minus video_clip_length
-
video_end_offset_sec:
typing.Optional[float]The end offset in seconds from the beginning of the video where processing should stop.
Ensure the following when you specify this parameter:
- The end offset does not exceed the total duration of the video file.
- The end offset is greater than the start offset.
- You set both the start and end offsets. Setting only one of them results in an error.
Min: video_start_offset + video_clip_length Max: Duration of the video file
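The offset constraints above can be checked client-side before creating a task; the validator below is illustrative, not part of the SDK:

```python
def validate_offsets(video_duration_sec, start_sec, end_sec, clip_length_sec=6.0):
    """Check the start/end offset rules for a video embedding task."""
    if (start_sec is None) != (end_sec is None):
        # Both offsets must be set together, or neither.
        raise ValueError("set both the start and end offsets, or neither")
    if start_sec is None:
        return
    if end_sec > video_duration_sec:
        raise ValueError("end offset exceeds the video duration")
    if end_sec <= start_sec:
        raise ValueError("end offset must be greater than the start offset")
    if end_sec - start_sec < clip_length_sec:
        raise ValueError("clip length exceeds the start/end interval")

validate_offsets(120.0, 10.0, 60.0)  # passes silently
```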
-
video_clip_length:
typing.Optional[float]The desired duration in seconds for each clip for which the platform generates an embedding. Ensure that the clip length does not exceed the interval between the start and end offsets.
Default: 6 Min: 2 Max: 10
-
video_embedding_scope:
typing.Optional[typing.List[TasksCreateRequestVideoEmbeddingScopeItem]]Defines the scope of video embedding generation. Valid values are the following:
- clip: Creates embeddings for each video segment of video_clip_length seconds, from video_start_offset_sec to video_end_offset_sec.
- clip and video: Creates embeddings for video segments and the entire video.
To create embeddings for segments and the entire video in the same request, include both values in the list:
video_embedding_scope=["clip", "video"]
Default:
clip
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.embed.tasks.status(...)
-
-
-
This endpoint will be deprecated in a future version. Migrate to the [Embed API v2](/v1.3/api-reference/create-embeddings-v2) for continued support and access to new features.
This method retrieves the status of a video embedding task. Check the task status of a video embedding task to determine when you can retrieve the embedding.
A task can have one of the following statuses:
- processing: The platform is creating the embeddings.
- ready: Processing is complete. Retrieve the embeddings by invoking the GET method of the /embed/tasks/{task_id} endpoint.
- failed: The task could not be completed, and the embeddings haven't been created.
-
-
-
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.embed.tasks.status(
    task_id="663da73b31cdd0c1f638a8e6",
)
-
-
-
task_id:
strβ The unique identifier of your video embedding task.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.embed.tasks.retrieve(...)
-
-
-
This method retrieves embeddings for a specific video embedding task. Ensure the task status is ready before invoking this method. Refer to the Retrieve the status of a video embedding task page for instructions on checking the task status.
-
-
-
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.embed.tasks.retrieve(
    task_id="663da73b31cdd0c1f638a8e6",
)
-
-
-
task_id:
strβ The unique identifier of your video embedding task.
-
embedding_option:
typing.Optional[typing.Union[TasksRetrieveRequestEmbeddingOptionItem, typing.Sequence[TasksRetrieveRequestEmbeddingOptionItem]]]Specifies which types of embeddings to retrieve. Values vary depending on the version of the model:
- Marengo 3.0: visual, audio, transcription.
- Marengo 2.7: visual-text, audio.
For details, see the Embedding options section.
The platform returns all available embeddings when you omit this parameter.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.embed.v_2.create(...)
-
-
-
This endpoint synchronously creates embeddings for multimodal content and returns the results immediately in the response.
This method only supports Marengo version 3.0 or newer.
When to use this endpoint:
- Create embeddings for text, images, audio, or video content
- Get immediate results without waiting for background processing
- Process audio or video content up to 10 minutes in duration
Do not use this endpoint for:
- Audio or video content longer than 10 minutes. Use the POST method of the /embed-v2/tasks endpoint instead.
Images:
- Formats: JPEG, PNG
- Minimum size: 128x128 pixels
- Maximum file size: 5 MB
Audio and video:
- Maximum duration: 10 minutes
- Maximum file size for base64 encoded strings: 36 MB
- Audio formats: WAV (uncompressed), MP3 (lossy), FLAC (lossless)
- Video formats: FFmpeg supported formats
- Video resolution: 360x360 to 3840x2160 pixels
- Aspect ratio: Between 1:2.4 and 2.4:1
-
-
-
from twelvelabs import TextInputRequest, TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.embed.v_2.create(
    input_type="text",
    model_name="marengo3.0",
    text=TextInputRequest(
        input_text="man walking a dog",
    ),
)
-
-
-
input_type:
CreateEmbeddingsRequestInputTypeβ The type of content for which you wish to create embeddings.
-
model_name:
strβ The video understanding model you wish to use.
-
text:
typing.Optional[TextInputRequest]
-
image:
typing.Optional[ImageInputRequest]
-
text_image:
typing.Optional[TextImageInputRequest]
-
audio:
typing.Optional[AudioInputRequest]
-
video:
typing.Optional[VideoInputRequest]
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.embed.v_2.tasks.list(...)
-
-
-
This method returns a list of the async embedding tasks in your account. The platform returns your async embedding tasks sorted by creation date, with the newest at the top of the list.
-
-
-
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
response = client.embed.v_2.tasks.list(
    started_at="2024-03-01T00:00:00Z",
    ended_at="2024-03-01T00:00:00Z",
    status="processing",
    page=1,
    page_limit=10,
)
for item in response:
    yield item
# alternatively, you can paginate page-by-page
for page in response.iter_pages():
    yield page
-
-
-
started_at:
typing.Optional[str]β Retrieve the embedding tasks that were created after the given date and time, expressed in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ").
-
ended_at:
typing.Optional[str]β Retrieve the embedding tasks that were created before the given date and time, expressed in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ").
-
status:
typing.Optional[str]Filter the embedding tasks by their current status.
Values:
processing, ready, or failed.
-
page:
typing.Optional[int]A number that identifies the page to retrieve.
Default:
1.
-
page_limit:
typing.Optional[int]The number of items to return on each page.
Default:
10. Max: 50.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.embed.v_2.tasks.create(...)
-
-
-
This endpoint creates embeddings for audio and video content asynchronously.
This method only supports Marengo version 3.0 or newer.
When to use this endpoint:
- Process audio or video files longer than 10 minutes
- Process files up to 4 hours in duration
Audio:
- Minimum duration: 4 seconds
- Maximum duration: 4 hours
- Maximum file size: 2 GB
- Formats: WAV (uncompressed), MP3 (lossy), FLAC (lossless)
Creating embeddings asynchronously requires three steps:
- Create a task using this endpoint. The platform returns a task ID.
- Poll for the status of the task using the GET method of the /embed-v2/tasks/{task_id} endpoint. Wait until the status is ready.
- When the status is ready, retrieve the embeddings from the response of the GET method of the /embed-v2/tasks/{task_id} endpoint.
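The polling step above can be sketched with a generic helper. The `poll_until` function below is illustrative, not part of the SDK, and the attribute names in the commented usage (`task.id`, `t.status`) are assumptions about the response shape:

```python
import time

def poll_until(fetch, is_done, interval_sec=5.0, timeout_sec=600.0):
    """Call fetch() every interval_sec until is_done(result) is true or timeout."""
    deadline = time.monotonic() + timeout_sec
    while True:
        result = fetch()
        if is_done(result):
            return result
        if time.monotonic() > deadline:
            raise TimeoutError("embedding task did not finish in time")
        time.sleep(interval_sec)

# Sketch of usage against the SDK:
# task = client.embed.v_2.tasks.create(input_type="video", model_name="marengo3.0", video=...)
# done = poll_until(
#     lambda: client.embed.v_2.tasks.retrieve(task_id=task.id),
#     lambda t: t.status in ("ready", "failed"),
# )
```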
-
-
-
from twelvelabs import (
    MediaSource,
    TwelveLabs,
    VideoInputRequest,
    VideoSegmentation_Dynamic,
    VideoSegmentationDynamicDynamic,
)

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.embed.v_2.tasks.create(
    input_type="video",
    model_name="marengo3.0",
    video=VideoInputRequest(
        media_source=MediaSource(
            url="https://user-bucket.com/video/long-video.mp4",
        ),
        start_sec=0.0,
        end_sec=7200.0,
        segmentation=VideoSegmentation_Dynamic(
            dynamic=VideoSegmentationDynamicDynamic(
                min_duration_sec=4,
            ),
        ),
        embedding_option=["visual", "audio", "transcription"],
        embedding_scope=["clip", "asset"],
    ),
)
-
-
-
input_type:
CreateAsyncEmbeddingRequestInputTypeThe type of content for which you wish to create embeddings.
Values:
- audio: Audio files
- video: Video content
-
model_name:
strβ The model you wish to use.
-
audio:
typing.Optional[AudioInputRequest]
-
video:
typing.Optional[VideoInputRequest]
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.embed.v_2.tasks.retrieve(...)
-
-
-
This method retrieves the status and the results of an async embedding task.
Task statuses:
- processing: The platform is creating the embeddings.
- ready: Processing is complete. Embeddings are available in the response.
- failed: The task failed. Embeddings were not created.
Invoke this method repeatedly until the status field is ready. When status is ready, use the embeddings from the response.
-
-
-
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.embed.v_2.tasks.retrieve(
    task_id="64f8d2c7e4a1b37f8a9c5d12",
)
-
-
-
task_id:
strβ The unique identifier of the embedding task.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.entity_collections.entities.list(...)
-
-
-
This method returns a list of the entities in the specified entity collection.
-
-
-
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
response = client.entity_collections.entities.list(
    entity_collection_id="6298d673f1090f1100476d4c",
    page=1,
    page_limit=10,
    name="My entity",
    status="processing",
    sort_by="created_at",
    sort_option="desc",
)
for item in response:
    yield item
# alternatively, you can paginate page-by-page
for page in response.iter_pages():
    yield page
-
-
-
entity_collection_id:
strβ The unique identifier of the entity collection for which the platform will retrieve the entities.
-
page:
typing.Optional[int]A number that identifies the page to retrieve.
Default:
1.
-
page_limit:
typing.Optional[int]The number of items to return on each page.
Default:
10. Max: 50.
-
name:
typing.Optional[str]β Filter entities by name.
-
status:
typing.Optional[EntitiesListRequestStatus]β Filter entities by status.
-
sort_by:
typing.Optional[EntitiesListRequestSortBy]The field to sort on. The following options are available:
- created_at: Sorts by the time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"), when the entity was created.
- updated_at: Sorts by the time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"), when the entity was updated.
- name: Sorts by the name.
-
sort_option:
typing.Optional[str]The sorting direction. The following options are available:
asc, desc
Default:
desc.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.entity_collections.entities.create(...)
-
-
-
This method creates an entity within a specified entity collection. Each entity must be associated with at least one asset.
-
-
-
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.entity_collections.entities.create(
    entity_collection_id="6298d673f1090f1100476d4c",
    name="My entity",
    asset_ids=["6298d673f1090f1100476d4c", "6298d673f1090f1100476d4d"],
)
-
-
-
entity_collection_id:
str - The unique identifier of the entity collection in which to create the entity.
-
name:
str - The name of the entity. Make sure you use a succinct and descriptive name.
-
asset_ids:
typing.Sequence[str] - An array of asset IDs to associate with the entity. You must provide at least one value.
-
description:
typing.Optional[str] - An optional description of the entity.
-
metadata:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] - Optional metadata for the entity, provided as key-value pairs to store additional context or attributes. Use metadata to categorize or describe the entity for easier management and search. Keys must be of type string, and values can be of type string, integer, float, or boolean. To store complex data types such as objects or arrays, convert them to string values before including them in the metadata.
Example: { "sport": "soccer", "teamId": 42, "performanceScore": 8.7, "isActive": true }
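Because metadata values must be strings, integers, floats, or booleans, complex values need to be stringified before the request is sent. The sketch below shows one way to do this; the helper name `sanitize_metadata` and the choice of JSON serialization are assumptions for illustration, not part of the SDK:

```python
import json

# Value types the metadata field accepts directly (per the description above).
ALLOWED_TYPES = (str, int, float, bool)

def sanitize_metadata(metadata: dict) -> dict:
    """Hypothetical helper: pass allowed scalar values through unchanged
    and JSON-encode complex values (lists, dicts) into strings, as the
    docs advise converting them before including them in metadata."""
    clean = {}
    for key, value in metadata.items():
        if not isinstance(key, str):
            raise TypeError(f"metadata keys must be strings, got {type(key).__name__}")
        clean[key] = value if isinstance(value, ALLOWED_TYPES) else json.dumps(value)
    return clean
```

For example, `sanitize_metadata({"sport": "soccer", "stats": {"goals": 3}})` keeps the string as-is and turns the nested object into the string `'{"goals": 3}'`.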
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.entity_collections.entities.create_bulk(...)
-
-
-
This method creates multiple entities within a specified entity collection in a single request. Each entity must be associated with at least one asset. This endpoint is useful for efficiently adding multiple entities, such as a roster of players or a group of characters.
-
-
-
```python
from twelvelabs import TwelveLabs
from twelvelabs.entity_collections.entities import (
    EntitiesCreateBulkRequestEntitiesItem,
)

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.entity_collections.entities.create_bulk(
    entity_collection_id="6298d673f1090f1100476d4c",
    entities=[
        EntitiesCreateBulkRequestEntitiesItem(
            name="My entity",
            asset_ids=["6298d673f1090f1100476d4c", "6298d673f1090f1100476d4d"],
        )
    ],
)
```
-
-
-
entity_collection_id:
str - The unique identifier of the entity collection in which to create the entities.
-
entities:
typing.Sequence[EntitiesCreateBulkRequestEntitiesItem] - An array of objects, each specifying an entity to create.
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.entity_collections.entities.retrieve(...)
-
-
-
This method retrieves details about the specified entity.
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.entity_collections.entities.retrieve(
    entity_collection_id="6298d673f1090f1100476d4c",
    entity_id="6298d673f1090f1100476d4c",
)
```
-
-
-
entity_collection_id:
str - The unique identifier of the entity collection.
-
entity_id:
str - The unique identifier of the entity to retrieve.
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.entity_collections.entities.delete(...)
-
-
-
This method deletes a specific entity from an entity collection. It permanently removes the entity and its associated data, but does not affect the assets associated with this entity.
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.entity_collections.entities.delete(
    entity_collection_id="6298d673f1090f1100476d4c",
    entity_id="6298d673f1090f1100476d4c",
)
```
-
-
-
entity_collection_id:
str - The unique identifier of the entity collection containing the entity to be deleted.
-
entity_id:
str - The unique identifier of the entity to delete.
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.entity_collections.entities.update(...)
-
-
-
This method updates the specified entity within an entity collection. This operation allows modification of the entity's name, description, or metadata. Note that this endpoint does not affect the assets associated with the entity.
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.entity_collections.entities.update(
    entity_collection_id="6298d673f1090f1100476d4c",
    entity_id="6298d673f1090f1100476d4c",
)
```
-
-
-
entity_collection_id:
str - The unique identifier of the entity collection containing the entity to be updated.
-
entity_id:
str - The unique identifier of the entity to update.
-
name:
typing.Optional[str] - The new name for the entity.
-
description:
typing.Optional[str] - An updated description for the entity.
-
metadata:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] - Updated metadata for the entity. If provided, this completely replaces the existing metadata. Use this to store custom key-value pairs related to the entity.
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.entity_collections.entities.create_assets(...)
-
-
-
This method adds assets to the specified entity within an entity collection. Assets are used to identify the entity in media content, and adding multiple assets can improve the accuracy of entity recognition in searches.
When assets are added, the entity may temporarily enter the "processing" state while the platform updates the necessary data. Once processing is complete, the entity status will return to "ready."
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.entity_collections.entities.create_assets(
    entity_collection_id="6298d673f1090f1100476d4c",
    entity_id="6298d673f1090f1100476d4c",
    asset_ids=["6298d673f1090f1100476d4c", "6298d673f1090f1100476d4d"],
)
```
-
-
-
entity_collection_id:
str - The unique identifier of the entity collection that contains the entity to which assets will be added.
-
entity_id:
str - The unique identifier of the entity within the specified entity collection to which the assets will be added.
-
asset_ids:
typing.Sequence[str] - An array of asset IDs to add to the entity.
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.entity_collections.entities.delete_assets(...)
-
-
-
This method removes assets from the specified entity. Assets are used to identify the entity in media content, and removing assets may impact the accuracy of entity recognition in searches if too few assets remain.
When assets are removed, the entity may temporarily enter a "processing" state while the system updates the necessary data. Once processing is complete, the entity status will return to "ready."
- This operation only removes the association between the entity and the specified assets; it does not delete the assets themselves.
- An entity must always have at least one asset associated with it. You cannot remove the last asset from an entity.
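Since an entity must keep at least one asset, a client can guard a removal request before calling the endpoint. A minimal sketch under that constraint (`assets_to_remove` is a hypothetical client-side helper, not an SDK function):

```python
def assets_to_remove(current_asset_ids, requested_ids):
    """Hypothetical guard: return the requested asset IDs that are
    actually associated with the entity, refusing any request that
    would leave the entity with zero assets."""
    current = set(current_asset_ids)
    removable = [a for a in requested_ids if a in current]
    if current - set(removable) == set():
        raise ValueError("cannot remove the last asset from an entity")
    return removable
```

A request for IDs the entity does not have is silently narrowed to the ones it does, and a request that would strip every asset raises before any API call is made.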
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.entity_collections.entities.delete_assets(
    entity_collection_id="6298d673f1090f1100476d4c",
    entity_id="6298d673f1090f1100476d4c",
    asset_ids=["6298d673f1090f1100476d4e", "6298d673f1090f1100476d4f"],
)
```
-
-
-
entity_collection_id:
str - The unique identifier of the entity collection that contains the entity from which assets will be removed.
-
entity_id:
str - The unique identifier of the entity within the specified entity collection from which the assets will be removed.
-
asset_ids:
typing.Sequence[str] - An array of asset IDs to remove from the entity.
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.indexes.indexed_assets.list(...)
-
-
-
This method returns a list of the indexed assets in the specified index. By default, the platform returns your indexed assets sorted by creation date, with the newest at the top of the list.
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
response = client.indexes.indexed_assets.list(
    index_id="6298d673f1090f1100476d4c",
    page=1,
    page_limit=10,
    sort_by="created_at",
    sort_option="desc",
    filename="01.mp4",
    duration=1.1,
    fps=1.1,
    width=1.1,
    height=1,
    size=1.1,
    created_at="2024-08-16T16:53:59Z",
    updated_at="2024-08-16T16:53:59Z",
)
for item in response:
    print(item)
# Alternatively, you can paginate page by page
for page in response.iter_pages():
    print(page)
```
-
-
-
index_id:
str - The unique identifier of the index for which the platform will retrieve the indexed assets.
-
page:
typing.Optional[int] - A number that identifies the page to retrieve.
Default: 1.
-
page_limit:
typing.Optional[int] - The number of items to return on each page.
Default: 10. Max: 50.
-
sort_by:
typing.Optional[str] - The field to sort on. The following options are available:
- updated_at: Sorts by the time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"), when the item was updated.
- created_at: Sorts by the time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"), when the item was created.
Default: created_at.
-
sort_option:
typing.Optional[str] - The sorting direction. The following options are available:
- asc
- desc
Default: desc.
-
status:
typing.Optional[ typing.Union[ IndexedAssetsListRequestStatusItem, typing.Sequence[IndexedAssetsListRequestStatusItem], ] ] - Filter by one or more indexing task statuses. The following options are available:
- ready: The indexed asset has been successfully uploaded and indexed.
- pending: The indexed asset is pending.
- queued: The indexed asset is queued.
- indexing: The indexed asset is being indexed.
- failed: The indexed asset indexing task failed.
To filter by multiple statuses, specify the status parameter for each value: status=ready&status=indexing
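The repeated-key form of the status parameter can be produced by passing a sequence of key-value pairs to a standard URL encoder; a small sketch (the helper name is illustrative only):

```python
from urllib.parse import urlencode

def build_status_query(statuses):
    """Encode one `status` pair per value, producing the repeated-key
    query-string form the endpoint expects."""
    return urlencode([("status", s) for s in statuses])

# build_status_query(["ready", "indexing"]) -> "status=ready&status=indexing"
```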
-
filename:
typing.Optional[str] - Filter by filename.
-
duration:
typing.Optional[float] - Filter by duration. Expressed in seconds.
-
fps:
typing.Optional[float] - Filter by frames per second.
-
width:
typing.Optional[float] - Filter by width.
-
height:
typing.Optional[int] - Filter by height.
-
size:
typing.Optional[float] - Filter by size. Expressed in bytes.
-
created_at:
typing.Optional[str] - Filter indexed assets by the creation date and time of their associated indexing tasks, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"). The platform returns the indexed assets whose indexing tasks were created on the specified date at or after the given time.
-
updated_at:
typing.Optional[str] - This filter applies only to indexed assets updated using the PUT method of the /indexes/{index-id}/indexed-assets/{indexed-asset-id} endpoint. It filters indexed assets by the last update date and time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"). The platform returns the indexed assets that were last updated on the specified date at or after the given time.
-
user_metadata:
typing.Optional[ typing.Dict[str, typing.Optional[IndexedAssetsListRequestUserMetadataValue]] ] - To enable filtering by custom fields, you must first add user-defined metadata to your indexed asset by calling the PUT method of the /indexes/:index-id/indexed-assets/:indexed-asset-id endpoint.
Examples:
- To filter on a string: ?category=recentlyAdded
- To filter on an integer: ?batchNumber=5
- To filter on a float: ?rating=9.3
- To filter on a boolean: ?needsReview=true
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.indexes.indexed_assets.create(...)
-
-
-
This method indexes an uploaded asset to make it searchable and analyzable. Indexing processes your content and extracts information that enables the platform to search and analyze your videos.
This operation is asynchronous. The platform returns an indexed asset ID immediately and processes your content in the background. Monitor the indexing status to know when your content is ready to use.
Your asset must meet the requirements based on your workflow:
- Search: Marengo requirements
- Video analysis: Pegasus requirements
If you want to both search and analyze your videos, the most restrictive requirements apply.
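Because this operation is asynchronous, a common client pattern is to poll the indexed asset's status until it reaches a terminal state. A minimal sketch with an injected status getter (wrap your own retrieve call; `wait_until_ready` and its default intervals are assumptions for illustration, not SDK features):

```python
import time

def wait_until_ready(get_status, poll_interval=5.0, timeout=600.0):
    """Hypothetical poller: call `get_status` (a zero-argument callable
    returning the current status string, e.g. a lambda wrapping
    client.indexes.indexed_assets.retrieve) until the status becomes
    'ready' or 'failed', or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status in ("ready", "failed"):
            return status
        time.sleep(poll_interval)
    raise TimeoutError("indexing did not finish before the timeout")
```

Injecting the getter keeps the polling logic independent of the SDK call, so you can pass any callable that reports the current status.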
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.indexes.indexed_assets.create(
    index_id="6298d673f1090f1100476d4c",
    asset_id="6298d673f1090f1100476d4c",
)
```
-
-
-
index_id:
str - The unique identifier of the index to which the asset will be indexed.
-
asset_id:
str - The unique identifier of the asset to index.
-
enable_video_stream:
typing.Optional[bool] - This parameter indicates if the platform stores the video for streaming. When set to true, the platform stores the video, and you can retrieve its URL by calling the GET method of the /indexes/{index-id}/indexed-assets/{indexed-asset-id} endpoint. You can then use this URL to access the stream over the HLS protocol.
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.indexes.indexed_assets.retrieve(...)
-
-
-
This method retrieves information about an indexed asset, including its status, metadata, and optional embeddings or transcription.
Common use cases:
- Monitor indexing progress:
  - Call this endpoint after creating an indexed asset.
  - Check the status field until it shows ready.
  - Once ready, your content is available for search and analysis.
- Retrieve asset metadata:
  - Retrieve system metadata (duration, resolution, filename).
  - Access user-defined metadata.
- Retrieve embeddings:
  - Include the embedding_option parameter to retrieve video embeddings.
  - Requires the Marengo video understanding model to be enabled in your index.
- Retrieve transcriptions:
  - Set the transcription parameter to true to retrieve spoken words from your video.
-
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.indexes.indexed_assets.retrieve(
    index_id="6298d673f1090f1100476d4c",
    indexed_asset_id="6298d673f1090f1100476d4c",
    transcription=True,
)
```
-
-
-
index_id:
str - The unique identifier of the index to which the indexed asset has been uploaded.
-
indexed_asset_id:
str - The unique identifier of the indexed asset to retrieve.
-
embedding_option:
typing.Optional[ typing.Union[ IndexedAssetsRetrieveRequestEmbeddingOptionItem, typing.Sequence[IndexedAssetsRetrieveRequestEmbeddingOptionItem], ] ] - Specifies which types of embeddings to retrieve. Values vary depending on the version of the model:
- Marengo 3.0: visual, audio, transcription
- Marengo 2.7: visual-text, audio
For details, see the Embedding options section.
To retrieve embeddings for a video, it must be indexed using the Marengo video understanding model. For details on enabling this model for an index, see the [Create an index](/reference/create-index) page.
-
transcription:
typing.Optional[bool] - This parameter indicates whether to retrieve a transcription of the spoken words for the indexed asset.
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.indexes.indexed_assets.delete(...)
-
-
-
This method deletes all the information about the specified indexed asset. This action cannot be undone.
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.indexes.indexed_assets.delete(
    index_id="6298d673f1090f1100476d4c",
    indexed_asset_id="6298d673f1090f1100476d4c",
)
```
-
-
-
index_id:
str - The unique identifier of the index to which the indexed asset has been uploaded.
-
indexed_asset_id:
str - The unique identifier of the indexed asset to delete.
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.indexes.indexed_assets.update(...)
-
-
-
Use this method to update one or more fields of the metadata of an indexed asset. You can also delete a field by setting it to null.
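The update semantics can be pictured as a shallow merge in which null (None in Python) deletes a field. This local sketch models that behavior so you can predict the resulting metadata; `apply_metadata_update` is illustrative only, not part of the SDK:

```python
def apply_metadata_update(existing, updates):
    """Illustrative model of the endpoint's merge rules: updated keys
    replace existing values, and a key set to None (JSON null) is
    removed from the metadata."""
    merged = dict(existing)
    for key, value in updates.items():
        if value is None:
            merged.pop(key, None)
        else:
            merged[key] = value
    return merged
```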
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.indexes.indexed_assets.update(
    index_id="6298d673f1090f1100476d4c",
    indexed_asset_id="6298d673f1090f1100476d4c",
    user_metadata={
        "category": "recentlyAdded",
        "batchNumber": 5,
        "rating": 9.3,
        "needsReview": True,
    },
)
```
-
-
-
index_id:
str - The unique identifier of the index to which the indexed asset has been uploaded.
-
indexed_asset_id:
str - The unique identifier of the indexed asset to update.
-
user_metadata:
typing.Optional[UserMetadata]
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.indexes.videos.list(...)
-
-
-
This method will be deprecated in a future version. New implementations should use the List indexed assets method.
This method returns a list of the videos in the specified index. By default, the platform returns your videos sorted by creation date, with the newest at the top of the list.
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
response = client.indexes.videos.list(
    index_id="6298d673f1090f1100476d4c",
    page=1,
    page_limit=10,
    sort_by="created_at",
    sort_option="desc",
    filename="01.mp4",
    duration=1.1,
    fps=1.1,
    width=1.1,
    height=1,
    size=1.1,
    created_at="2024-08-16T16:53:59Z",
    updated_at="2024-08-16T16:53:59Z",
)
for item in response:
    print(item)
# Alternatively, you can paginate page by page
for page in response.iter_pages():
    print(page)
```
-
-
-
index_id:
str - The unique identifier of the index for which the platform will retrieve the videos.
-
page:
typing.Optional[int] - A number that identifies the page to retrieve.
Default: 1.
-
page_limit:
typing.Optional[int] - The number of items to return on each page.
Default: 10. Max: 50.
-
sort_by:
typing.Optional[str] - The field to sort on. The following options are available:
- updated_at: Sorts by the time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"), when the item was updated.
- created_at: Sorts by the time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"), when the item was created.
Default: created_at.
-
sort_option:
typing.Optional[str] - The sorting direction. The following options are available:
- asc
- desc
Default: desc.
-
filename:
typing.Optional[str] - Filter by filename.
-
duration:
typing.Optional[float] - Filter by duration. Expressed in seconds.
-
fps:
typing.Optional[float] - Filter by frames per second.
-
width:
typing.Optional[float] - Filter by width.
-
height:
typing.Optional[int] - Filter by height.
-
size:
typing.Optional[float] - Filter by size. Expressed in bytes.
-
created_at:
typing.Optional[str] - Filter videos by the creation date and time of their associated indexing tasks, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"). The platform returns the videos whose indexing tasks were created on the specified date at or after the given time.
-
updated_at:
typing.Optional[str] - This filter applies only to videos updated using the PUT method of the /indexes/{index-id}/videos/{video-id} endpoint. It filters videos by the last update date and time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"). The platform returns the videos that were last updated on the specified date at or after the given time.
-
user_metadata:
typing.Optional[ typing.Dict[str, typing.Optional[VideosListRequestUserMetadataValue]] ] - To enable filtering by custom fields, you must first add user-defined metadata to your video by calling the PUT method of the /indexes/:index-id/videos/:video-id endpoint.
Examples:
- To filter on a string: ?category=recentlyAdded
- To filter on an integer: ?batchNumber=5
- To filter on a float: ?rating=9.3
- To filter on a boolean: ?needsReview=true
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.indexes.videos.retrieve(...)
-
-
-
This method will be deprecated in a future version. New implementations should use the Retrieve an indexed asset method.
This method retrieves information about the specified video.
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.indexes.videos.retrieve(
    index_id="6298d673f1090f1100476d4c",
    video_id="6298d673f1090f1100476d4c",
    transcription=True,
)
```
-
-
-
index_id:
str - The unique identifier of the index to which the video has been uploaded.
-
video_id:
str - The unique identifier of the video to retrieve.
-
embedding_option:
typing.Optional[ typing.Union[ VideosRetrieveRequestEmbeddingOptionItem, typing.Sequence[VideosRetrieveRequestEmbeddingOptionItem], ] ] - Specifies which types of embeddings to retrieve. Values vary depending on the version of the model:
- Marengo 3.0: visual, audio, transcription
- Marengo 2.7: visual-text, audio
For details, see the Embedding options section.
To retrieve embeddings for a video, it must be indexed using the Marengo video understanding model. For details on enabling this model for an index, see the [Create an index](/reference/create-index) page.
-
transcription:
typing.Optional[bool] - This parameter indicates whether to retrieve a transcription of the spoken words for the indexed video.
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.indexes.videos.delete(...)
-
-
-
This method will be deprecated in a future version. New implementations should use the Delete an indexed asset method.
This method deletes all the information about the specified video. This action cannot be undone.
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.indexes.videos.delete(
    index_id="6298d673f1090f1100476d4c",
    video_id="6298d673f1090f1100476d4c",
)
```
-
-
-
index_id:
str - The unique identifier of the index to which the video has been uploaded.
-
video_id:
str - The unique identifier of the video to delete.
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.indexes.videos.update(...)
-
-
-
This method will be deprecated in a future version. New implementations should use the Partial update indexed asset method.
Use this method to update one or more fields of the metadata of a video. You can also delete a field by setting it to null.
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.indexes.videos.update(
    index_id="6298d673f1090f1100476d4c",
    video_id="6298d673f1090f1100476d4c",
    user_metadata={
        "category": "recentlyAdded",
        "batchNumber": 5,
        "rating": 9.3,
        "needsReview": True,
    },
)
```
-
-
-
index_id:
str - The unique identifier of the index to which the video has been uploaded.
-
video_id:
str - The unique identifier of the video to update.
-
user_metadata:
typing.Optional[UserMetadata]
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.tasks.transfers.create(...)
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.tasks.transfers.create(
    integration_id="integration-id",
)
```
-
-
-
integration_id:
str
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.tasks.transfers.get_status(...)
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.tasks.transfers.get_status(
    integration_id="integration-id",
)
```
-
-
-
integration_id:
str
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.tasks.transfers.get_logs(...)
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.tasks.transfers.get_logs(
    integration_id="integration-id",
)
```
-
-
-
integration_id:
str
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-