Releases: oracle/python-select-ai
v1.2.0
This release includes the following updates and enhancements:
- Added support for Select AI Agent; the main classes live in a new submodule.
- Added support for summarize.
- Added support for feedback.
- For consistency, all proxy objects now provide a new fetch API, implemented as class-level methods.
- All proxy objects now consistently expose set_attribute() and set_attributes() methods:
  - set_attribute(attribute_name, attribute_value) accepts a single attribute.
  - set_attributes(attributes) sets multiple attributes at once, using the attributes class for the corresponding object type.
- Granting/revoking privileges is now separated from enabling/disabling HTTP access for users. Previously, both operations happened in a single API, enable_provider() / disable_provider(), which was confusing; they are now distinct operations.
- DBMS_CLOUD_AI_AGENT is added to the list of packages for which execute privilege is granted/revoked.
- Added support for Python 3.14.
- Added CI support using GitHub Actions, running the complete test suite for Python 3.11, 3.12, 3.13, and 3.14 against ADB 26ai.
- HTML documentation at https://oracle.github.io/python-select-ai/ with the Python docs theme.
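The single vs. bulk attribute-setting behavior described above can be sketched as follows. Since the library's real proxy objects require a database connection, the Profile and Attributes classes below are hypothetical stand-ins that only mirror the method signatures named in these notes; the attribute names are illustrative.

```python
class Attributes:
    """Stand-in for an object type's attributes class."""
    def __init__(self, **kwargs):
        self.values = dict(kwargs)

class Profile:
    """Hypothetical stand-in for a proxy object; not the library's implementation."""
    def __init__(self):
        self._attrs = {}

    def set_attribute(self, attribute_name, attribute_value):
        # Single attribute: one name/value pair per call.
        self._attrs[attribute_name] = attribute_value

    def set_attributes(self, attributes):
        # Bulk update: accepts the attributes class for this object type.
        self._attrs.update(attributes.values)

p = Profile()
p.set_attribute("temperature", 0.2)
p.set_attributes(Attributes(model="my_model", max_tokens=512))
print(p._attrs)  # all three attributes are now set
```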
v1.1.0
This release includes the following updates:
- Fixes #6
- Fixed bugs in vector_index get/set attributes
- Handle "no data for prompt" during profile.run_sql by returning an empty dataframe
- Handle missing profile when listing vector indexes using vector_index.list()
- Make enable / disable vector index operations idempotent
- Added new samples
- Verified tests for both 19c and 23ai
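The idempotency change above follows a common pattern: enabling an already-enabled index (or disabling an already-disabled one) becomes a no-op rather than an error. The VectorIndex class below is a minimal hypothetical stand-in illustrating that pattern, not the library's actual implementation.

```python
class VectorIndex:
    """Hypothetical stand-in illustrating idempotent enable/disable."""
    def __init__(self):
        self.enabled = False

    def enable(self):
        # No-op if already enabled, instead of raising an error.
        if not self.enabled:
            self.enabled = True

    def disable(self):
        # Symmetric no-op if already disabled.
        if self.enabled:
            self.enabled = False

idx = VectorIndex()
idx.enable()
idx.enable()   # safe: repeated calls leave state unchanged
idx.disable()
idx.disable()  # also safe
```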
v1.0.0
Select AI for Python v1.0.0
- Create and manage Select AI’s AI profile objects
- Text-to-SQL: Query your database using natural language, whether to generate a SQL query, run or explain that query, or get a narrated response to the results of that query.
- Chat: Use your LLM directly for content generation based on your user prompt—for example, generating custom emails, answering questions, and analyzing sentiment, to name a few use cases.
- Retrieval-Augmented Generation (RAG): Enable LLMs to generate more relevant responses by augmenting your prompt with knowledge from your provided documents
- Automated vector index creation and maintenance: Quickly and easily create a vector index to be used with RAG using data from cloud storage and other sources.
- Results integrated in Python objects: Receive AI-generated results directly into Python data structures, facilitating analysis and integration.
- Chatbot with conversation memory: Create and manage named conversations based on your interactions with the LLM.
- Synthetic data generation: Generate synthetic data for a single table or a set of tables with referential integrity constraints.
- Synchronous and asynchronous invocation: Build applications using Python with either synchronous or the more flexible asynchronous programming style using standalone Python clients. With asynchronous support, the API integrates easily with web frameworks like FastAPI or Flask, enabling real-time AI-driven applications.
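The two invocation styles above can be sketched generically. The actual client classes are not named in these notes, so `run_query` and `run_query_async` below are hypothetical stand-ins; the point is only the sync/async duality, where the async form can be awaited inside an event-loop framework such as FastAPI.

```python
import asyncio

# Hypothetical stand-ins for the library's sync and async entry points;
# a real call would query the database via an AI profile.
def run_query(prompt: str) -> list[dict]:
    return [{"prompt": prompt, "rows": 0}]

async def run_query_async(prompt: str) -> list[dict]:
    await asyncio.sleep(0)  # yield control, as a real async driver call would
    return [{"prompt": prompt, "rows": 0}]

# Synchronous style: simple scripts and notebooks.
sync_result = run_query("top 10 customers by revenue")

# Asynchronous style: fits event-loop frameworks like FastAPI or Flask.
async def main() -> list[dict]:
    return await run_query_async("top 10 customers by revenue")

async_result = asyncio.run(main())
```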