diff --git a/notebooks/code_sharing/llm/llm_descriptions_context.ipynb b/notebooks/code_sharing/llm/add_context_to_llm_descriptions.ipynb similarity index 64% rename from notebooks/code_sharing/llm/llm_descriptions_context.ipynb rename to notebooks/code_sharing/llm/add_context_to_llm_descriptions.ipynb index b4df54d95..6951e5285 100644 --- a/notebooks/code_sharing/llm/llm_descriptions_context.ipynb +++ b/notebooks/code_sharing/llm/add_context_to_llm_descriptions.ipynb @@ -4,18 +4,61 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "# Adding Context to LLM-based Descriptions\n", + "# Add context to LLM-generated test descriptions\n", "\n", + "When you run ValidMind tests, test descriptions are automatically generated with an LLM using the test results, the test name, and the static test definitions provided in the test's docstring. While this metadata offers valuable high-level overviews of tests, insights produced by the LLM-based descriptions may not always align with your specific use cases or incorporate organizational policy requirements.\n", "\n", - "When running ValidMind tests, the LLM-based descriptions are generated using the test results, the test name, and the static test description provided in the test's docstring. While this metadata offers a valuable high-level overview of the test, the insights produced by the LLM-based descriptions may not always align with specific use cases or incorporate bank policy requirements. \n", - "\n", - "In this notebook, we will show how to add context to the LLM-based descriptions to provide additional information about the test or the use case. Providing use-case context is useful when you want to provide information about the intended use and technique of the model or the insitution policies and standards specific to a use case. " + "In this notebook, you'll learn how to add context to the generated descriptions by providing additional information about the test or the use case. Including custom use case context is useful when you want to highlight information about the intended use and technique of the model, or the institution policies and standards specific to your use case."
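+ "\n", + "At a high level, the workflow below comes down to two environment variables, previewed here with a placeholder context string (both variables are walked through step by step in later sections):\n", + "\n", + "```python\n", + "import os\n", + "\n", + "# Turn custom context on (\"1\") or off (\"0\") for all tests\n", + "os.environ[\"VALIDMIND_LLM_DESCRIPTIONS_CONTEXT_ENABLED\"] = \"1\"\n", + "\n", + "# Supply the context for the LLM to take into account\n", + "os.environ[\"VALIDMIND_LLM_DESCRIPTIONS_CONTEXT\"] = \"<your use case or test-specific context>\"\n", + "```"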
+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "::: {.content-hidden when-format=\"html\"}\n", + "## Contents \n", + "- [Install the ValidMind Library](#toc1_) \n", + "- [Initialize the ValidMind Library](#toc2_) \n", + " - [Get your code snippet](#toc2_1_) \n", + "- [Initialize the Python environment](#toc3_) \n", + "- [Load the sample dataset](#toc4_) \n", + " - [Preprocess the raw dataset](#toc4_1_) \n", + "- [Initialize the ValidMind objects](#toc5_) \n", + " - [Initialize the datasets](#toc5_1_) \n", + " - [Initialize a model object](#toc5_2_) \n", + " - [Assign predictions to the datasets](#toc5_3_) \n", + "- [Set custom context for test descriptions](#toc6_) \n", + " - [Review default LLM-generated descriptions](#toc6_1_) \n", + " - [Enable use case context](#toc6_2_) \n", + " - [Disable use case context](#toc6_2_1_) \n", + " - [Add test-specific context](#toc6_3_) \n", + " - [Dataset Description](#toc6_3_1_) \n", + " - [Class Imbalance](#toc6_3_2_) \n", + " - [High Cardinality](#toc6_3_3_) \n", + " - [Missing Values](#toc6_3_4_) \n", + " - [Unique Rows](#toc6_3_5_) \n", + " - [Too Many Zero Values](#toc6_3_6_) \n", + " - [IQR Outliers Table](#toc6_3_7_) \n", + " - [Descriptive Statistics](#toc6_3_8_) \n", + " - [Pearson Correlation Matrix](#toc6_3_9_) \n", + "\n", + ":::\n", + "\n", + "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ + "\n", + "\n", "## Install the ValidMind Library\n", "\n", "To install the library:" @@ -34,12 +77,16 @@ "cell_type": "markdown", "metadata": {}, "source": [ + "\n", + "\n", "## Initialize the ValidMind Library\n", "\n", "ValidMind generates a unique _code snippet_ for each registered model to connect with your developer environment. You initialize the ValidMind Library with this code snippet, which ensures that your documentation and tests are uploaded to the correct model when you run the notebook.\n", "\n", "\n", "\n", + "\n", + "\n", "### Get your code snippet\n", "\n", "1. In a browser, [log in to ValidMind](https://docs.validmind.ai/guide/configuration/log-in-to-validmind.html).\n", @@ -76,10 +123,10 @@ "import validmind as vm\n", "\n", "vm.init(\n", - " api_host = \"https://api.prod.validmind.ai/api/v1/tracking\",\n", - " api_key = \"...\",\n", - " api_secret = \"...\",\n", - " model = \"...\"\n", + " # api_host = \"https://api.prod.validmind.ai/api/v1/tracking\",\n", + " # api_key = \"...\",\n", + " # api_secret = \"...\",\n", + " # model = \"...\"\n", ")" ] }, @@ -87,14 +134,16 @@ "cell_type": "markdown", "metadata": {}, "source": [ + "\n", + "\n", "## Initialize the Python environment\n", "\n", - "Next, let's import the necessary libraries and set up your Python environment for data analysis:" + "After you've connected to your registered model in the ValidMind Platform, let's import the necessary libraries and set up your Python environment for data analysis:" ] }, { "cell_type": "code", - "execution_count": 3, + "execution_count": null, "metadata": {}, "outputs": [], "source": [ @@ -108,9 +157,11 @@ "cell_type": "markdown", "metadata": {}, "source": [ + "\n", + "\n", "## Load the sample dataset\n", "\n", - "The sample dataset used here is provided by the ValidMind library. 
To be able to use it, you need to import the dataset and load it into a pandas [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html), a two-dimensional tabular data structure that makes use of rows and columns:" + "First, we'll import a sample ValidMind dataset and load it into a pandas [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html), a two-dimensional tabular data structure that makes use of rows and columns:" ] }, { @@ -135,15 +186,17 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "### Prepocess the raw dataset\n", + "\n", + "\n", + "### Preprocess the raw dataset\n", "\n", - "Preprocessing performs a number of operations to get ready for the subsequent steps:\n", + "Then, we'll perform a number of operations to get ready for the subsequent steps:\n", "\n", - "- Preprocess the data: Splits the DataFrame (`df`) into multiple datasets (`train_df`, `validation_df`, and `test_df`) using `demo_dataset.preprocess` to simplify preprocessing.\n", - "- Separate features and targets: Drops the target column to create feature sets (`x_train`, `x_val`) and target sets (`y_train`, `y_val`).\n", - "- Initialize XGBoost classifier: Creates an `XGBClassifier` object with early stopping rounds set to 10.\n", - "- Set evaluation metrics: Specifies metrics for model evaluation as \"error,\" \"logloss,\" and \"auc.\"\n", - "- Fit the model: Trains the model on `x_train` and `y_train` using the validation set `(x_val, y_val)`. Verbose output is disabled." + "- **Preprocess the data:** Splits the DataFrame (`df`) into multiple datasets (`train_df`, `validation_df`, and `test_df`) using `demo_dataset.preprocess` to simplify preprocessing.\n", + "- **Separate features and targets:** Drops the target column to create feature sets (`x_train`, `x_val`) and target sets (`y_train`, `y_val`).\n", + "- **Initialize XGBoost classifier:** Creates an `XGBClassifier` object with early stopping rounds set to 10.\n", + "- **Set evaluation metrics:** Specifies metrics for model evaluation as `error`, `logloss`, and `auc`.\n", + "- **Fit the model:** Trains the model on `x_train` and `y_train` using the validation set `(x_val, y_val)`. Verbose output is disabled." ] }, { @@ -175,26 +228,29 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Initialize the ValidMind objects\n", - "\n" + "\n", + "\n", + "## Initialize the ValidMind objects" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ + "\n", + "\n", "### Initialize the datasets\n", "\n", - "Before you can run tests, you must first initialize a ValidMind dataset object using the [`init_dataset`](https://docs.validmind.ai/validmind/validmind.html#init_dataset) function from the ValidMind (`vm`) module.\n", + "Before you can run tests, you'll need to initialize a ValidMind dataset object using the [`init_dataset`](https://docs.validmind.ai/validmind/validmind.html#init_dataset) function from the ValidMind (`vm`) module.\n", "\n", - "This function takes a number of arguments:\n", + "We'll include the following arguments:\n", "\n", - "- `dataset` — the raw dataset that you want to provide as input to tests\n", - "- `input_id` - a unique identifier that allows tracking what inputs are used when running each individual test\n", - "- `target_column` — a required argument if tests require access to true values. 
This is the name of the target column in the dataset\n", - "- `class_labels` — an optional value to map predicted classes to class labels\n", + "- **`dataset`** — the raw dataset that you want to provide as input to tests\n", + "- **`input_id`** — a unique identifier that allows tracking what inputs are used when running each individual test\n", + "- **`target_column`** — a required argument if tests require access to true values. This is the name of the target column in the dataset\n", + "- **`class_labels`** — an optional value to map predicted classes to class labels\n", "\n", - "With all datasets ready, you can now initialize the raw, training and test datasets (`raw_df`, `train_df` and `test_df`) created earlier into their own dataset objects using [`vm.init_dataset()`](https://docs.validmind.ai/validmind/validmind.html#init_dataset):" + "With all datasets ready, you can now initialize the raw, training, and test datasets (`raw_df`, `train_df` and `test_df`) created earlier into their own dataset objects using [`vm.init_dataset()`](https://docs.validmind.ai/validmind/validmind.html#init_dataset):" ] }, { "cell_type": "code", @@ -225,14 +281,18 @@ "cell_type": "markdown", "metadata": {}, "source": [ + "\n", + "\n", "### Initialize a model object\n", "\n", - "Additionally, you need to initialize a ValidMind model object (`vm_model`) that can be passed to other functions for analysis and tests on the data. You simply intialize this model object with [`vm.init_model()`](https://docs.validmind.ai/validmind/validmind.html#init_model):" + "Additionally, you'll need to initialize a ValidMind model object (`vm_model`) that can be passed to other functions for analysis and tests on the data. \n", + "\n", + "Simply initialize this model object with [`vm.init_model()`](https://docs.validmind.ai/validmind/validmind.html#init_model):" ] }, { "cell_type": "code", - "execution_count": 7, + "execution_count": null, "metadata": {}, "outputs": [], "source": [ @@ -246,9 +306,13 @@ "cell_type": "markdown", "metadata": {}, "source": [ + "\n", + "\n", "### Assign predictions to the datasets\n", "\n", - "We can now use the assign_predictions() method from the Dataset object to link existing predictions to any model. If no prediction values are passed, the method will compute predictions automatically:" + "We can now use the `assign_predictions()` method from the Dataset object to link existing predictions to any model.\n", + "\n", + "If no prediction values are passed, the method will compute predictions automatically:" ] }, { "cell_type": "code", @@ -270,16 +334,22 @@
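+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "If you already have precomputed predictions, you can link them explicitly instead. Below is a minimal sketch, assuming the training dataset object was initialized as `vm_train_ds` and that the `prediction_values` argument accepts an array aligned with the dataset rows:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Sketch: link precomputed predictions rather than computing them automatically\n", + "# train_preds = model.predict(x_train)\n", + "# vm_train_ds.assign_predictions(model=vm_model, prediction_values=train_preds)" + ] + },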
+ "By default, custom context for LLM-generated descriptions is disabled, meaning that the output will not include any additional context.\n", + "\n", + "Let's generate an initial test description for the `DatasetDescription` test for comparision with later iterations:" ] }, { @@ -300,9 +370,13 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "### Adding use case context to the LLM descriptions\n", + "\n", + "\n", + "### Enable use case context\n", "\n", - "To enable the LLM descriptions context, you need to set the `VALIDMIND_LLM_DESCRIPTIONS_CONTEXT_ENABLED` environment variable to `1`. This will enable the LLM descriptions context, which will be used to provide additional context to the LLM descriptions. This is a global setting that will affect all tests." + "To enable custom use case context, set the `VALIDMIND_LLM_DESCRIPTIONS_CONTEXT_ENABLED` environment variable to `1`.\n", + "\n", + "This is a global setting that will affect all tests for your linked model:" ] }, { @@ -314,6 +388,13 @@ "os.environ[\"VALIDMIND_LLM_DESCRIPTIONS_CONTEXT_ENABLED\"] = \"1\"" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Enabling use case context allows you to pass in additional context, such as information about your model, relevant regulatory requirements, or model validation targets to the LLM-generated text descriptions within `use_case_context`:" + ] + }, { "cell_type": "code", "execution_count": 11, @@ -346,6 +427,13 @@ "os.environ[\"VALIDMIND_LLM_DESCRIPTIONS_CONTEXT\"] = use_case_context" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "With the use case context set, generate an updated test description for the `DatasetDescription` test for comparision with default output:" + ] + }, { "cell_type": "code", "execution_count": null, @@ -364,9 +452,13 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "### Disabling the LLM descriptions context\n", + "\n", "\n", - "To disable the LLM descriptions context, you need to set the `VALIDMIND_LLM_DESCRIPTIONS_CONTEXT_ENABLED` environment variable to `0`. This will disable the LLM descriptions context, which will be used to provide additional context to the LLM descriptions. This is a global setting that will affect all tests." + "#### Disable use case context\n", + "\n", + "To disable custom use case context, set the `VALIDMIND_LLM_DESCRIPTIONS_CONTEXT_ENABLED` environment variable to `0`.\n", + "\n", + "This is a global setting that will affect all tests for your linked model:" ] }, { @@ -378,6 +470,13 @@ "os.environ[\"VALIDMIND_LLM_DESCRIPTIONS_CONTEXT_ENABLED\"] = \"0\"" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "With the use case context disabled again, generate another test description for the `DatasetDescription` test for comparision with previous custom output:" + ] + }, { "cell_type": "code", "execution_count": null, @@ -396,14 +495,18 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "### Adding test-specific context to the LLM descriptions\n", + "\n", + "\n", + "### Add test-specific context\n", "\n", - "We can also add test-specific context to the LLM descriptions. This is useful when you want to provide test-specific validation criteria about the test that is being run. All we need to do in this case is to set the `VALIDMIND_LLM_DESCRIPTIONS_CONTEXT_ENABLED` environment variable to `1` and join the test-specific context to the use-case context using the `VALIDMIND_LLM_DESCRIPTIONS_CONTEXT` environment variable." 
+ "In addition to the model-level `use_case_context`, you're able to add test-specific context to your LLM-generated descriptions allowing you to provide test-specific validation criteria about the test that is being run.\n", + "\n", + "We'll reenable use case context by setting the `VALIDMIND_LLM_DESCRIPTIONS_CONTEXT_ENABLED` environment variable to `1`, then join the test-specific context to the use case context using the `VALIDMIND_LLM_DESCRIPTIONS_CONTEXT` environment variable." ] }, { "cell_type": "code", - "execution_count": 89, + "execution_count": null, "metadata": {}, "outputs": [], "source": [ @@ -414,9 +517,11 @@ "cell_type": "markdown", "metadata": {}, "source": [ + "\n", + "\n", "#### Dataset Description\n", "\n", - "Rather than relying on generic dataset result descriptions in isolation, we use the context to specify precise thresholds for missing values, appropriate data types for banking variables (like `CreditScore` and `Balance`), and valid value ranges based on particular business rules. " + "Rather than relying on generic dataset result descriptions in isolation, we'll use the context to specify precise thresholds for missing values, appropriate data types for banking variables (like `CreditScore` and `Balance`), and valid value ranges based on particular business rules:" ] }, { @@ -443,6 +548,13 @@ "os.environ[\"VALIDMIND_LLM_DESCRIPTIONS_CONTEXT\"] = context" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "With the test-specific context set, generate an updated test description for the `DatasetDescription` test again:" + ] + }, { "cell_type": "code", "execution_count": null, @@ -461,9 +573,14 @@ "cell_type": "markdown", "metadata": {}, "source": [ + "\n", + "\n", "#### Class Imbalance\n", "\n", - "This context adds value to the LLM description by providing defined risk levels to assess class representation. By categorizing classes into Low, Medium, and High Risk, the LLM can generate more nuanced and actionable insights, ensuring that the analysis aligns with business requirements for balanced datasets. This approach not only highlights potential issues but also guides necessary documentation and mitigation strategies for high-risk classes." + "The following test-specific context example adds value to the LLM-generated description by providing defined risk levels to assess class representation:\n", + "\n", + "- By categorizing classes into `Low`, `Medium`, and `High Risk`, the LLM can generate more nuanced and actionable insights, ensuring that the analysis aligns with business requirements for balanced datasets.\n", + "- This approach not only highlights potential issues but also guides necessary documentation and mitigation strategies for high-risk classes." ] }, { @@ -494,6 +611,13 @@ "os.environ[\"VALIDMIND_LLM_DESCRIPTIONS_CONTEXT\"] = context" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "With the test-specific context set, generate a test description for the `ClassImbalance` test for review:" + ] + }, { "cell_type": "code", "execution_count": null, @@ -515,9 +639,13 @@ "cell_type": "markdown", "metadata": {}, "source": [ + "\n", + "\n", "#### High Cardinality\n", "\n", - "In this case, the context specifies a risk-based criteria for the number of distinct values in categorical features. This helps the LLM to generate more nuanced and actionable insights, ensuring the descriptions are more relevant to the bank's policy." 
+ "In this below case, the context specifies a risk-based criteria for the number of distinct values in categorical features.\n", + "\n", + "This helps the LLM to generate more nuanced and actionable insights, ensuring the descriptions are more relevant to your organization's policies." ] }, { @@ -548,6 +676,13 @@ "os.environ[\"VALIDMIND_LLM_DESCRIPTIONS_CONTEXT\"] = context" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "With the test-specific context set, generate a test description for the `HighCardinality` test for review:" + ] + }, { "cell_type": "code", "execution_count": null, @@ -571,9 +706,13 @@ "cell_type": "markdown", "metadata": {}, "source": [ + "\n", + "\n", "#### Missing Values\n", "\n", - "We use the test-specific context to establish differentiated risk thresholds across features. Rather than applying uniform criteria, the context allows for specific requirements for critical financial features (`CreditScore`, `Balance`, `Age`)." + "Here, we use the test-specific context to establish differentiated risk thresholds across features.\n", + "\n", + "Rather than applying uniform criteria, the context allows for specific requirements for critical financial features (`CreditScore`, `Balance`, `Age`)." ] }, { @@ -615,6 +754,13 @@ "os.environ[\"VALIDMIND_LLM_DESCRIPTIONS_CONTEXT\"] = context" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "With the test-specific context set, generate a test description for the `MissingValues` test for review:" + ] + }, { "cell_type": "code", "execution_count": null, @@ -636,9 +782,13 @@ "cell_type": "markdown", "metadata": {}, "source": [ + "\n", + "\n", "#### Unique Rows\n", "\n", - "The test-specific context establishes variable-specific thresholds based on business expectations. Rather than applying uniform criteria, it recognizes that high variability is expected in features like `EstimatedSalary` (>90%) and `Balance` (>50%), while enforcing strict limits on categorical features like `Geography` (<5 values), ensuring meaningful validation aligned with banking data characteristics." + "This example context establishes variable-specific thresholds based on business expectations.\n", + "\n", + "Rather than applying uniform criteria, it recognizes that high variability is expected in features like `EstimatedSalary` (>90%) and `Balance` (>50%), while enforcing strict limits on categorical features like `Geography` (<5 values), ensuring meaningful validation aligned with banking data characteristics." ] }, { @@ -678,6 +828,13 @@ "os.environ[\"VALIDMIND_LLM_DESCRIPTIONS_CONTEXT\"] = context" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "With the test-specific context set, generate a test description for the `UniqueRows` test for review:" + ] + }, { "cell_type": "code", "execution_count": null, @@ -699,9 +856,14 @@ "cell_type": "markdown", "metadata": {}, "source": [ + "\n", + "\n", "#### Too Many Zero Values\n", "\n", - "The context in this case is used to provide meaning and expectations for different variables. For instance, zero values in `Balance` and `Tenure` indicate risk, whereas zeros in binary variables like `HasCrCard` or `IsActiveMember` are expected. This tailored context ensures that the analysis accurately reflects the business significance of zero values across different features." 
+ "Here, test-specific context is used to provide meaning and expectations for different variables:\n", + "\n", + "- For instance, zero values in `Balance` and `Tenure` indicate risk, whereas zeros in binary variables like `HasCrCard` or `IsActiveMember` are expected.\n", + "- This tailored context ensures that the analysis accurately reflects the business significance of zero values across different features." ] }, { @@ -738,6 +900,13 @@ "os.environ[\"VALIDMIND_LLM_DESCRIPTIONS_CONTEXT\"] = context" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "With the test-specific context set, generate a test description for the `TooManyZeroValues` test for review:" + ] + }, { "cell_type": "code", "execution_count": null, @@ -759,9 +928,11 @@ "cell_type": "markdown", "metadata": {}, "source": [ + "\n", + "\n", "#### IQR Outliers Table\n", "\n", - "In this case we use test-specific context to incorporate risk levels tailored to key variables like `CreditScore`, `Age`, and `NumOfProducts` that otherwise would not be considered for outlier analysis if we ran the test without context where all variables would be evaluated without any business criteria." + "In this case, we use test-specific context to incorporate risk levels tailored to key variables, like `CreditScore`, `Age`, and `NumOfProducts`, that otherwise would not be considered for outlier analysis if we ran the test without context where all variables would be evaluated without any business criteria." ] }, { @@ -791,6 +962,13 @@ "os.environ[\"VALIDMIND_LLM_DESCRIPTIONS_CONTEXT\"] = context" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "With the test-specific context set, generate a test description for the `IQROutliersTable` test for review:" + ] + }, { "cell_type": "code", "execution_count": null, @@ -812,9 +990,13 @@ "cell_type": "markdown", "metadata": {}, "source": [ + "\n", + "\n", "#### Descriptive Statistics\n", "\n", - "The test-specific context is used in this case to provide risk-based thresholds aligned with the bank's policy. For instance, `CreditScore` ranges of 550-850 are considered low risk based on standard credit assessment practices, while `Balance` thresholds reflect typical retail banking ranges." + "Test-specific context is used in this case to provide risk-based thresholds aligned with the bank's policy.\n", + "\n", + "For instance, `CreditScore` ranges of 550-850 are considered low risk based on standard credit assessment practices, while `Balance` thresholds reflect typical retail banking ranges." ] }, { @@ -868,6 +1050,13 @@ "os.environ[\"VALIDMIND_LLM_DESCRIPTIONS_CONTEXT\"] = context" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "With the test-specific context set, generate a test description for the `DescriptiveStatistics` test for review:" + ] + }, { "cell_type": "code", "execution_count": null, @@ -886,9 +1075,13 @@ "cell_type": "markdown", "metadata": {}, "source": [ + "\n", + "\n", "#### Pearson Correlation Matrix\n", "\n", - "For this test, the context provides meaningful correlation ranges between specific variable pairs based on business criteria. For example, while a general correlation analysis might flag any correlation above 0.7 as concerning, the test-specific context specifies that `Balance` and `NumOfProducts` should maintain a negative correlation between -0.4 and 0, reflecting expected banking relationships. 
" + "For this test, the context provides meaningful correlation ranges between specific variable pairs based on business criteria.\n", + "\n", + "For example, while a general correlation analysis might flag any correlation above 0.7 as concerning, the test-specific context specifies that `Balance` and `NumOfProducts` should maintain a negative correlation between -0.4 and 0, reflecting expected banking relationships." ] }, { @@ -929,6 +1122,13 @@ "os.environ[\"VALIDMIND_LLM_DESCRIPTIONS_CONTEXT\"] = context" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "With the test-specific context set, generate a test description for the `PearsonCorrelationMatrix` test for review:" + ] + }, { "cell_type": "code", "execution_count": null, diff --git a/pyproject.toml b/pyproject.toml index a332f689a..2fe5f2181 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -10,7 +10,7 @@ description = "ValidMind Library" license = "Commercial License" name = "validmind" readme = "README.pypi.md" -version = "2.8.7" +version = "2.8.8" [tool.poetry.dependencies] python = ">=3.8.1,<3.12" diff --git a/validmind/__version__.py b/validmind/__version__.py index c2c8ceb46..4ff34019e 100644 --- a/validmind/__version__.py +++ b/validmind/__version__.py @@ -1 +1 @@ -__version__ = "2.8.7" +__version__ = "2.8.8"