Metrics over time are most commonly used for the ongoing monitoring of a model's performance once it is deployed.\n",
+ "\n",
+ "While you can add Metrics Over Time blocks to model documentation, we recommend first enabling ongoing monitoring for your model to maximize the potential of your performance data."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "::: {.content-hidden when-format=\"html\"}\n",
+ "## Contents \n",
+ "- [About ValidMind](#toc1_) \n",
+ " - [Before you begin](#toc1_1_) \n",
+ " - [New to ValidMind?](#toc1_2_) \n",
+ " - [Key concepts](#toc1_3_) \n",
+ "- [Install the ValidMind Library](#toc2_) \n",
+ "- [Initialize the ValidMind Library](#toc3_) \n",
+ " - [Get your code snippet](#toc3_1_) \n",
+ "- [Initialize the Python environment](#toc4_) \n",
+ "- [Load demo model](#toc5_) \n",
+ "- [Log metrics](#toc6_) \n",
+ " - [Run unit metrics](#toc6_1_) \n",
+ " - [Log unit metrics over time](#toc6_2_) \n",
+ " - [Pass thresholds](#toc6_3_) \n",
+ " - [Log multiple metrics with custom thresholds](#toc6_4_) \n",
+ "\n",
+ ":::\n",
+ "\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "