Merged
17 commits
4 changes: 3 additions & 1 deletion Project.toml
@@ -1,11 +1,12 @@
name = "MLJ"
uuid = "add582a8-e3ab-11e8-2d5e-e98b27df1bc7"
version = "0.23.1"
version = "0.23.2"
authors = ["Anthony D. Blaom <anthony.blaom@gmail.com>"]

[deps]
CategoricalArrays = "324d7699-5711-5eae-9e2f-1d82baa6b597"
ComputationalResources = "ed09eef8-17a6-5b46-8889-db040fac31e3"
DataAPI = "9a962f9c-6df0-11e9-0e5d-c546b8b5ee8a"
Distributed = "8ba89e20-285c-5b6f-9357-94700520ee1b"
Distributions = "31c24e10-a181-5473-b8eb-7969acd0382f"
FeatureSelection = "33837fe5-dbff-4c9e-8c2f-c5612fe2b8b6"
@@ -32,6 +33,7 @@ Tables = "bd369af6-aec1-5ad0-b16a-f7cc5008161c"
[compat]
CategoricalArrays = "1"
ComputationalResources = "0.3"
DataAPI = "1.16"
Distributions = "0.21,0.22,0.23, 0.24, 0.25"
FeatureSelection = "0.2"
MLJBalancing = "0.1"
5 changes: 5 additions & 0 deletions docs/Project.toml
@@ -25,4 +25,9 @@ TypedTables = "9d95f2ec-7b3d-5a63-8d20-e2491e220bb9"

[compat]
Documenter = "1"
ScientificTypesBase = "3.1.0"
IterationControl = "0.5.4"
EvoTrees = "0.18.5"
MLJClusteringInterface = "0.1.13"
TypedTables = "1.4.6"
julia = "1.6"
49 changes: 41 additions & 8 deletions docs/src/evaluating_model_performance.md
@@ -5,12 +5,8 @@ selected losses or scores, using the [`evaluate`](@ref) or [`evaluate!`](@ref) m
For more on available performance measures, see [Performance
Measures](performance_measures.md).

In addition to hold-out and cross-validation, the user can specify
an explicit list of train/test pairs of row indices for resampling, or
define new resampling strategies.

For simultaneously evaluating *multiple* models, see "[Comparing models of different type
and nested cross-validation](@ref explicit)".
In addition to hold-out and cross-validation, the user can specify an explicit list of
train/test pairs of row indices for resampling, or define new resampling strategies.

For externally logging the outcomes of performance evaluation experiments, see [Logging
Workflows](@ref)
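
The explicit train/test-pair option mentioned above might be sketched as follows (a sketch only; `model`, `X`, and `y` are assumed to be defined, and the row ranges are purely illustrative):

```julia
# Two user-specified folds, each a (train_rows, test_rows) pair of row
# indices into X and y: train on the first half and test on the second,
# then swap the roles.
pairs = [(1:50, 51:100), (51:100, 1:50)]
evaluate(model, X, y; resampling=pairs, measure=rms, verbosity=0)
```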
@@ -48,8 +44,8 @@ machine potentially change.)
Multiple measures are specified as a vector:

```@repl evaluation_of_supervised_models
evaluate!(
mach,
performance_evaluation = evaluate(
model, X, y;
resampling=cv,
measures=[l1, rms, rmslp1],
verbosity=0,
@@ -58,6 +54,42 @@ evaluate!(

[Custom measures](@ref) can also be provided.


## Multiple models

To create a short named tuple summary of a performance evaluation, one can apply the
[`describe`](@ref) method:

```@repl evaluation_of_supervised_models
describe(performance_evaluation)
```

This is useful when tabulating performance evaluations for multiple models, which you can
do by providing a vector of models in place of `model` in your `evaluate` command. The
models can also include tags to appear in the final table, as shown in the following
example:

```@repl evaluation_of_supervised_models
performance_evaluations = evaluate(
["const" => ConstantRegressor(), "ridge" => model], X, y;
resampling=cv,
measures=[l1, rms, rmslp1],
verbosity=0,
)
table = describe.(performance_evaluations);
pretty(table)
```

One can also wrap a collection of models as a single "metamodel", which always represents
the model with the best performance evaluation estimate for the data on which it is
trained. See "[Comparing models of different type and nested cross-validation](@ref
explicit)".
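
Such a metamodel might be sketched using MLJ's `TunedModel` wrapper (a sketch under assumptions: `model1`, `model2`, `X`, and `y` are taken to be defined, and passing a `models` vector is assumed to select the explicit-search tuning strategy):

```julia
# Wrap two candidate models as one self-tuning "metamodel"; fitting
# evaluates each candidate under cross-validation and retrains the one
# with the best rms estimate on all supplied data.
multi_model = TunedModel(models=[model1, model2], resampling=CV(nfolds=6), measure=rms)
mach = machine(multi_model, X, y)
fit!(mach, verbosity=0)
```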


!!! info

    The [`describe`](@ref) method requires MLJBase 1.13.0 or later.


## Specifying weights

Per-observation weights can be passed to measures. If a measure does not support weights,
@@ -165,5 +197,6 @@ MLJBase.evaluate!
MLJBase.evaluate
MLJBase.PerformanceEvaluation
MLJBase.CompactPerformanceEvaluation
describe
default_logger
```
1 change: 1 addition & 0 deletions src/MLJ.jl
@@ -84,6 +84,7 @@ import Random: AbstractRNG, MersenneTwister
using ProgressMeter
using ComputationalResources
using ComputationalResources: CPUProcesses
@reexport import DataAPI.describe # extended by MLJBase >= 1.13.0

# to be extended:
import MLJBase: fit, update, clean!, fit!, predict, fitted_params,
4 changes: 4 additions & 0 deletions test/exported_names.jl
@@ -47,4 +47,8 @@ log_score

@test Transformer <: Unsupervised

# DataAPI

describe

true