Cross-entropy loss, also known as log loss, measures the performance of a classification model whose output is a probability value between 0 and 1. For multi-class classification tasks, we use the categorical cross-entropy loss.
### Mathematical Background

For a single sample with $C$ classes, the categorical cross-entropy loss is defined as:

$L = -\sum_{c=1}^{C} y_c \log(p_c)$

where:

- $y_c$ is a binary indicator (0 or 1) of whether class $c$ is the correct classification for the sample
- $p_c$ is the predicted probability that the sample belongs to class $c$
- $C$ is the number of classes
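Because $y_c$ is 1 only for the true class, the sum collapses to the negative log of the probability assigned to that class. As a quick illustration (the numbers here are made up): with $C = 3$, true class 1, and predictions $p = (0.7, 0.2, 0.1)$, the loss is $L = -\log(0.7) \approx 0.357$.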
### Implementation Requirements
Your task is to implement a function that computes the average cross-entropy loss across multiple samples:
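The statement ends before giving a concrete signature, so the following is a minimal sketch of one plausible reading: it assumes one-hot encoded labels, an (N, C) matrix of predicted probabilities, and NumPy; the name `cross_entropy_loss` and the `eps` clipping parameter are illustrative choices, not part of the original specification.

```python
import numpy as np

def cross_entropy_loss(y_true, y_pred, eps=1e-12):
    """Average categorical cross-entropy over a batch.

    y_true: (N, C) array of one-hot encoded labels
    y_pred: (N, C) array of predicted probabilities (rows sum to 1)
    """
    # Clip predictions away from 0 so log() never sees an exact zero.
    y_pred = np.clip(y_pred, eps, 1.0)
    # Per-sample loss: -sum_c y_c * log(p_c), averaged over the batch.
    per_sample = -np.sum(y_true * np.log(y_pred), axis=1)
    return float(np.mean(per_sample))

# Example: two samples, three classes.
y_true = np.array([[1, 0, 0], [0, 1, 0]])
y_pred = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]])
print(cross_entropy_loss(y_true, y_pred))  # (-log 0.7 - log 0.8) / 2 ≈ 0.290
```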
Early stopping is a regularization technique that helps prevent overfitting in machine learning models. Your task is to implement the early stopping decision logic based on the validation loss history.
### Problem Description
Given a sequence of validation losses from model training, determine if training should be stopped based on the following criteria:

- Training should stop if the validation loss hasn't improved (decreased) for a specified number of epochs (patience)
- An improvement is only counted if the loss decreases by more than a minimum threshold (min_delta)
- The best model is the one with the lowest validation loss

### Example

Consider the following validation losses: [0.9, 0.8, 0.75, 0.77, 0.76, 0.77, 0.78]
- With patience=2 and min_delta=0.01 (epochs indexed from 0):
  - The best loss is 0.75, reached at epoch 2
  - No improvement greater than 0.01 occurs over the next 2 epochs
  - Training should stop at epoch 4
### Function Requirements
- Return both the epoch to stop at and the best epoch
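Taken together, the criteria above pin down the logic fairly tightly. The sketch below is one plausible implementation, assuming 0-based epoch indices and that the last epoch is returned as the stopping point when patience is never exhausted; the name `early_stopping` and its exact signature are illustrative rather than given by the problem.

```python
def early_stopping(val_losses, patience=2, min_delta=0.01):
    """Return (stop_epoch, best_epoch) for a validation-loss history.

    An epoch counts as an improvement only when its loss undercuts the
    best loss so far by more than min_delta. Training stops at the first
    epoch where the run of non-improving epochs reaches patience.
    """
    best_epoch = 0
    best_loss = val_losses[0]
    epochs_without_improvement = 0

    for epoch in range(1, len(val_losses)):
        if best_loss - val_losses[epoch] > min_delta:
            best_loss = val_losses[epoch]
            best_epoch = epoch
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                return epoch, best_epoch

    # Patience never ran out; assumed behavior: stop at the final epoch.
    return len(val_losses) - 1, best_epoch

# Reproduces the worked example above.
losses = [0.9, 0.8, 0.75, 0.77, 0.76, 0.77, 0.78]
print(early_stopping(losses, patience=2, min_delta=0.01))  # (4, 2)
```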