# Implementing Adadelta Optimizer

## Introduction
Adadelta is an extension of Adagrad that addresses two of its key issues: the aggressive, monotonically decreasing learning rate and the need for manual learning rate tuning. While Adagrad accumulates all past squared gradients, Adadelta restricts the influence of past gradients to a window of size w. Instead of explicitly storing w past gradients, it approximates this window with an exponential moving average governed by a decay rate ρ, so the accumulated statistic does not grow without bound and the effective learning rate does not shrink toward zero. Additionally, Adadelta corrects the units of its updates automatically, eliminating the need for a manually set learning rate.

## Learning Objectives
- Understand how the Adadelta optimizer works
- Learn to implement adaptive learning rates with moving averages

## Theory
Adadelta uses two main ideas:
1. An exponential moving average of squared gradients that approximates a window of size w
2. Automatic unit correction via the ratio of the RMS of past parameter updates to the RMS of gradients

The key equations are:

$v_t = \rho v_{t-1} + (1-\rho)g_t^2$ (Exponential moving average of squared gradients)

The above approximates a window size of $w \approx \dfrac{1}{1-\rho}$

$\Delta\theta_t = -\dfrac{\sqrt{u_{t-1} + \epsilon}}{\sqrt{v_t + \epsilon}} \cdot g_t$ (Parameter update with unit correction)

$u_t = \rho u_{t-1} + (1-\rho)\Delta\theta_t^2$ (Exponential moving average of squared parameter updates)

Where:
- $v_t$ is the exponential moving average of squared gradients (decay rate ρ)
- $u_t$ is the exponential moving average of squared parameter updates (decay rate ρ)
- $\rho$ is the decay rate (typically 0.9) that controls the effective window size w ≈ 1/(1-ρ)
- $\epsilon$ is a small constant for numerical stability
- $g_t$ is the gradient at time step t

The ratio $\dfrac{\sqrt{u_{t-1} + \epsilon}}{\sqrt{v_t + \epsilon}}$ acts as an adaptive learning rate: it is the ratio of the root mean squared (RMS) of past parameter updates to the RMS of gradients, which matches the units of the update to the units of the parameters and makes the algorithm robust to different parameter scales. Unlike Adagrad, Adadelta therefore requires no manually set learning rate, which is especially useful when hyperparameter tuning is difficult.
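
To make the equations concrete, here is a minimal NumPy sketch of a single update step. The function name `adadelta_step` is illustrative only; the exercise below asks for a function called `adadelta_optimizer` with this kind of signature.

```python
import numpy as np

def adadelta_step(theta, grad, v, u, rho=0.9, epsilon=1e-8):
    """One Adadelta update, transcribing the three equations above.

    theta, grad, v, and u may be Python floats or NumPy arrays of the same shape.
    """
    v = rho * v + (1 - rho) * grad**2                            # EMA of squared gradients
    delta = -np.sqrt(u + epsilon) / np.sqrt(v + epsilon) * grad  # unit-corrected update
    u = rho * u + (1 - rho) * delta**2                           # EMA of squared updates
    return theta + delta, v, u
```

Note the ordering: $v_t$ is refreshed before the update is computed, while $u_t$ is refreshed with the current $\Delta\theta_t$ afterwards, exactly as in the equations.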

Read more at:

1. Zeiler, M. D. (2012). ADADELTA: An Adaptive Learning Rate Method. [arXiv:1212.5701](https://arxiv.org/abs/1212.5701)
2. Ruder, S. (2017). An overview of gradient descent optimization algorithms. [arXiv:1609.04747](https://arxiv.org/pdf/1609.04747)

## Problem Statement
Implement the Adadelta optimizer update step function. Your function should take the current parameter value, gradient, and accumulated statistics as inputs, and return the updated parameter value and new accumulated statistics.

### Input Format
The function should accept:
- parameter: Current parameter value
- grad: Current gradient
- v: Exponentially decaying average of squared gradients
- u: Exponentially decaying average of squared parameter updates
- rho: Decay rate (default=0.9)
- epsilon: Small constant for numerical stability (default=1e-8)

### Output Format
Return a tuple: (updated_parameter, updated_v, updated_u)

## Example
```python
# Example usage:
parameter = 1.0
grad = 0.1
v = 1.0  # running averages start at 0.0 in practice; nonzero values here mimic a mid-training state
u = 1.0

new_param, new_v, new_u = adadelta_optimizer(parameter, grad, v, u)
```
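
For these inputs, plugging the defaults rho = 0.9 and epsilon = 1e-8 into the equations above gives roughly $v_t = 0.9 \cdot 1.0 + 0.1 \cdot 0.1^2 = 0.901$, an update $\Delta\theta_t \approx -\sqrt{1.0/0.901} \cdot 0.1 \approx -0.105$, and hence `new_param` ≈ 0.895 and `new_u` ≈ 0.9011 (epsilon is too small to change these figures noticeably).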

## Tips
- Initialize v and u to zeros at the start of training
- Use numpy for numerical operations
- Test with both scalar and array inputs (see the sketch after this list)
- The learning rate is determined automatically by the algorithm
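
Once your `adadelta_optimizer` is written to the signature above, one way to exercise the scalar-and-array tip is to check that a single array call agrees with element-wise scalar calls. The values below are arbitrary, and the snippet assumes your function is available in the current scope:

```python
import numpy as np

# Assumes adadelta_optimizer(parameter, grad, v, u, rho=0.9, epsilon=1e-8)
# as specified in the Problem Statement.
params = np.array([1.0, -2.0, 0.5])
grads = np.array([0.1, -0.3, 0.05])
v = np.zeros_like(params)  # running averages start at zero
u = np.zeros_like(params)

new_params, new_v, new_u = adadelta_optimizer(params, grads, v, u)

# Each element of the array call should match the corresponding scalar call.
for i in range(len(params)):
    p_i, v_i, u_i = adadelta_optimizer(params[i], grads[i], 0.0, 0.0)
    np.testing.assert_allclose(new_params[i], p_i)
    np.testing.assert_allclose(new_v[i], v_i)
    np.testing.assert_allclose(new_u[i], u_i)
```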

---