https://github.com/dscripka/openWakeWord/blob/368c03716d1e92591906a84949bc477f3a834455/openwakeword/train.py#L486 — when `accumulated_samples < 128`, the gradients accumulated so far are cleared at the start of the next iteration by `self.optimizer.zero_grad()`, so the partial batch never contributes to an `optimizer.step()`.
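A minimal pure-Python sketch of the pattern being described (hypothetical names, not the actual `train.py` code; a stub optimizer stands in for torch): with `zero_grad()` at the top of the loop, any gradients from a batch that did not reach the 128-sample threshold are wiped before they can be applied.

```python
class StubOptimizer:
    """Stand-in for a torch optimizer; tracks one scalar 'gradient'."""
    def __init__(self):
        self.grad = 0.0
        self.steps = 0

    def zero_grad(self):
        self.grad = 0.0

    def step(self):
        self.steps += 1


def train(batch_sizes, threshold=128):
    opt = StubOptimizer()
    accumulated_samples = 0
    for batch_size in batch_sizes:
        opt.zero_grad()                # clears any partial gradients from last iteration
        opt.grad += float(batch_size)  # stand-in for loss.backward()
        accumulated_samples += batch_size
        if accumulated_samples >= threshold:
            opt.step()                 # gradients are only ever applied here
            accumulated_samples = 0
    return opt


# With 64-sample batches, every other zero_grad() discards the gradient
# contributed by the preceding partial batch, so each step() sees a
# gradient of 64 instead of the intended 128.
opt = train([64, 64, 64, 64])
```

In this toy trace, `opt.steps` is 2, but each step saw only half of the accumulated gradient that the 128-sample threshold implies.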