
NeuralNetworks


Why is the Rectified Linear Unit (ReLU) a good activation function?
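In short: ReLU is cheap to compute (just max(0, x)), its gradient is exactly 1 for positive inputs so it does not saturate the way sigmoid/tanh do (which helps against vanishing gradients in deep networks), and it zeroes out negative inputs, giving sparse activations. A minimal PyTorch sketch of the gradient behaviour:

```python
import torch

# Minimal sketch: ReLU's gradient is 1 wherever the input is positive,
# so gradients pass through unchanged instead of shrinking as they would
# through a saturating sigmoid/tanh.
x = torch.linspace(-5, 5, 5, requires_grad=True)
torch.relu(x).sum().backward()
print(x.grad)  # tensor([0., 0., 0., 1., 1.]) -- zero for x <= 0, one for x > 0
```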


For AlexNet we usually pass 224×224 images (227×227 with padding). What happens if we pass an image with a lower or higher resolution? Does it fail, overfit, or give poor accuracy?

(AlexNet architecture diagram)

With a fully connected layer it will fail even if the resolution changes by a single pixel, because at the point of flattening the number of nodes will not match what the fully connected layer expects. See the sketch below.
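A minimal PyTorch sketch of that shape mismatch (a hypothetical TinyNet, not the real AlexNet): the Linear layer's input size is fixed when the model is built, so flattening a feature map of a different spatial size raises an error.

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, stride=2)
        # For a 224x224 input the conv output is 8 x 111 x 111,
        # so the fully connected layer is sized for exactly that.
        self.fc = nn.Linear(8 * 111 * 111, 10)

    def forward(self, x):
        x = self.conv(x)
        x = torch.flatten(x, 1)  # flatten everything except the batch dimension
        return self.fc(x)

net = TinyNet()
print(net(torch.randn(1, 3, 224, 224)).shape)  # works: torch.Size([1, 10])

try:
    net(torch.randn(1, 3, 225, 225))  # one extra pixel per side
except RuntimeError as e:
    print("Shape mismatch at the fully connected layer:", e)
```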


What is the loss function in an autoencoder?
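It is typically a reconstruction loss that compares the decoder's output with the original input: mean squared error for real-valued inputs, or binary cross-entropy when inputs are scaled to [0, 1]. A minimal sketch, where x_hat stands in for a hypothetical autoencoder's reconstruction:

```python
import torch
import torch.nn as nn

x = torch.rand(16, 784)      # batch of flattened inputs scaled to [0, 1]
x_hat = torch.rand(16, 784)  # stand-in for the autoencoder's reconstruction

mse = nn.MSELoss()(x_hat, x)  # common choice for real-valued inputs
bce = nn.BCELoss()(x_hat, x)  # common choice when inputs lie in [0, 1]
print(mse.item(), bce.item())
```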


What is global average pooling?

https://stats.stackexchange.com/a/308218
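In short, global average pooling collapses each feature map to a single value by averaging over all spatial positions, often replacing the flatten-plus-fully-connected stage and making the network tolerant of the input resolution. A minimal PyTorch sketch:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 512, 7, 7)     # (batch, channels, height, width)
gap = nn.AdaptiveAvgPool2d(1)     # average each channel down to 1x1
y = gap(x).flatten(1)
print(y.shape)                    # torch.Size([1, 512]), whatever the spatial size was
```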
