Self-normalizing neural networks in Deep Learning - June 2017

An interesting article about self-normalizing neural networks for Deep Learning.


The authors demonstrate the effectiveness of self-normalizing neural networks for Deep Learning.

Deep Learning has revolutionized vision via convolutional neural networks (CNNs) and natural language processing via recurrent neural networks (RNNs). However, success stories of Deep Learning with standard feed-forward neural networks (FNNs) are rare. FNNs that perform well are typically shallow and therefore cannot exploit many levels of abstract representations. We introduce self-normalizing neural networks (SNNs) to enable high-level abstract representations. While batch normalization requires explicit normalization, neuron activations of SNNs automatically converge towards zero mean and unit variance. The activation function of SNNs is the "scaled exponential linear unit" (SELU), which induces self-normalizing properties. Using the Banach fixed-point theorem, we prove that activations close to zero mean and unit variance that are propagated through many network layers will converge towards zero mean and unit variance, even in the presence of noise and perturbations.

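To make the self-normalizing claim concrete, here is a minimal NumPy sketch (not taken from the paper's code) of the SELU activation with the alpha and lambda constants reported in the paper, pushed through a stack of random linear layers. The network width, depth, and Gaussian weight initialization scaled by 1/sqrt(width) are illustrative assumptions; the point is only that the per-layer mean and variance stay close to 0 and 1.

import numpy as np

# SELU constants from the paper (arXiv:1706.02515); the network below
# (random Gaussian weights, fixed width) is a toy setup chosen only to
# show activations staying near zero mean / unit variance across layers.
ALPHA = 1.6732632423543772   # alpha from the paper's fixed-point derivation
LAMBDA = 1.0507009873554805  # lambda (scale) from the paper's fixed-point derivation

def selu(x):
    # SELU(x) = lambda * x                   for x > 0
    #           lambda * alpha * (e^x - 1)   for x <= 0
    return LAMBDA * np.where(x > 0, x, ALPHA * (np.exp(x) - 1.0))

rng = np.random.default_rng(0)
width = 512
x = rng.standard_normal((1000, width))   # inputs already ~ zero mean, unit variance

for layer in range(20):
    # Weights drawn from N(0, 1/width), matching the initialization the paper assumes.
    w = rng.standard_normal((width, width)) / np.sqrt(width)
    x = selu(x @ w)
    print(f"layer {layer + 1:2d}: mean = {x.mean():+.3f}, var = {x.var():.3f}")

Running this prints means near 0 and variances near 1 at every depth, which is the self-normalizing (fixed-point) behaviour the abstract refers to.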

The paper is available here:

https://arxiv.org/pdf/1706.02515.pdf
