ReLU, Sigmoid & Tanh Activation Functions
Why do we use the ReLU activation function instead of sigmoid or tanh? And why do these activation functions suffer from the vanishing gradient problem?
Let’s start our deep learning journey with activation functions, a building block we will need before moving on to convolutional neural networks. In this blog, we will get a basic idea of ReLU, sigmoid, and tanh, and why ReLU is usually preferred. This is the first blog in the series, so stay tuned!
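To make the question concrete before we dig into the theory, here is a minimal NumPy sketch (an illustration added for this discussion, not a fixed recipe) that evaluates the derivatives of sigmoid, tanh, and ReLU at a few inputs. The saturating functions' gradients shrink toward zero for large |x|, which is where the vanishing gradient problem comes from, while ReLU's gradient stays at 1 for every positive input.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def d_sigmoid(x):
    s = sigmoid(x)
    return s * (1.0 - s)          # peaks at 0.25 when x = 0, decays toward 0 for large |x|

def d_tanh(x):
    return 1.0 - np.tanh(x) ** 2  # peaks at 1.0 when x = 0, also decays toward 0

def d_relu(x):
    return (x > 0).astype(float)  # exactly 1 for positive inputs, 0 otherwise

x = np.array([-5.0, -2.0, 0.0, 2.0, 5.0])
print("x          :", x)
print("sigmoid'(x):", np.round(d_sigmoid(x), 4))
print("tanh'(x)   :", np.round(d_tanh(x), 4))
print("relu'(x)   :", np.round(d_relu(x), 4))
```

Running this shows that sigmoid's derivative never exceeds 0.25 and that both saturating derivatives are almost zero at |x| = 5, so backpropagating through many such layers multiplies many small numbers together. We will unpack exactly why that matters in the rest of this post.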