Activation Functions
Activation functions are a crucial component of neural networks: they introduce nonlinearity into the output of neurons, which is what allows a network to learn more than a linear mapping of its inputs. Many activation functions also squash a neuron's input into a bounded, more manageable range. There are several common choices, each with its own properties and use cases.
Here are some common activation functions, each with a short NumPy implementation:
- Sigmoid function: The sigmoid function maps any input value to a value between 0 and 1. It is typically used in the output layer of binary classification problems.
import numpy as np

def sigmoid(x):
    # maps any real input to the range (0, 1)
    return 1 / (1 + np.exp(-x))
- ReLU function: The ReLU (rectified linear unit) function is a popular activation function in deep learning. It returns the input value if it is positive, and 0 otherwise.
def relu(x):
    # np.maximum works element-wise on arrays as well as scalars;
    # the built-in max(0, x) would fail on a NumPy array
    return np.maximum(0, x)
- Tanh function: The tanh function is similar to the sigmoid function but maps input values to a range between -1 and 1. It is often used in the hidden layers of neural networks.
def tanh(x):
    # maps any real input to the range (-1, 1)
    return np.tanh(x)
- Softmax function: The softmax function is used in the output layer of multi-class classification problems. It maps input values to a probability distribution over the different classes.
def softmax(x):
    # subtracting the max improves numerical stability without changing the result
    exp_values = np.exp(x - np.max(x))
    return exp_values / np.sum(exp_values, axis=0)
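As a quick sanity check, the functions above can be applied to a small NumPy array (this assumes the definitions given earlier are in scope):

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])

print(sigmoid(x))   # every value lies strictly between 0 and 1
print(relu(x))      # negative inputs become 0, positive inputs pass through
print(tanh(x))      # every value lies strictly between -1 and 1

probs = softmax(np.array([1.0, 2.0, 3.0]))
print(probs, probs.sum())  # a probability distribution: the entries sum to 1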
These are just a few examples of activation functions, and there are many more to choose from depending on the problem you are trying to solve. It's important to experiment with different activation functions to find the one that works best for your specific use case.
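For instance, one widely used variant not shown above is the leaky ReLU, which passes a small fraction of negative inputs through instead of zeroing them out. A minimal sketch (the slope of 0.01 is just a typical default, not a required value):

def leaky_relu(x, alpha=0.01):
    # like ReLU, but negative inputs are scaled by a small slope instead of clipped to 0
    return np.where(x > 0, x, alpha * x)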