What are hidden layers?
Hidden layers are intermediate layers that sit between the input and output layers of a neural network. They apply non-linear transformations to their inputs, and one or more hidden layers are what enable a neural network to learn complex tasks and achieve strong performance.
Hidden layers are not visible to external systems and are “private” to the neural network. Their number and structure vary with the function and architecture of the network, and, similarly, their outputs vary depending on their associated weights.
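To make the structure concrete, here is a minimal sketch, assuming NumPy and illustrative layer sizes (3 inputs, 4 hidden units, 1 output) that are my own choice rather than anything from the text above. It shows a single forward pass; the hidden activations `h` are exactly the values that stay “private” inside the network.

```python
import numpy as np

def relu(z):
    # Non-linear activation applied inside the hidden layer
    return np.maximum(0.0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(42)
x = rng.random(3)                  # 3 input features

# Hidden layer: a linear map followed by a non-linear activation
W_hidden = rng.normal(size=(3, 4))
b_hidden = np.zeros(4)
h = relu(x @ W_hidden + b_hidden)  # hidden activations, never exposed to the caller

# Output layer
W_out = rng.normal(size=(4, 1))
b_out = np.zeros(1)
y = sigmoid(h @ W_out + b_out)
print(y)                           # the only value the outside world sees
```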
Why are hidden layers important?
Hidden layers are the reason why neural networks are able to capture very complex relationships and achieve impressive performance in many tasks. To better understand this, let us first examine a neural network without any hidden layer, for example one with 3 input features and a single output.
Based on the equation for computing the output of a neuron, the output value is a linear combination of the inputs, optionally passed through a non-linearity. A network with no hidden layer is therefore essentially a linear model, much like linear regression. As we already know, linear regression attempts to fit a linear equation to the observed data; in most machine learning tasks, a linear relationship is not enough to capture the complexity of the task, and the linear model fails.
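As a rough sketch (assuming NumPy; the weight values are made up purely for illustration), the entire no-hidden-layer network reduces to one linear combination followed by an activation, which is exactly the form of a logistic-regression model:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One weight per input feature plus a bias; the values are illustrative only.
w = np.array([0.5, -1.0, 2.0])
b = 0.1

def predict(x):
    # Linear combination of the 3 inputs + non-linearity, and nothing else:
    # the decision boundary in input space remains a flat (linear) surface.
    return sigmoid(x @ w + b)

print(predict(np.array([1.0, 0.0, 0.5])))
```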
This is where hidden layers come in: they enable the neural network to learn very complex non-linear functions. By adding one or more hidden layers, the network can break the mapping from inputs to outputs into a series of intermediate transformations of the data, with each hidden layer specialized to produce a particular kind of representation. For example, in a CNN used for object recognition, a hidden layer that identifies wheels cannot by itself identify a car; but combined with additional layers that identify windows, a large metallic body, and headlights, the network can identify likely cars within visual data.
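A classic illustration of this (my example, not from the original discussion) is the XOR function, which no linear model can represent but which a network with a single hidden layer can learn. The sketch below assumes NumPy and trains a tiny 2-4-1 network with sigmoid activations and squared-error loss:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR data: 2 inputs, 1 output
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer with 4 units (an arbitrary, illustrative width)
W1 = rng.normal(size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)          # hidden-layer activations
    p = sigmoid(h @ W2 + b2)          # network output
    # Backward pass: gradients of the squared-error loss
    d_p = (p - y) * p * (1 - p)
    d_h = (d_p @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_p
    b2 -= lr * d_p.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

# Usually close to the XOR targets [[0], [1], [1], [0]];
# a different random seed may need more steps to converge.
print(p.round(2))
```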
How many hidden layers do we need?
There is no definitive answer to this question, as it depends on many factors such as the type of problem, the size and quality of data, the computational resources available, and so on. However, some general guidelines can be followed:
- For simple problems that can be solved by a linear model, no hidden layer is needed.
- For problems that require some non-linearity but are not very complex, one hidden layer may suffice.
- For problems that are more complex and require higher-level features or abstractions, two or more hidden layers may be needed.
- Adding more hidden layers can increase the expressive power of the neural network, but it can also increase the risk of overfitting and make training more difficult.
Therefore, it is advisable to start with a small number of hidden layers and increase them gradually until we find a good trade-off between performance and complexity.
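As a concrete sketch of that advice, assuming scikit-learn and a toy two-moons dataset (both my choices for illustration), one could compare a linear baseline against networks with one and two hidden layers by cross-validation and keep the simplest configuration that performs well:

```python
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# A small, non-linearly separable toy dataset
X, y = make_moons(n_samples=500, noise=0.2, random_state=0)

candidates = {
    "no hidden layer (linear)": LogisticRegression(max_iter=1000),
    "one hidden layer": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    "two hidden layers": MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0),
}

for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```

The layer width of 16 is arbitrary here; in practice the width, like the depth, is tuned against validation performance rather than fixed in advance.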
Conclusion
In this blog post, we have learned what hidden layers are, why they are important for neural networks, and how many hidden layers we may need for different problems. We have also seen some examples of how hidden layers can enable neural networks to learn complex non-linear functions and achieve excellent performance in many tasks.
I hope you enjoyed reading this blog post and learned something new. If you have any questions or feedback, please feel free to leave a comment below. Thank you for your attention!