You hear the "neural network" buzzword a lot if you have been exploring the Machine Learning field. Neural networks are everywhere these days: in computer science, biology, civil engineering, and many other branches of science. Here, the goal is to become familiar with what a neural network is.
You will learn:
On the other hand, this tutorial does NOT cover:
Definition
In the past decade, the best AI systems, such as speech processing and face recognition systems, have emerged from a method named "deep learning." Deep learning is an AI approach that uses neural networks. Neural networks were first proposed in the mid-1950s and were underestimated for decades.
A simple yet very informative definition of neural networks is as follows:
"A computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs."
— Dr. Robert Hecht-Nielsen, inventor of one of the first neurocomputers.
Structure
The processing units in a neural network, where the computations take place, are called neurons. In the human brain, a neuron fires when it receives sufficient stimuli. A neural network consists of different layers made of nodes. In a neural network, we have many different layers, cascaded together as below:
- Input layer: The input data is fed to the network through this layer. It is basically the raw data, or the data features, represented as input to the neural network.
- Hidden layers: We have one or more hidden layers that process the information and feed it forward. The more hidden layers we have, the deeper the network is!
- Output layer: The last layer of the neural network, which generates the desired output. This desired output can be, for example, the predicted class of the object we fed in as input.
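The cascade of layers above can be sketched as a tiny forward pass in NumPy. All sizes and weight values here are illustrative assumptions, not part of the original tutorial:

```python
import numpy as np

# A minimal sketch of a fully-connected network with one hidden layer.
# Layer sizes (3 inputs, 4 hidden neurons, 2 outputs) are made up for illustration.

rng = np.random.default_rng(0)

x = rng.normal(size=3)            # input layer: 3 raw features
W1 = rng.normal(size=(4, 3))      # weights: input -> hidden (4 hidden neurons)
W2 = rng.normal(size=(2, 4))      # weights: hidden -> output (2 classes)

def relu(z):
    # A common activation function: max(0, z) element-wise
    return np.maximum(z, 0)

hidden = relu(W1 @ x)             # hidden layer activations
output = W2 @ hidden              # output layer: one raw score per class

print(output.shape)               # (2,)
```

Each `@` is one layer feeding its activations forward to the next; adding more weight matrices between the input and output would make the network deeper.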

In a neural network, each neuron receives stimuli from some of the neurons in the previous layer. The stimuli are the weighted sum of the activation values of the connected neurons in the previous layer. In the above figure, each neuron is connected to all the neurons in the previous layer; this is called a fully-connected neural network. To clarify, let's use some simple notation. We denote neuron $i$ in layer $l$ as $a_i^{(l)}$. The output of neuron $a_i^{(l)}$ can be calculated as below for a fully-connected neural network:

$$a_i^{(l)} = f\Big(\sum_j w_{ij}^{(l)}\, a_j^{(l-1)}\Big)$$

where $w_{ij}^{(l)}$ are the weights of the connections coming from all neurons in the previous layer to neuron $a_i^{(l)}$, and $f$ is some activation function. Activation functions are specific functions that neurons use to ignite their output. For now, do not worry about activation functions; we will talk about them later.
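The computation of a single neuron's output can be sketched in a few lines. The input values, weights, and the choice of a sigmoid activation are all illustrative assumptions:

```python
import numpy as np

# Sketch: output of one neuron, given the activations of the previous layer.
# a_prev plays the role of a_j^(l-1); w plays the role of w_ij^(l).
# Values are made up for illustration.

a_prev = np.array([0.5, -1.0, 2.0])   # previous-layer activations
w = np.array([0.1, 0.4, -0.2])        # incoming connection weights

def sigmoid(z):
    # One common activation function f
    return 1.0 / (1.0 + np.exp(-z))

z = np.dot(w, a_prev)                 # weighted sum of previous activations
a = sigmoid(z)                        # activation function ignites the output
```

The weighted sum `z` is the stimulus the neuron receives; the activation function squashes it into the neuron's output value.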

Big picture of the learning paradigm
Most neural networks employ some sort of "learning paradigm" that changes the weights of the connections by observing and reasoning about the input data. In a way, neural networks learn by example, like us. If you show a picture of a cat to a child and tell them "it is a cat," they can recognize that image the next time. Furthermore, if the model is smart enough (like a child after learning for a while), it can recognize and categorize other kinds of cats as cats as well.
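The idea of "changing the weights by observing examples" can be sketched with one common learning rule: gradient descent on a squared error for a single linear neuron. The rule, the data, and the learning rate are all assumptions for illustration, not the tutorial's specific method:

```python
import numpy as np

# Sketch of the learning idea: repeatedly show an example and nudge the
# weights so the prediction moves toward the target label.

x = np.array([1.0, 2.0])      # example input
target = 1.0                  # the label ("it is a cat")
w = np.zeros(2)               # weights start uninformed
lr = 0.1                      # learning rate (step size)

for _ in range(50):           # observe the same example repeatedly
    y = np.dot(w, x)          # network's current prediction
    error = y - target        # how wrong the prediction is
    w -= lr * error * x       # gradient-descent step: reduce the error

print(round(np.dot(w, x), 3))  # prediction is now close to 1.0
```

Each pass shrinks the error, so after enough observations the weights encode what was learned from the example. Real networks apply the same idea across many weights and many examples at once.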
Conclusion
In this tutorial, we discussed what neural networks are. Neural networks are the architectures used to conduct Deep Learning as an approach to Machine Learning. You learned the architecture of neural networks, from their smallest computational components, called neurons, to their higher-level components, called layers. Here, you gained enough knowledge to start down the path of learning deep learning. It's a long journey; however, like all other journeys, it starts with a single step! Do you have any questions or anything that is unclear? Any views about what you just learned? Feel free to share them here.