If you have been exploring the Machine Learning area, you have probably heard the "neural network" buzzword a lot. Neural networks are everywhere these days: in computer science, biology, civil engineering, and many other branches of science. Here, the goal is to become familiar with what a neural network is.

You will learn:

The definition of neural networks.
The architecture of a neural network.
Neurons, the computational units of neural networks.
The layers of a neural network.

On the other hand, this tutorial does NOT cover:

The details of how to train and test a neural network.
The different architectures of neural networks.
The applications in which neural networks shine.
The detailed history behind neural networks.
How to implement neural networks with code.

Definition

In the past decade, the best AI systems, such as speech processing and face recognition systems, have emerged from a method named "deep learning." Deep learning is an AI approach that uses neural networks. Neural networks were first proposed in the mid-1950s and were underestimated for decades.

A simple, yet very informative, definition of neural networks is the following:

“A computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs.”

Dr. Robert Hecht-Nielsen, inventor of one of the first neurocomputers.

Structure

The processing units of a neural network are called neurons; they are where the computations take place. In the human brain, a neuron fires when it receives sufficient stimuli. A neural network consists of several layers made of such nodes, cascaded together as below:

  1. Input layer: The input data is fed to the network through this layer. It is basically the raw data, or the data features, represented as the input to the neural network.
  2. Hidden layers: One or more hidden layers process the information and feed it forward. The more hidden layers we have, the deeper the network is!
  3. Output layer: The last layer of the neural network, which generates the desired output. This desired output can be, for example, the predicted class of the object we used as the input. (A small code sketch of this layered structure follows the figure below.)
Figure: An example of a neural network predicting the category of an image. This is called supervised learning: labeled data is fed to the neural network, and the system learns to predict the category of the data.
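To make the layered structure concrete, here is a minimal sketch in Python (using NumPy) of a fully-connected network with one input layer, one hidden layer, and one output layer. The layer sizes, the random weights, and the sigmoid activation are illustrative assumptions, not values taken from the figure above.

import numpy as np

def sigmoid(x):
    # A common activation function; chosen here only for illustration.
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative layer sizes: 4 input features, 3 hidden neurons, 2 output neurons.
rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(3, 4))   # weights from the input layer to the hidden layer
W_output = rng.normal(size=(2, 3))   # weights from the hidden layer to the output layer

x = np.array([0.5, -1.2, 3.0, 0.7])  # raw input features (input layer)

hidden = sigmoid(W_hidden @ x)       # hidden layer: weighted sums followed by the activation
output = sigmoid(W_output @ hidden)  # output layer: e.g., scores for 2 classes

print(hidden, output)

Each line of matrix-vector multiplication is exactly the "weighted sum per neuron" described in the next paragraph, applied to a whole layer at once.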

In a neural network, each neuron receives a stimulus from some of the neurons in the previous layer. This stimulus is the weighted sum of the activation values of the connected neurons in the previous layer. In the figure above, each neuron is connected to all the neurons in the previous layer; this is called a fully-connected neural network. To clarify, let's use some simple notation. We denote the i^{th} neuron in the j^{th} layer as \mathcal{N}_{i,j}. In a fully-connected neural network, the output of the neuron \mathcal{N}_{i,j} can be calculated as below:

    \[\mathcal{N}_{i,j}=f\Big(\sum_{k} w_{k,i} \, \mathcal{N}_{k,j-1}\Big)\]

where w_{k,i} is the weight of the connection from neuron \mathcal{N}_{k,j-1} in the previous layer to the neuron \mathcal{N}_{i,j}, and f(\cdot) is some activation function. Activation functions are specific functions that neurons apply to produce their output. For now, do not worry about activation functions; we will talk about them later.

Figure: An example of a neuron computation, including the weighted sum and the activation.
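The formula above maps directly to a few lines of code. The following is a minimal sketch in Python (using NumPy) of a single neuron's computation; the example weights, the inputs, and the sigmoid activation are illustrative assumptions.

import numpy as np

def sigmoid(z):
    # An example activation function f(.); any activation could be used here.
    return 1.0 / (1.0 + np.exp(-z))

def neuron_output(weights, prev_activations):
    # Weighted sum of the previous layer's activations, passed through f(.):
    # N_{i,j} = f(sum_k w_{k,i} * N_{k,j-1})
    z = np.dot(weights, prev_activations)
    return sigmoid(z)

# Illustrative values: three neurons in the previous layer feeding one neuron.
w = np.array([0.2, -0.5, 0.9])       # w_{k,i} for k = 1..3
prev = np.array([1.0, 0.3, -2.0])    # N_{k,j-1} for k = 1..3

print(neuron_output(w, prev))        # the output of neuron N_{i,j}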

Big picture of the learning paradigm

Most neural networks employ some sort of "learning paradigm" that changes the weights of the connections by observing and reasoning from the input data. In a sense, neural networks learn by example, like us. If you show a picture of a cat to a child and tell them "it is a cat," they can recognize that image the next time. Furthermore, if the model is smart enough (like a child after learning for a while), it can recognize and categorize other kinds of cats as cats as well.

Conclusion

In this tutorial, we discussed what neural networks are. Neural networks are the architectures used to conduct Deep Learning, an approach to Machine Learning. You learned the architecture of neural networks from their smallest computational components, called neurons, to their higher-level components, called layers. Here, you gained enough knowledge to start the path of learning deep learning. It is a long journey; however, like all other journeys, it starts with a single step! Do you have any questions or ambiguities? Any views about what you just learned? Feel free to share them here.
