Nowadays, Deep Learning is everywhere. Many stakeholders prefer it over traditional Machine Learning, not just because it promises better results, but also because it is easier to work with. Why do so many people prefer Deep Learning over traditional Machine Learning? Why is it winning the race, while traditional Machine Learning, with its hand-crafted feature engineering, is mostly set aside? Why is it the center of attention?
In a previous post, we defined Deep Learning as an approach to Machine Learning that uses Deep Neural Networks. Look how fast society's interest in the term “Deep Learning” has grown over the last decade, according to Google Trends data:
The vertical axis is the interest over time. According to Google:
Numbers represent search interest relative to the highest point on the chart for the given region and time. A value of 100 is the peak popularity for the term. A value of 50 means that the term is half as popular. A score of 0 means there was not enough data for this term.
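The scaling Google describes can be sketched in a few lines. The raw counts below are made-up numbers, purely to illustrate the rescaling, not actual Trends data:

```python
# Illustrative sketch of Google Trends-style scaling: each raw value is
# rescaled relative to the peak, so the maximum becomes 100 and a value
# half as popular becomes 50.
raw_interest = [12, 30, 60, 120, 90]  # hypothetical monthly search counts

peak = max(raw_interest)
scaled = [round(100 * value / peak) for value in raw_interest]

print(scaled)  # → [10, 25, 50, 100, 75]
```

Note how the month with 60 searches, exactly half the peak of 120, scores 50, matching Google's description.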
Let’s check another data point: the coverage of Deep Learning in the news!
Indeed, there must be something! Let’s explore.
Advantages of Deep Learning over Traditional Machine Learning
Easier to engineer
Feature engineering leverages domain knowledge to create data features that learning algorithms can understand, instead of using raw data directly. This process is exhausting, complicated, and requires a lot of knowledge and trial and error!
Traditional Machine Learning algorithms usually rely on hand-crafted features and rules. Although such an approach may give them the advantage of performing better than Deep Learning methods in the absence of a huge amount of data, it still adds a lot of setbacks and complexity to the feature engineering task. Furthermore, data is not as scarce a resource today as it was a decade ago. We discuss this later in the article.
On the other hand, it’s much easier to avoid this complexity by utilizing Deep Learning. Deep Learning models are capable of learning from data inherently, by simply presenting the data to them in a structured way (structured data is well-arranged and organized, so its constituent elements are easily accessible). Furthermore, many successful efforts have employed Deep Learning to learn from unstructured data (data that has no clear composition or formation, which makes it difficult to collect, process, and examine) as well.
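To make the contrast concrete, here is a minimal, hypothetical sketch of what hand-crafted feature engineering looks like: a raw sensor signal is reduced to a few summary statistics that a traditional learner would consume. A deep model would instead be fed the raw sequence directly and learn its own representations. The signal values and feature choices below are invented for illustration:

```python
import statistics

def hand_crafted_features(signal):
    """Summary features a traditional ML model might consume.

    A Deep Learning model would instead take the raw `signal` as input
    and learn useful representations on its own.
    """
    return {
        "mean": statistics.mean(signal),
        "stdev": statistics.pstdev(signal),
        "min": min(signal),
        "max": max(signal),
    }

raw_signal = [0.0, 1.0, 2.0, 3.0, 4.0]  # hypothetical raw sensor readings
features = hand_crafted_features(raw_signal)
print(features)
```

Every choice in `hand_crafted_features` (which statistics, over which window) is a design decision the engineer must get right; that is exactly the burden Deep Learning shifts onto the model.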
Data makes them superior
It has been shown empirically that the performance of Deep Learning methods, in which a neural network is trained to learn from data, improves significantly as more data becomes available. When a huge amount of data is available, Deep Learning based models usually show superior performance compared to traditional Machine Learning methods. Andrew Ng compares Deep Learning and traditional Machine Learning by illustrating this performance-versus-data pattern in his Coursera Deep Learning course:
This increase in performance with the availability of massive amounts of data is one of the main reasons Deep Learning is considered the better choice most of the time, as opposed to traditional Machine Learning.
What changed in favor of Deep Learning?
Let’s talk about the volume of data, which is growing at the speed of light! Big data describes large volumes of data, whether structured or unstructured. Here is some information about how fast data is growing.
Since 2013 (with almost 4.4 zettabytes of data), the amount of data has grown 10x, reaching 44 zettabytes by 2020!
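A quick back-of-the-envelope check on those figures: going from 4.4 to 44 zettabytes over the seven years from 2013 to 2020 implies roughly a 39% compound annual growth rate.

```python
# Implied compound annual growth rate (CAGR) for the 10x growth
# from 4.4 ZB (2013) to 44 ZB (2020).
start_zb, end_zb, years = 4.4, 44.0, 2020 - 2013

cagr = (end_zb / start_zb) ** (1 / years) - 1
print(f"{cagr:.1%} per year")  # roughly 39% annual growth
```

In other words, the world's data pile growing by more than a third every single year.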
The data created in the last two years exceeds the data generated in the entire human history before that.
With such impressive growth of data, it’s no wonder Deep Learning is progressing fast, as its main nutrient is data!
Training deep neural networks needs massive computational power, which used to be a setback back in the day. Over the last decade, and even just over the last two years, computational power has increased drastically while its cost has decreased. Google gives decent free (although limited!) GPU access via Google Colaboratory to anyone with a Gmail account! It’s fascinating! With Google Colab, basically anyone around the world can do Deep Learning at a decent level. The cost of other services such as AWS, Azure, and GCP is reasonable, even if not ideal!
The above figures show the eye-catching progress of computing, and that’s only up to 2010. The drastically increasing trend of computing power, driven by the invention of GPUs and TPUs, has helped AI progress faster than anyone could have anticipated. The trend shows a 10x increase each year. In fact, three different technologies are competing in the race of AI:
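To put that trend in perspective, taking the 10x-per-year figure above at face value, a quick calculation shows it corresponds to computing power doubling roughly every three and a half months:

```python
import math

# If compute grows 10x per year, it doubles log2(10) ≈ 3.32 times a year.
doublings_per_year = math.log2(10)
months_per_doubling = 12 / doublings_per_year

print(f"doubling every {months_per_doubling:.1f} months")  # → every 3.6 months
```

That doubling pace is far faster than the classic Moore's-law cadence of roughly every two years.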
These three technologies are expected to keep surprising stakeholders with how fast computational power can increase!
Novel Approaches: One important aspect
Well, although the advent of big data and of powerful, less costly computational resources seems to be the main factor in the advancement of Deep Learning, there is another important point here. These two factors, in addition to the superior performance of Deep Learning, provided great opportunities and motivation for researchers to dig deeper and dedicate a lot of effort to Deep Learning. This has led to excellent algorithms and models that do the job. Of course, the precursor of these efforts was the advent of big data and the advancement of computing technology.
According to the Artificial Intelligence Index Report 2019 by Stanford:
- Attendance at AI conferences continues to increase significantly. For example, NeurIPS attendance has increased by 800% compared to 2012, and major conferences such as CVPR are expected to see an annual attendance growth rate of 30%.
- Between 1998 and 2018, the volume of peer-reviewed AI papers grew by more than 300%.
- In the US, the share of AI-related jobs has increased drastically.
It is obvious that such an increasing trend of AI-related effort, by itself, advances this area faster than many other branches of science.
We have talked about the advantages of Deep Learning over traditional Machine Learning. But no matter how much data we have and how advanced the technology is, Deep Learning by itself is NOT a panacea. Although it currently does much better than traditional Machine Learning, there are still many situations where we may not have access to enough data, or where the nature of the data is complicated and we have to conduct many different data preprocessing steps. These days, however, the combination of the human brain and Deep Learning has proven superior to the human brain alone in a lot of applications.
Do you have anything else in mind? Do you believe everything is covered, or are there still important factors left behind? What do you find superior in Deep Learning compared to traditional Machine Learning?