The academic tip: What is Deep Learning?

This is a guest post from Jacques Zuber, Data Science Teacher at HEIG-VD.

Deep learning, also called hierarchical learning, is now a popular trend in machine learning. Recently, during the Swiss Analytics Meeting, Prof. Dr. Sven F. Crone presented how deep learning can be used in industry from a forecasting perspective (beer forecasting for manufacturing, lettuce forecasting for retail outlets, container forecasts). Deep learning has a variety of applications, for example image and handwritten character recognition. It can analyse a picture and conclude whether it shows a dog, a human or something else. After a learning process, it first understands your handwriting and can then read and interpret a draft you have quickly written. But what exactly is deep learning?

Deep learning plays an important role in artificial intelligence. It is considered a method of machine learning and, roughly speaking, means neural networks. More precisely, artificial neural networks are intended to simulate the behaviour of biological systems: they are composed of multiple layers of nodes (or computational units), usually interconnected in a feed-forward way, where each node in one layer has directed connections to the nodes of the subsequent layer. Feed-forward neural networks can be considered a type of non-linear predictive model that takes inputs (very often huge amounts of both labelled and unlabelled data) and transforms and weights them through many hidden layers to produce a set of outputs (predictions). The use of a sequence of layers, organised in deep or hierarchical levels, explains the term « deep learning ». Each layer receives as input the information produced by the previous layer, transforms and refines it, and passes it on to the following layer, as sketched below.
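To make the forward pass concrete, here is a minimal sketch in Python with NumPy. The layer sizes, the sigmoid activation and the random weights are illustrative assumptions; in a real application the weights would be learned from data during training rather than drawn at random.

    import numpy as np

    def sigmoid(z):
        # Non-linear activation applied element-wise at each layer
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(0)

    # Illustrative network: 4 inputs -> two hidden layers (8 and 6 nodes) -> 3 outputs.
    # In practice these weights would be learned from data, not drawn at random.
    W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
    W2, b2 = rng.normal(size=(6, 8)), np.zeros(6)
    W3, b3 = rng.normal(size=(3, 6)), np.zeros(3)

    def forward(x):
        # Each layer weights the previous layer's output, adds a bias and
        # applies a non-linearity before passing the result on.
        h1 = sigmoid(W1 @ x + b1)   # first hidden layer
        h2 = sigmoid(W2 @ h1 + b2)  # second hidden layer
        return W3 @ h2 + b3         # output layer (raw predictions)

    x = rng.normal(size=4)          # one input example
    print(forward(x))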

We consider the well-known digital image recognition problem to illustrate how deep learning works in practice. In the first hidden layer, the network analyses the pixels and classifies them, for example by colour. The results are then studied in the second layer to identify relevant relationships; for instance, some lines and shadowing effects are detected. A third hidden layer analyses and combines these lines and curves to discover forms such as human faces. New layers can be added to improve and refine the deep learning model so that it discovers better patterns. This process can continue until the network generates an output from which the nature of the picture can be identified (a dog, a cat or a human, for example).
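As a rough illustration of how such a stack of layers is built and trained in practice, here is a brief sketch using the Keras API of TensorFlow (mentioned further below) on the classic handwritten-digit images. The layer sizes, activations and number of training epochs are illustrative assumptions, not a recommendation.

    import tensorflow as tf

    # Load the MNIST handwritten-digit images (28x28 greyscale, labels 0-9).
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

    # A small feed-forward network: pixels -> two hidden layers -> 10-class output.
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),    # pixels as one vector
        tf.keras.layers.Dense(128, activation="relu"),    # first hidden layer
        tf.keras.layers.Dense(64, activation="relu"),     # second hidden layer
        tf.keras.layers.Dense(10, activation="softmax"),  # one score per digit
    ])

    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    model.fit(x_train, y_train, epochs=5)   # the learning process
    model.evaluate(x_test, y_test)          # how well it recognises new images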

To conclude this academic tip: deep learning is a learning mechanism that is very attractive and effective for almost any machine learning and Internet of Things (IoT) task, especially classification. But it needs a lot of data and requires very long model training times, especially when the number of hidden layers is large. Nevertheless, the availability of new hardware, particularly GPUs, and modern parallel computing have made computations much cheaper and faster.

Neural network models are very flexible but typically over-parametrized. They are so-called « black-box » models, and their results are not always human-interpretable, even for an expert.

Recent developments have been carried out to improve deep learning methods and algorithms. Several libraries are now available, for example the open-source TensorFlow developed by Google, and MXNetR, darch, deepnet, h2o and deepr, packages for the free statistical computing system R. Deep learning is also found in commercial business software packages, for example SAS Enterprise Miner.

This article was originally published in the Swiss Analytics Magazine.
