
What is AI; Specifically, What is Machine Learning?

Is there really any learning going on? Are they getting smarter?

By VICTOR ANJOS

Artificial intelligence is the future. Artificial intelligence is science fiction. Artificial intelligence is already part of our everyday lives. All of those statements are true; it just depends on what flavor of AI you are referring to.

Most of us are familiar with the term "Artificial Intelligence." After all, it's been a popular focus in movies such as The Terminator, The Matrix, and Ex Machina, but you may recently have been hearing about other terms like "Machine Learning" and "Deep Learning," sometimes used interchangeably with artificial intelligence. As a result, the difference between artificial intelligence, machine learning, and deep learning can be very unclear.

Over this multi-part series, I've been giving a quick explanation of what Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) actually mean and how they're different, and explaining some of their applications, current best practices, paradigms and uses.

We started this series by examining what Artificial Intelligence is, and followed that up by telling you what Machine Learning (ML) is. In today's final article, now that we know what Artificial Intelligence and Machine Learning are, let's figure out how they fit in with Deep Learning (DL) as a new paradigm (and if we have time, let's briefly talk about the internet of things, IoT, as well).

What is Deep Learning (DL)?

In the most basic terms, Deep Learning can be explained as a system of probability. Based on a large dataset you feed it, it is able to make statements, decisions or predictions with a degree of certainty. So the system might be 78% confident that there is a cat in the image, 91% confident that it's an animal and 8% confident it's a toy. You can then add a feedback loop on top of that, telling the machine whether its decisions were correct. That enables learning and the possibility of modifying the decisions it makes in the future.
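To make that concrete, here's a toy sketch in Python. It is entirely illustrative: the confidence scores are hard-coded rather than produced by a real trained model.

```python
# Toy sketch of a system that makes statements "with a degree of
# certainty", plus a feedback loop. Scores are hard-coded for
# illustration; a real model would compute them from the input.
def classify(image):
    return {"cat": 0.78, "animal": 0.91, "toy": 0.08}

def feedback(prediction, true_label):
    """Tell the system whether its most confident decision was correct."""
    best_guess = max(prediction, key=prediction.get)
    return best_guess == true_label

scores = classify("photo.jpg")
print(scores)                      # {'cat': 0.78, 'animal': 0.91, 'toy': 0.08}
print(feedback(scores, "animal"))  # True: the top-scoring label matched
```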

Deep artificial neural networks are a set of algorithms that have set new records in accuracy for many important problems, such as image recognition, sound recognition, recommender systems, etc. For example, deep learning is part of DeepMind’s well-known AlphaGo algorithm, which beat the former world champion Lee Sedol at Go in early 2016, and the current world champion Ke Jie in early 2017.

Deep is a technical term. It refers to the number of layers in a neural network. A shallow network has one so-called hidden layer, and a deep network has more than one. Multiple hidden layers allow deep neural networks to learn features of the data in a so-called feature hierarchy, because simple features (e.g. two pixels) recombine from one layer to the next, to form more complex features (e.g. a line). Networks with many layers pass input data (features) through more mathematical operations than networks with few layers, and are therefore more computationally intensive to train.
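As a rough illustration of what "more layers" means mechanically, here's a minimal NumPy sketch (random, untrained weights; the layer sizes are arbitrary). The deep network simply routes the same input through more weighted layers, and therefore more matrix multiplications, than the shallow one.

```python
# "Deep" means more hidden layers. Both networks below map a 4-feature
# input to 2 outputs; the deep one passes it through three hidden layers
# instead of one, i.e. through more mathematical operations.
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    # Random (untrained) weights and zero biases for one dense layer.
    return rng.normal(size=(n_in, n_out)), np.zeros(n_out)

def forward(x, layers):
    for W, b in layers:
        x = np.maximum(0, x @ W + b)  # ReLU activation after each layer
    return x

x = rng.normal(size=4)
shallow = [layer(4, 8), layer(8, 2)]                         # 1 hidden layer
deep = [layer(4, 8), layer(8, 8), layer(8, 8), layer(8, 2)]  # 3 hidden layers
print(forward(x, shallow), forward(x, deep))
```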


The Neural Network

The building block of such deep learning algorithms is the Neural Network; it is inspired by our understanding of the biology of our brains, namely all those interconnections between the neurons. But, unlike a biological brain, where any neuron can connect to any other neuron within a certain physical distance, these artificial neural networks have discrete layers, connections, and directions of data propagation.

Feeding the Neural Network the data it requires

You might take an image and chop it up into a bunch of tiles that are fed into the first layer of the neural network. The individual neurons in the first layer then pass the data to a second layer. The second layer of neurons does its task, and so on, until the final layer produces the final output.

Each neuron assigns a weight to its input (i.e. how correct or incorrect it is relative to the task being performed), and the final output is then determined by the total of all the network's weights.
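In code, a single artificial neuron of this kind might look like the following sketch. The numbers are placeholders, not values from any real network.

```python
# One artificial neuron: weight each input, sum with a bias, then squash
# the total with an activation function. All numbers are placeholders.
import math

def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid activation, output in (0, 1)

# Three inputs from the previous layer, each with its own weight.
print(neuron([0.5, 0.9, 0.1], [0.8, -0.2, 0.4], bias=0.1))
```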


As an example, think of a network trying to figure out what kind of car it is being "shown." Attributes of the car's image are chopped up and "examined" by the neurons: its car-like shape, the specific type of front grille, its distinctive lettering, and its motion or lack thereof. The neural network's task is to conclude whether this is an Audi A7 or any other car in its "memory."

It comes up with a "probability vector," really a highly educated guess, based on the weightings. In our example the system might be 86% confident the image is an Audi A7, 7% confident it's a Volkswagen Passat, 5% confident it's a cruise missile, and so on, and a feedback signal then tells the neural network whether it was right or not.
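A common way to produce such a probability vector is the softmax function, which turns a network's raw output scores into confidences that sum to 100%. Here's a small sketch with made-up scores, chosen so the output lands near the split in the example above.

```python
# Softmax turns raw scores into a probability vector that sums to 1.
# The raw scores below are invented so the output roughly matches the
# 86% / 7% / 5% split used in the example.
import numpy as np

def softmax(scores):
    exps = np.exp(scores - np.max(scores))  # subtract max for numerical stability
    return exps / exps.sum()

labels = ["Audi A7", "Volkswagen Passat", "cruise missile", "other"]
raw_scores = np.array([5.0, 2.5, 2.2, 1.0])
for label, p in zip(labels, softmax(raw_scores)):
    print(f"{label}: {p:.0%}")
```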

Why Neural Networks are so “new” to AI

The problem was that even the most basic neural networks were very computationally intensive; it just wasn't a practical approach. Still, a small, heretical research group led by Geoffrey Hinton at the University of Toronto kept at it, finally parallelizing the algorithms for supercomputers to run and proving the concept, but it wasn't until GPUs (Graphics Processing Units) were deployed in the effort that the promise was realized.

Getting the Neural Network trained

If we go back again to our car example above, chances are very good that as the network is being tuned or "trained," it's coming up with wrong answers, a lot of them. What it needs is more training.

It needs to see hundreds of thousands, even millions of images, until the weightings of the neuron inputs are tuned so precisely that it gets the answer right practically every time: fog or no fog, sun or rain, moving or stationary, and so on. It's at that point that the neural network has taught itself what an Audi A7 looks like; or your mother's face in the case of Instagram; or a cat or baby in the case of Facebook.
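The mechanics behind that tuning are usually some form of gradient descent: show an example, measure the error, and nudge the weights to reduce it. Here's a deliberately tiny sketch with a single weight; real deep networks do the same thing, via backpropagation, for millions of weights.

```python
# Gradient descent on a single weight: the "network" y = w * x learns
# the rule y = 2x from examples. Each wrong answer nudges the weight a
# little; after many passes it gets the answer right almost every time.
examples = [(x, 2 * x) for x in range(1, 6)]  # (input, correct answer)
w, lr = 0.0, 0.01                             # initial weight, learning rate

for epoch in range(200):                      # many passes over the data
    for x, y_true in examples:
        error = w * x - y_true                # how wrong was the guess?
        w -= lr * error * x                   # gradient step on squared error
print(round(w, 3))                            # ~2.0: the weight is "tuned"
```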

These techniques focus on building Artificial Neural Networks with several hidden layers. Using different weights, biases, numbers of neurons, activation functions (ReLU, Sigmoid, etc.) and optimizers (SGD, Adam, etc.), a Deep Neural Network tries to come up with the best possible outcome for the given problem.
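As a sketch of what assembling such a network looks like in practice, here's a minimal example using the Keras API (assuming TensorFlow 2.x is installed; the layer sizes, activations and optimizer are illustrative choices, not recommendations).

```python
# Minimal Keras sketch of a deep neural network classifier
# (requires TensorFlow 2.x; all hyperparameters are illustrative).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),                     # e.g. a flattened 28x28 image
    tf.keras.layers.Dense(128, activation="relu"),    # hidden layer 1
    tf.keras.layers.Dense(64, activation="relu"),     # hidden layer 2
    tf.keras.layers.Dense(10, activation="softmax"),  # 10 class probabilities
])

# The optimizer (Adam here; SGD is another common choice) adjusts the
# weights and biases to minimize the loss over the training data.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=5)  # training would happen here
```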

As the amount of data grows, Deep Learning models tend to outperform traditional Machine Learning models.

Internet of Things

Finally, let's end with a very small introduction to a term you've undoubtedly heard: the internet of things (IoT). This is the concept that the devices and objects you interact with, or that are part of a system, process or industry, can communicate with each other and with cloud storage and compute, in order to create experiences, intelligence and actionable data where that wasn't previously possible.

The internet of things extends the concept of the internet itself (i.e. thousands of disparate people or computing devices connected to form a network) to endpoints that are not human interaction devices. Think of your refrigerator, a car or part of an industrial process that's going to generate data, which may, in the short term, allow you to enable or disable something.

Over a prolonged period of time, it generates data that you can then reference to look for trends and meta-trends, allowing you to find correlations. A real-world example: in a factory, the temperature of a particular conveyor belt seems to change with the time of day. This raises the question: do we have an issue with the environmental control system in that building? It's that kind of second-order insight that generates some of the huge value in IoT.
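That kind of trend-hunting can start very simply. Here's a hypothetical sketch, with field names and readings invented for illustration, that groups conveyor-belt temperature readings by hour of day.

```python
# Hypothetical second-order IoT analysis: average conveyor-belt
# temperature by hour of day to see whether it drifts on a daily cycle.
from collections import defaultdict

readings = [  # (hour_of_day, temperature_celsius) - invented sample data
    (9, 31.0), (9, 31.4), (13, 35.2), (13, 35.8), (17, 33.1), (17, 32.7),
]

by_hour = defaultdict(list)
for hour, temp in readings:
    by_hour[hour].append(temp)

for hour in sorted(by_hour):
    temps = by_hour[hour]
    print(f"{hour:02d}:00  avg {sum(temps) / len(temps):.1f} C")
# A consistent midday bump would point back at the building's
# environmental control system, as described above.
```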

This becomes very useful for any of the methods we have looked at in these past three articles, as it is a wonderful data source (or set of data sets) for the algorithms. Now you can get any device that has sensors connected to it to be part of the "learning" and have predicted or prescribed outcomes for that device.

Conclusion

It has been disconcerting to me that so many people talk about AI, ML and DL all over the news, yet whenever I ask them (not just laymen, but even people in the industry) to explain the different concepts, as I have over the past few weeks, they mostly fail to do so.

There is a ton of information out there; however, my own personal neural network, my brain, has just distilled all of that input into one neat little package that you can consume much more easily!

Enjoy and stay tuned for more things Artificially Intelligent (and Blockchain too for good measure)!

So what do you do when all signs point to having to go to university to gain any sort of advantage? Unfortunately, the current state of affairs is that most employers will not hire you without a degree, even for junior or starting jobs. Once you have that degree, coming to my Mentor Program at 1000ml, with our patent-pending training system, the only such system in the world, is the way to gain the practical knowledge and experience that will jump-start your career.

Check out our next dates below for our upcoming seminars, labs and programs; we'd love to have you there.

Be a friend, spread the word!
