Machine Learning has (once again, see AI Winter) become a very popular topic, often used interchangeably with related terms such as Artificial Intelligence (AI) and the more intriguing "Deep Learning".
How are these concepts related to each other? You can think of them as a set of Russian dolls nested within each other, beginning with the smallest and working outward.
Deep Learning is a subset of Machine Learning, and Machine Learning is a subset of AI, which is an umbrella term for any computer program that does something smart. In other words, all Machine Learning is AI, but not all AI is Machine Learning.
It is important to understand that a computer can be coded to do something smart (e.g. play chess, solve a Rubik's Cube) with just a sequence of if-then-else rules, but that has nothing to do with the human capacity to learn and autonomously "change" behavior based on new data.
Machine Learning: A Brain Model
To build a "learning" machine, scientists started by creating a model of how our brain works.
Specifically, they wanted to replicate the mechanism of neurons that receive and transmit signals and have the ability to create interconnections that can be promoted or demoted based on specific desired outcomes.
So, the first mathematical model of a single neuron was created, called the "perceptron":
As you can see, a perceptron has:
- A series of inputs with an associated weight
- An output
- An activation function (that determines the behavior)
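To make those three pieces concrete, here is a minimal sketch of a perceptron in plain Python, trained on the logical AND function with the classic perceptron learning rule. The function names, learning rate, and epoch count are illustrative choices, not from any particular library:

```python
# A single perceptron: weighted inputs, a step activation, one output.

def step(x):
    """Step activation: fire (1) if the weighted sum clears the threshold."""
    return 1 if x >= 0 else 0

def predict(weights, bias, inputs):
    weighted_sum = sum(w * i for w, i in zip(weights, inputs))
    return step(weighted_sum + bias)

def train(samples, epochs=10, lr=0.1):
    """Perceptron learning rule: nudge each weight in proportion to the error."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - predict(weights, bias, inputs)
            # Promote or demote each connection based on the outcome.
            weights = [w + lr * error * i for w, i in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Logical AND: output 1 only when both inputs are 1.
and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train(and_samples)
print([predict(weights, bias, x) for x, _ in and_samples])  # [0, 0, 0, 1]
```

This is exactly the "promoted or demoted interconnections" idea from above: a wrong answer shifts the weights toward the desired outcome, and after a few passes over the data the perceptron reproduces AND correctly.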
When you interconnect a series of perceptrons together you get what is normally called an Artificial Neural Network (ANN).
There's a lot of mathematics here. For a deep dive into the math behind this, I would refer you to this Stanford UFLDL site. But today we are lucky: much of the math has already been figured out, and we can rely on libraries like TensorFlow, so that we can keep our focus on what we want to achieve.
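As a rough sketch of what such a library computes for you, here is a forward pass through a tiny network with one hidden layer, written in NumPy so the math stays visible. The layer sizes and the randomly drawn weights are purely illustrative, not from any real model:

```python
import numpy as np

def sigmoid(x):
    """A smooth activation function commonly used in small neural networks."""
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative shapes: 3 inputs -> 4 hidden neurons -> 1 output.
rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(3, 4))  # one weight per input-to-hidden connection
b_hidden = np.zeros(4)
W_out = rng.normal(size=(4, 1))
b_out = np.zeros(1)

x = np.array([0.5, -1.0, 2.0])      # an example input signal

# Each hidden neuron behaves like a perceptron: weighted sum, then activation.
hidden = sigmoid(x @ W_hidden + b_hidden)
output = sigmoid(hidden @ W_out + b_out)
print(output.shape)  # prints (1,)
```

Training (adjusting all of those weights from data) is where the heavy math lives, and that is precisely what TensorFlow and similar libraries automate.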
Deep Learning is, as we have said, a subset of Machine Learning. It is "deep" because it typically involves many "hidden" layers, like in this picture:
What is important to know is that those multiple hidden layers are able to identify specific "features" of the input and combine them, describing the input in terms of increasingly specific characteristics.
What we obtain is a simulated brain able to learn the characteristics of its inputs on its own, improving as it sees more data over time.
Why does the VMware multi-cloud strategy matter for Machine Learning?
As we have seen, a Machine Learning platform requires the following:
- The ability to “ingest” a huge amount of data
- Compute power (GPU or CPU) to do all that math
- The ability to leverage what it learns and engage with users
That means Machine Learning, and more broadly any modern application architecture, could benefit from multiple types of cloud environments, including private cloud, public cloud, and end-user devices, in order to take advantage of different services, different levels of performance, security, and redundancy, or even different cloud vendors.
VMware can be an important partner for exploring Machine Learning, by providing businesses with consistent infrastructure across clouds, including Amazon Web Services (AWS), IBM Cloud, and more than 4,000 cloud service providers globally who operate as part of the VMware Cloud Provider network.
In addition to providing consistent infrastructure across all of those VMware-based clouds, VMware Cloud also provides consistent operations across all clouds—even those that are not based on VMware technology—such as Microsoft Azure and Google Cloud Platform.
Andrea Siviero is a ten-year veteran of VMware and a senior solutions architect in Professional Services Engineering (PSE) for the Software-Defined Datacenter (SDDC), part of the Global Technical Solutions (GTS) team. Prior to PSE, Andrea spent three years as a pre-sales systems engineer and three years as a post-sales consultant architect for cloud computing and desktop virtualization solutions, focusing on very large and complex deployments, especially for service providers in the finance and telco sectors.