Originally posted by Tobias on LinkedIn – re-edited and released for VMware Cloud Insights.
Data Center <> Cloud
CLOUD!… Great, now that I have your attention, let’s talk about data centers.
Data centers, remember those? Yes, that’s what we called Private Clouds before they were known as clouds.
What about Co-los? Yes, those shared data centers we used before they were called Public Clouds or Cloud Hosted.
It all still starts with the humble data center.
But nobody needs a DC on its own. Data centers exist to run applications that provide business value. In today’s world, though, application innovation is accelerating at an incredible rate, bringing more complexity and ever more pressure on the infrastructure hosting the apps.
Computer Rooms, Applications and Data Centers
DCs started out as computer rooms in the 1940s, housing large mainframes that required an equally large amount of energy to run. As applications grew in complexity, data centers had to grow with them. A lot of innovation happened in this area over the following decades, including the introduction of industry standards. The first commercial blade server shipped in 2001, and that is really when DCs started to become scalable. But application innovation didn’t slow down, which meant more and more specialized components to keep the apps running, ever more complex network requirements, and an extra emphasis on the importance of data and data encryption.
It got to the stage where more and more tools were needed to manage each component, and more staff were required to operate it all, each with deep domain knowledge. For many enterprises this was no longer a cost-effective way to innovate, and it wasn’t nearly quick enough. Software developers needed resources, and they needed them now!
A change needed to happen. Data centers were becoming the bottleneck for application developers, which, in today’s software-defined world, meant a bottleneck for the business as a whole.
And Then, Cloud!
The IT industry then took an amazing leap when servers in the DC stopped being treated like pets and started being treated more like cattle. The underlying hardware was totally commoditized, and the intelligence moved from the hardware layer into the software layer, ushering in the “Software Defined” era.
Companies started to do this in a very big way in their own data centers, including what were then start-ups and are now very large companies like Google and Amazon.
These start-ups had a unique opportunity to redesign the data center from the ground up. They had demanding modern applications, needed them to scale infinitely for a global population, and of course had big budgets. In short, they had no legacy, just a clean slate to take the technology of the time and make it work for today.
As we know, Amazon and Google then opened these up to the public, and to this day they run the most innovative and widely used mega-clouds in the world.
Cloud Becomes Mainstream
The advent of cloud was well overdue, and as we adopted it, we found fantastic scale and a whole host of new and interesting use cases. To anyone who will listen, I still describe cloud as having these three characteristics as an absolute minimum (there’s a rough code sketch of them just after the list):
- Self Service – A portal or API where end users can request services (Day 1)
- Elastic – The services can scale automatically as required and as requested (Day 2)
- Metered – You can measure what is being used, when, and by whom (Day X)
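To make those three characteristics a bit more concrete, here is a minimal, purely illustrative Python sketch. The CloudClient class and its methods are hypothetical stand-ins rather than any real provider’s API; the point is simply that Day 1 is a request through a portal or API, Day 2 is an automatic scaling decision, and Day X is a usage query.

```python
# Purely illustrative sketch of the three minimum cloud characteristics.
# "CloudClient" and its methods are hypothetical, not a real provider API.
from dataclasses import dataclass, field


@dataclass
class CloudClient:
    usage_log: list = field(default_factory=list)

    # Day 1: Self Service - end users request a service via a portal or API.
    def request_service(self, user: str, service: str, size: int) -> dict:
        self.usage_log.append({"user": user, "service": service, "units": size})
        return {"service": service, "units": size, "owner": user}

    # Day 2: Elastic - the service scales automatically as load demands.
    def autoscale(self, deployment: dict, load: float, target: float = 0.7) -> dict:
        if load > target:
            deployment["units"] += 1          # scale out
        elif load < target / 2 and deployment["units"] > 1:
            deployment["units"] -= 1          # scale in
        self.usage_log.append({"user": deployment["owner"],
                               "service": deployment["service"],
                               "units": deployment["units"]})
        return deployment

    # Day X: Metered - measure what is used, when, and by whom.
    def metered_usage(self, user: str) -> int:
        return sum(entry["units"] for entry in self.usage_log
                   if entry["user"] == user)


cloud = CloudClient()
vm = cloud.request_service(user="alice", service="web-vm", size=2)   # Day 1
vm = cloud.autoscale(vm, load=0.9)                                   # Day 2
print(cloud.metered_usage("alice"))                                  # Day X
```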
These cloud characteristics (enabled by Cloud Management) then opened up some really valuable use cases that we still talk about today:
- IaaS, PaaS and SaaS
- Cloud Storage
- DRaaS
- Big Data and Analytics
In my experience though, the data centers underneath most large enterprises are still complex and often very custom things, with some cattle (private cloud), some pets (bare-metal apps) and some dinosaurs (mainframe).
Complexity Reigns
So why would I want to talk about data center innovation? Well, even those shiny new clouds still run a lot of the same things that data centers always have, and unfortunately this means they are still subject to a lot of the challenges of old. These challenges are then compounded when large enterprises run mixed data centers with the weight of legacy still looming over them.
Data centers are extremely hard to predict, too. You need many different skill sets, each with a huge amount of depth. Companies are facing some very real and significant challenges, with operational complexity at the forefront, not to mention the ever-increasing importance of compliance, security and cost.
Artificial Intelligence to the Rescue!
This is where Machine Learning (ML) can help. ML is an application of Artificial Intelligence (AI). It’s essentially a fancy way of saying that the technology will learn and improve from experience. This is a key skill when it comes to navigating complexity.
Imagine if you could plug your brain completely into all of the software of the data center and gain full control. You would know and understand each service, each function, each application, each tick box, each variable, each string, each switch, each flashing light, the lot of it. You could then make a change to one of those configurations, see whether your apps performed better or worse, and adjust until perfect.
Software can now do exactly this, but at a scale and speed that is impossible for a human mind; things change almost instantaneously. This is the basis of AI/ML in the data center, and initial applications have shown some incredible promise.
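As a rough illustration of that learn-and-adjust loop, here is a minimal hill-climbing sketch in Python. It is not how any VMware product works internally; the tuned setting, the KPI function and the numbers are all made up. It simply shows the idea described above: change a configuration value, measure whether the workload does better or worse, keep the change only if it helps, and repeat.

```python
import random

# Hypothetical stand-in for measuring a KPI (e.g. an IOPS or latency score)
# after applying a configuration value. In a real data center this would
# come from live telemetry, not a toy function.
def measure_kpi(cache_size_gb: float) -> float:
    ideal = 48.0                                  # unknown "best" setting
    noise = random.uniform(-0.5, 0.5)             # real measurements are noisy
    return 100.0 - abs(cache_size_gb - ideal) + noise


def tune(setting: float, step: float = 4.0, iterations: int = 50) -> float:
    """Simple hill-climbing: try a change, keep it only if the KPI improves."""
    best_kpi = measure_kpi(setting)
    for _ in range(iterations):
        candidate = setting + random.choice([-step, step])
        kpi = measure_kpi(candidate)
        if kpi > best_kpi:                        # better? adopt the change
            setting, best_kpi = candidate, kpi
        step = max(step * 0.95, 0.5)              # refine as we converge
    return setting


print(f"Tuned cache size: {tune(setting=16.0):.1f} GB")
```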
Applying AI/ML to the DC
The Cloud Management Business Unit at VMware has been working on ML technologies for a while as part of the Self-Driving Data Center initiative. The latest release is called vRealize AI Cloud (OK, yes, we’re now back to the C-word). Initially the technology is focused on tuning vSAN, VMware’s software-defined storage solution.
There are so many potential configurations for vSAN (typical of any software today) that things often get left mostly at their defaults. Applying vRealize AI Cloud to a vSAN cluster, so that it is tuned specifically for the enterprise’s key performance indicators (KPIs), has so far delivered a staggering 60% average improvement in performance.
Yes, you could go in and manually apply these configuration changes, but, like a living organism, the AI then keeps tuning the environment constantly. Does your DC configuration ever stay the same? It doesn’t, and it shouldn’t. We need to adjust continually as the application requirements change. In this example, the amount of monitoring required for the vSAN platform also drops dramatically, because the AI gets ahead of potential problems.
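To illustrate that “living organism” point (again a purely hypothetical sketch, not the product’s internals), a continuous loop like the one below keeps watching a KPI and triggers a re-tune whenever it drifts, which is exactly the kind of routine monitoring you would otherwise do by hand.

```python
import random
import time

# Hypothetical KPI probe; in a real environment this would read live telemetry.
def measure_kpi() -> float:
    return 90.0 + random.uniform(-15.0, 5.0)

# Stand-in for re-running a tuning routine like the hill-climbing sketch above.
def retune() -> None:
    print("KPI drifted beyond tolerance, re-tuning the configuration...")

baseline = measure_kpi()
for _ in range(5):                     # in practice this loop never ends
    time.sleep(1)                      # polling interval
    kpi = measure_kpi()
    if kpi < baseline * 0.9:           # degraded by more than 10%?
        retune()
        baseline = measure_kpi()       # establish a fresh baseline afterwards
```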
What’s Next?
We can now start looking forward to a world where this type of technology is scaled out across the rest of the DC: continually self-tuning, self-healing, self-service data centers.
More and more focus can then be placed on the applications themselves. I believe this will impact the data center industry even more than when we first started talking about “Cloud”.
Maybe it’s time for another buzzword, then? Better get your umbrella, clouds are about to be… Thunderstruck!
Thank you for taking the time to read this post. It is the first of many in a series dedicated to Artificial Intelligence, so please look out for more as they’re released.