A VMware-sponsored eBook, “New Best Practices in Virtual and Cloud Management” by author and vExpert Greg Shields, presents the fundamental challenges and opportunities organizations face in this new era of managing highly dynamic virtualized and cloud environments (if you don’t know Greg, check out his bio). In this blog we’d like to point out some of the key takeaways, show how they align with the VMware strategy for Operations Management, and further clarify our vision and strategy in this new era of IT.
The first chapter “New Best Practices in Performance Management” focuses on the following key points:
A) Complexity in a virtualized cloud environment:
Accurately understanding a virtual environment is an arduous task for many IT practitioners, especially when system performance becomes erratic. Most hypervisor management solutions on the market produce metrics that are difficult to comprehend and not actionable, and this is where they fail to provide real value to the IT practitioner. The situation creates a pressing need for a sophisticated solution that enables effective performance management, as Shields notes: “It should be obvious at this point that the casual monitoring of raw metric data very quickly grows futile as an environment’s interdependencies increase.”
B) Implications of Big Data and the need for a comprehensive analytics foundation:
An estimated 90% of the data in the world today has been produced over the last two years, and this “big data” is getting bigger by the day. A virtualized environment also produces huge volumes of data, all of which needs to be correlated intelligently. The author refers to the concept of a “Black Box”, which is akin to VMware’s use of behavioral analytics to distill all those raw numbers into manageable and meaningful metrics. The aim is to leverage automated operations to obtain simple “actionable intelligence” that can be acted upon swiftly. Numerous solutions claim to have this capability, but it is easier said than done: building such an analytics engine is an effort of herculean proportions. The system should accurately measure network, storage, memory, and other resources, and factor in the dependencies that exist between the various components of an IT environment.
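To make the behavioral-analytics idea concrete, here is a minimal sketch of one common building block: learning a rolling baseline for a single metric and flagging samples that drift well outside it. This is an illustrative simplification, not VMware’s actual analytics engine; the class name, window size, and sensitivity factor are all assumptions made for the example.

```python
from collections import deque
from statistics import mean, stdev

class AdaptiveBaseline:
    """Illustrative rolling baseline for one metric: flags samples that
    drift beyond k standard deviations from recent history."""

    def __init__(self, window=60, k=3.0):
        self.samples = deque(maxlen=window)  # recent history only
        self.k = k                           # sensitivity factor

    def observe(self, value):
        """Record a sample; return True if it looks anomalous
        against the baseline learned so far."""
        anomalous = False
        if len(self.samples) >= 10:  # need enough history to judge
            mu = mean(self.samples)
            sigma = stdev(self.samples)
            anomalous = sigma > 0 and abs(value - mu) > self.k * sigma
        self.samples.append(value)
        return anomalous
```

Because the baseline is learned from each metric’s own recent behavior rather than from fixed limits, the same code adapts to environments with very different normal operating ranges.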
C) Dynamic Performance Counters:
The author introduces the concept of a generic metric that an infrastructure management solution measures from all parts of the virtual environment, with acceptable thresholds defined to quantify performance levels. This measurement helps an IT administrator see and understand what is happening in the virtual environment.
The monitoring system should also be able to trace and factor in the dependencies between components. Once these metrics and their related dependencies are in place, each component in the system can display a green/yellow/red light to indicate its current state. This color-coding quickly enables an IT administrator to spot a problem, trace the issue to its root, and eventually fix it. These metrics need to be tuned carefully, as different environments have different business cycles and thresholds can be very dynamic; the analytics system should have adaptive flexibility built in.
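The green/yellow/red idea above can be sketched in a few lines: classify each raw metric against two thresholds, then let a component’s effective state degrade to the worst state among the components it depends on. All names, thresholds, and the dependency map here are hypothetical, chosen only to illustrate the pattern.

```python
def classify(value, warn, crit):
    """Map a raw metric to a traffic-light state using two thresholds
    (illustrative only -- real thresholds would be tuned per environment)."""
    if value >= crit:
        return "red"
    if value >= warn:
        return "yellow"
    return "green"

SEVERITY = {"green": 0, "yellow": 1, "red": 2}

def effective_state(component, own_states, dependencies):
    """A component is only as healthy as its own metric or the worst
    of the components it depends on (recursively)."""
    worst = own_states[component]
    for dep in dependencies.get(component, []):
        dep_state = effective_state(dep, own_states, dependencies)
        if SEVERITY[dep_state] > SEVERITY[worst]:
            worst = dep_state
    return worst
```

For example, if a hypothetical VM depends on a host, and the host on a datastore that has gone red, the VM’s effective state surfaces as red even though its own metrics are green, which is exactly the root-tracing behavior described above.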
Virtualization has tremendous potential to change the way information technology is delivered, consumed and managed. A virtualized cloud environment can significantly transform performance management, capacity management, compliance management and workload automation, but to achieve these objectives in practice, a powerful infrastructure management solution is essential. Many companies are not getting this next step right, and organizations urgently need to become truly “virtualized” by implementing a state-of-the-art infrastructure management solution.
This is where VMware’s “Software Defined Datacenter” management approach (see the blog post from The Virtualization Practice) can prove instrumental in your organization’s “journey to the cloud”.