
VMworld 2015 Healthcare Sessions: Voting is open.

For many of us, presenting in front of large groups does not come easily.  Most of us, at least those who came up through the technical ranks, would be just as happy at a keyboard or whiteboard, designing, building, and bringing fun technology to life.  Many of us have realized, though, that we have interesting stories, experiences, and ideas to share.  For those of us who believe that VMware technologies are among the most fascinating and innovative, the opportunity to present at VMworld is an accomplishment and a great honor.

VMworld session abstracts are submitted several months before the conference.  The process is rather time consuming, and challenging for first timers, but rewarding if selected.  The abstracts are then reviewed by experts and voted on by the community, which adds value to the sessions since you have a say in the process.  If a session is selected, the presentation is written, reviewed, rehearsed, and finally presented.  The process is the same for customers, employees, and partners alike: the attendees decide what sounds interesting.

This year we are seeing a number of healthcare sessions, over 30 in the voting pool.  These range from customer panels to individual submissions on all sorts of topics related specifically to healthcare.  I point this out because they sit in a large pool of sessions covering many topics and many different focuses.  We are seeing a great response from the community to our healthcare team and to the new products focused on managing healthcare environments.  If you plan to be at VMworld, please check out the healthcare sessions and vote for any you find interesting, so we can ensure more focus on what we do and put more emphasis on what you want to hear about.

To vote, go to https://vmworld2015.lanyonevents.com/scheduler/publicVoting.do.  You will need a VMworld account; filter by healthcare and vote for as many healthcare sessions as you want.  Make sure you come visit us at the sessions and let us know what you want to hear about.  This is your conference; we want healthcare to be a much bigger part of VMworld, and we want to see you there.

New Research Highlights Clinical Benefits of Virtual Desktops

By James Millington, group product line manager, healthcare solutions, End-User Computing, VMware

New research carried out by HIMSS Analytics at University Hospitals on behalf of VMware has highlighted six key areas of care provider, patient, and IT benefits following the hospital system's recent VMware Horizon virtual desktop deployment.

HIMSS Analytics interviewed care providers and IT professionals at University Hospitals of Cleveland in order to understand the quantitative ROI of VDI. What they uncovered were numerous benefits including:

  • Workflow improvements that resulted in improved mobility for care providers
  • Substantial time savings from reduced login and application load times
  • More time with patients by healthcare providers


HIMSS Analytics concluded these benefits supported the Health and Human Services Department’s aim of improving the patient experience of care, improving the health of populations and reducing the per capita cost of healthcare.

HIMSS Analytics will host a webinar to discuss the detailed findings of this report on Friday, May 15 at 10 am PT/1 pm ET. The webinar will include several healthcare experts such as:

  • Sean O’Brien, president, Axixe
  • Sean Kelly, assistant professor of Medicine, Harvard University
  • Frank Nydam, chief technology officer, healthcare solutions, VMware

To hear more about how virtual desktop technology is helping to transform clinical care in the real world, sign up to attend the webinar and download the research paper.


How to Build Security Infrastructure for the Threats of Our Time

As discussed in my previous post, security was top of mind for CIOs at HIMSS this year, and the progressive among them are looking to the Financial sector for risk reduction strategies. Here we'll discuss how the openness of the internal environment, and the end users within it, have become the vectors of choice for attackers, and how Microsegmentation can mitigate this risk.

Roughly 100 Million records were stolen in the Healthcare industry last year, and at a conservative cleanup cost of $100 per record, that is a minimum cost of $10 Billion to the industry as a whole. The Financial industry has made a substantial investment in a new security model already because they too are targets. According to the FBI, health records are worth a minimum of $50 per record, five to ten times that of a financial record, which makes Healthcare the new target of choice for organized data thieves.
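The back-of-the-envelope math is easy to verify (the figures are the estimates quoted above, not audited numbers):

```python
# Rough breach-cost estimate using the figures cited above.
records_stolen = 100_000_000       # ~100 Million healthcare records stolen last year
cleanup_cost_per_record = 100      # conservative cleanup estimate, in dollars

total_cost = records_stolen * cleanup_cost_per_record
print(f"${total_cost / 1e9:.0f} Billion")  # -> $10 Billion
```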

How Are We Doing Network Security Today?

The security model we employ today relies heavily on the border where our network meets the Internet. We have set aside Demilitarized Zones with Edge Firewalls and inspection for those systems that are accessed from outside. These Edge Firewall measures are designed to protect the systems themselves, and they are effective at preventing system-level attacks. Unfortunately, the vectors of attack have evolved beyond what this model can protect against. The inside of the environment is generally considered safe, and most internal systems can talk to just about any other internal system, workstations included. This internal openness is being exploited by organized attackers.

Workstations? Why Are We Talking About Workstations?

Workstations are the new target for attack because workstations run programs at the behest of users, and users can be deceived into running undesirable code. Couple that with sophisticated organizations who are writing custom malware designed to evade detection, and we see quite quickly the need for more granular security. The recent breaches are cautionary tales: exploits are deliberate, targeted, and extremely difficult to detect in today’s complex environments without a change in strategy.

It is believed that a large recent breach, which led to roughly 80 Million customer records being stolen, was initiated via emails to employees that linked to malware on false company sites. Once installed, the malware appears to have carried out system-level exploits, gained elevated user privileges, and accessed data directly. The first detection came from observing suspicious database queries, possibly because they were causing performance issues. Similar facts are surfacing in another, similar case.

I recently overheard a conversation at a customer about suspicious Internet traffic originating from malware on a user workstation. It was detected at the edge, but there was no way to know what internal communication the malware was performing. This is happening all around us, and we are ill-equipped to prevent it without a new security model. This is the new reality: targeted exploits are a fact of life, and the only successful strategy is one of mitigation and detection.

How Do We Reduce This Risk?


NSX policies govern applications and users, allowing granular internal communications.

The new strategy is called Microsegmentation: granular security policy, which requires that we understand the traffic in a new way. It's not enough to understand the IP address and port of internal network traffic, because building policy that way is unsustainable: too many rules, impossible to maintain. We need to understand the traffic and apply policy at a higher level: users and applications.

As the datacenter has become more heavily virtualized, upwards of 90% for many of our customers, the virtual infrastructure sees orders of magnitude more traffic than the edge infrastructure where monitoring is typically implemented. The virtual infrastructure also understands higher level components: the systems from which traffic originates, groups of systems, their locations, and other information that we can attach. We can also understand the user that is initiating a communication.

Knowledge of systems, applications, users, and other information allows us to build policy that is much more sophisticated. We can create a rule that says only Finance employees can even initiate communication with a Finance system but only for application level communication, never for system level. We can create a rule that limits administrative communication to select administrators or administrative workstations; no regular end user workstations could make system calls of any kind to the datacenter. For virtualized desktops, we can create a rule that prevents any virtual desktop from talking to any other, eliminating a key vector of internal propagation. We can create active policies that restrict or log communications when suspicious activity is detected, and we can pass suspicious traffic to Intrusion Detection and Mitigation solutions for further inspection and alerting.
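The rules described above can be sketched as data. What follows is a toy illustration, in Python, of policy applied to objects rather than IPs and ports; the group names, roles, and rule structure are invented for illustration and are not the NSX API.

```python
# Illustrative only: a toy model of object-based policy, not the NSX API.
# Rules match on security groups, user roles, and communication level
# instead of IP addresses and ports.
from dataclasses import dataclass

@dataclass
class Flow:
    source_group: str   # e.g. "finance-desktops", "vdi-desktops" (hypothetical names)
    user_role: str      # e.g. "finance", "admin"
    dest_group: str     # e.g. "finance-app", "datacenter"
    level: str          # "application" or "system"

# (source group, user role, destination group, allowed level)
RULES = [
    ("finance-desktops", "finance", "finance-app", "application"),
    ("admin-workstations", "admin", "datacenter", "system"),
]

def allowed(flow: Flow) -> bool:
    # Desktop-to-desktop traffic is blocked outright, eliminating that
    # internal propagation vector.
    if flow.source_group == "vdi-desktops" and flow.dest_group == "vdi-desktops":
        return False
    return any(
        flow.source_group == src and flow.user_role == role
        and flow.dest_group == dst and flow.level == lvl
        for src, role, dst, lvl in RULES
    )

# Finance user, application-level traffic to the Finance system: allowed.
print(allowed(Flow("finance-desktops", "finance", "finance-app", "application")))  # True
# The same user attempting system-level access: denied.
print(allowed(Flow("finance-desktops", "finance", "finance-app", "system")))       # False
```

The point of the sketch is that rules reference meaningful objects (groups, roles, levels), so the rule set stays small and readable as the environment grows.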

This is the essence of Microsegmentation using the VMware NSX Distributed Firewall: policies applied to objects, enabling sustainable, granular policy that is easy to build and maintain. It's a powerful new way to effect modern security policy and mitigate the risks of our time. It's a framework into which the rest of your security solutions can attach and interact in new, intelligent ways. The Financial industry has already implemented these strategies to significantly limit internal traffic, and Healthcare is exploring them in earnest for the same reasons.

Final Thoughts

The recent exploits tell us an important story: data breaches are possible because of our current security model and reliance on edge security, and they are going to continue unless fundamental change is implemented. The best way to reduce our risk is with a new strategy that restricts and studies internal traffic. The benefits realized by the Financial industry serve as great examples, and it’s time for Healthcare to do the same lest we see further headlines and additional costs at a time when the Healthcare industry cannot afford it. We can help you get there. We have the tools and the experience, and we want you all to be successful rising to meet the challenges of today…and tomorrow.

How to Secure Healthcare Mobile App Communications

Security, Mobility, and Physician Experience were top of mind at HIMSS15 in Chicago this week: no one wants to be in a data breach headline, and BYOD programs are proliferating, bringing new challenges and risk. It is perfect timing that we just announced a new way to ensure secure access from mobile device applications all the way to the internal systems they access.

How do mobile devices connect now?

It depends on the device and who owns it:

  • Some are fully managed and Hospital owned, so we can use a Mobile Device Management solution to control everything: all communication is encrypted from the device to a VPN gateway at the edge of our network, and we have total control over every app and capability.
  • More and more are BYOD, so we can’t fully lock them down and secure all traffic, but we can manage aspects of them via MDM. We can then wrap the Hospital apps in a management layer that forces secure communication from the app to a proxy at the edge of our network.

The latter model is emerging as the more popular, and it represents a point of ingress to our datacenter. Beyond the secure tunnel that terminates at the edge of our network, there are seldom restrictions that would prevent the app from talking to things other than what was intended. This is the broader problem of internal network security discussed in Aaron Dumbrow's post a few days ago.

How can we do this better?

Software defined networking via VMware NSX and AirWatch Enterprise Mobility Management are coming together in a new way. By combining what AirWatch does to secure app communication to the edge of our network with NSX's control of the path inside the network, we create a secure communications path from the app on any mobile device all the way to the application being accessed, with no possibility of the app reaching anything other than what we define in policy.

This is great news for Healthcare Mobility strategies in an age of breaches where new devices and apps bring new risks.

How do we learn more?

This solution will be unveiled at RSA 2015 next week. Read more about it here and watch a video that explains it very well.

Software Defined Security

In light of recent increased security breaches in a number of industries, it is a good time to revisit the topic of security and the role of software defined networking in this model.

Healthcare as an industry is heavily regulated, with extreme penalties and a high personal cost of failure in patient trust, provider reputation, and long-lasting, embarrassing headlines. As the saying goes, an ounce of prevention is worth a pound of cure. In many cases the cost of preventing a breach is so much lower that the ratio is even more extreme.

The Traditional Security Model

One of the lessons we have learned from many past attacks is that the weakest link in any security model is the people. No matter how good the security, all it takes is one compromised password, one path in through the firewall, and in many environments there is little to stop a would-be attacker from having free rein.

For a number of years, we have written our security models around our applications. In healthcare we often deal with an Electronic Health Records application and many ancillary applications around it, which makes securing the individual ancillary applications more challenging. In the traditional model, we have isolated systems by their function, as described below.


In the Web/Application/Database model, the web server(s) sit out front, typically in a DMZ, and this tier is generally the most scalable. Behind another firewall, the application server serves up the front-end application for the users; it is generally more powerful, and still scalable, though often somewhat less so than the web tier. Behind yet another firewall sits the database server, typically the most powerful but the most difficult to scale. This model often requires several hardware firewalls, or a larger firewall with multiple modules or line cards. It is a highly secure model for the most part, but it can also be quite costly.

Virtual LANs, or VLANs, are often used as a lesser form of security: traffic can be forced to a router or firewall through the use of the VLAN tag, separating the different levels of traffic, and stateful packet inspection can be done at that point. This is a potentially less expensive model, but it increases latency (albeit very minimally) and increases network traffic back to the router or firewall. It also increases the complexity of the design, introducing a number of new challenges.

The NSX Security Model

Interestingly enough, the NSX security model doesn't materially change the logical design. We still see the same concepts being used, with multiple firewalls and potentially multiple layer 3 networks as needed. VLANs can be used in a similar fashion, but with reduced design complexity and cost, and increased security.

NSX Web App DB

Physically, however, this is a significant change. From a performance perspective, everything happens within the virtual environment. Much of the East-West traffic can be handled without ever leaving the virtual environment, at line speed. Firewall communication is no longer a bottleneck, since packets can be inspected as they move between servers. This can be used to remove objections from vendors or application administrators.

From a security perspective, we are suddenly able to provide a true zero-trust model. We can literally inspect packets flowing between neighboring servers on the same subnet and VLAN. This enables us to assume that every packet is potentially compromised and look at it when it leaves the source, and again when it arrives at the destination. This is all done at the virtual hardware layer, removing the concern that compromising the guest OS could disable the security layer.

The case for NSX in Healthcare

When we talk to our customers about NSX, the objections often come from the perimeter firewall team or the network team. A common concern is that NSX is looking to displace existing firewalls, switches, and other network devices. We also hear the network and security teams' concern that NSX takes away their visibility into the virtual environment; somehow, by implementing NSX, they are losing something. This is simply not the case. One of the driving principles of NSX is that it brings the network and security teams to the table in the virtualization discussion. No longer are we just requesting ports and VLANs. No longer can we afford to assume that things are secure because they are in the virtual environment. With the NSX model, network and security teams are involved in designing, implementing, and managing the virtual environment. Since we are expanding the functionality of the virtual network and increasing visibility, the virtual environment becomes the domain of the entire IT team.

In the healthcare environment, this means that as we bring in new applications, or even retrofit existing ones, we look to the broader team to come together and design security into the application deployment. This is true whether the application is built in house or purchased. Application administrators become crucial to defining the security requirements for the virtualization and network teams. Security continues to be one of the most important conversations in healthcare IT. Bringing the entire infrastructure team and the business unit to the table with security is the best way to prevent a breach, and bringing VMware NSX into the discussion provides a flexible and powerful tool that gives all sides options without compromise.

Springtime Promise - for Healthcare IT?

Here we are, just a couple of days before the beginning of spring, and it is finally starting to look and feel that way (at least here in Ohio!).  The snow pack that didn't arrive until mid-February is gone, and temperatures are warming up nicely.  The promise of a new beginning for nature is on the rise, but does this promise also ring true for Healthcare IT?  I say yes, and here is why.

Just as the long winter starts fading and we begin to see signs of flowers pushing through the ground, I am hearing many of my customers talking about and asking how to reinvent their IT departments, and not just in minor ways.  Many are talking about major overhauls in structure, operations, and processes around the concept of cloud-based IT, whether private or hybrid, and the groundswell is growing.

As we all know, human beings resist change. Change is hard – it takes us out of our comfort zone and threatens our sense of purpose. Adding the daily complications and the sheer inertia of keeping an organization running smoothly to this natural resistance to change can create what might seem like a mountain of snow!  So how does a leader begin to make significant progress with blizzard forces working against them?  One "shovel full" at a time: divide, conquer, and persevere!

We have all heard the old saying, "You can't eat an elephant in one bite," and you cannot change your IT organization in one fell swoop, either.  In my experience, there are four stages to follow in effecting major change: Operate, Automate, Integrate, and Innovate.  I will cover each of these separately, but first let's focus on Operate.

In general, one of the biggest complaints coming from the user community about IT is the length of time it takes to get new systems implemented. I have heard times ranging from 6 weeks to 6 months, and these are primarily in largely virtualized environments!  What is causing the delay in delivery?  Where is the bottleneck?  Wasn't one of the promises of virtualization to provide faster delivery of systems without the delays of procurement or physical installations?

This is where vRealize Operations comes into play.  With this tool and its built-in analytics engine, organizations can see in an instant where they stand on capacity today and forecast when that capacity will be exhausted.  This allows infrastructure managers to be proactive in their requests for new hardware, rather than reactive to each new project. vRealize Operations shows you not only the committed resources allocated to the environment, but also how much resource is actually being utilized, and where there might be opportunities to "right-size" systems, recoup capacity, and extend your investments.
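As a rough illustration of the kind of capacity-exhaustion forecast described above (vRealize Operations uses a far more sophisticated analytics engine), a simple linear trend over hypothetical usage numbers looks like this:

```python
# A toy linear forecast of when storage capacity runs out.
# The usage figures below are invented for illustration only.
monthly_usage_gb = [400, 430, 455, 490, 520]   # trailing five months of consumption
capacity_gb = 1000                             # total provisioned capacity

# Average month-over-month growth across the window.
growth = (monthly_usage_gb[-1] - monthly_usage_gb[0]) / (len(monthly_usage_gb) - 1)

# Months remaining until consumption reaches capacity at that rate.
months_left = (capacity_gb - monthly_usage_gb[-1]) / growth
print(f"Growing ~{growth:.0f} GB/month; capacity exhausted in ~{months_left:.0f} months")
```

With even this crude trend in hand, a hardware request can be raised well before the environment runs dry, which is exactly the proactive posture described above.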

vRealize Operations provides an organization with the foundation of the IT transformation.  Just like flower buds pushing up through late winter snow, the capacity planning/reporting feature is only the start of what vRealize Operations can do for your organization.

In my next installment I will talk about the concept of "Automate". Please take some time to enjoy the spring weather and allow the new beginning of nature to spark a new beginning for your IT department.

Until next time…

Creating the Perfect Clinical Desktop with Horizon View

I am often asked which aspect of a Horizon View implementation is the most important: SAN, SAN-less, pod-based, security, end points, tap-n-go, time savings, you name it. What typically doesn't come up, though, is how a Horizon View project can completely change the way IT works with clinicians to improve workflow and facilitate excellent patient care.

The proper design and implementation of the clinical desktop is a critical element of a Horizon View project, one that cannot be designed or deployed without deep clinical buy-in. Certainly IT has to ensure that it's constructed and deployed with technologies that can be supported, but how the clinical workspace integrates into workflow is a clinical function, and that requires clinical input for optimal utilization in a care setting. It should be clear that the introduction of Horizon View into the clinical desktop mix doesn't make the process around the clinical desktop more challenging; quite the opposite, in fact: it provides IT with an opportunity to work more closely with the clinical team to design the optimal clinical experience.

The success of a large clinical implementation is just as dependent upon properly preparing the clinical users as it is upon properly implementing the software and the supporting (keyword: 'supporting') technology; too often we focus on one and neglect the other. Horizon View is one area where this is expressed, often more acutely and certainly more visibly than any other, and this offers the opportunity to bring these two groups together to reimagine the roles of the patient, caregiver, and technology.

Working with a project team that includes physician and nursing champions, informaticists, departmental specialists (e.g., phlebotomists, respiratory therapists, and dieticians), as well as IT, the objective should be the construction of a future state workflow that minimizes interactions with the technology and empowers clinicians to focus on the patient as efficiently as possible. The clinical desktop workflow needs to be streamlined and tested thoroughly.

Key activities in the process include:

  • Define the clinical device approach – computing as well as peripheral selection – in conjunction with representative clinical steering subcommittee team members
  • Work with key clinical stakeholders to identify the future state clinical application mix and establish a branded clinical desktop as part of the initiative. The focus of the clinical desktop should be ease of use and reliability – tie this to measurable service level agreements
    ◦  Define clinical desktop content and workflows
    ◦  Define clinical desktop technology
    ◦  Define clinical desktop security model
    ◦  Test, refine, test

  • Conduct device fairs to showcase device alternatives, including compute as well as peripherals, that IT can support and that are within budgetary constraints
  • Conduct technology lunch-n-learns to showcase the new clinical desktop being implemented and the technology that enables the workflow
  • Conduct show-me sessions in clinical lounges prior to go-live to offer clinicians the opportunity to log onto the new clinical desktop, become comfortable with the process, and take one more opportunity to verify security access and proper group/role placement
  • Design a remote access methodology that provides the same seamless access to the same, consistent clinical desktop as defined within the inpatient or ambulatory settings, as appropriate. The clinicians should focus on learning the clinical application, not the remote access technology. Desktop virtualization is a solid tool to ensure clinicians can work on premise and remotely with the same workflow.

Adopting a process that incorporates these elements not only supports the rapid clinical adoption of any new system, by enabling the clinicians to focus on the patients they are treating rather than the integration points between software and technology, but also provides an unprecedented opportunity for clinicians and IT to work together. Additionally, aligning the clinical workflow and technology creates a foundation for future process improvement initiatives that span these disciplines. This alignment will result in better patient care, which, after all, is really the point.

Policy Driven Storage the Healthcare Way

Looking at enterprise storage is a daunting task.  For years we have looked at the cost per gigabyte, cost per performance, and other metrics.  We have differentiated solutions based on small differences and what value they provide.  In Healthcare, we are particularly focused on solutions that are “certified” for our applications, with many enterprise healthcare environments running a number of storage platforms.

A case for policy driven storage

Early in my career I became involved in storage engineering.  I understood how the storage system worked, and I was able to quickly provision and document what I was working on.  It was tedious, and there weren't many people on the team who had the confidence to work on the system.  Storage tiering was either a manual process or a function of add-on software.  Deduplication and compression were all post-process, and SSD was prohibitively expensive.  As we progressed, the technology didn't really change much until the "All Flash Array" (AFA) was introduced.  Inline deduplication and compression were born out of necessity, and we saw the cost of SSD technology drop to the point where we expect Fibre Channel/SAS drives to become irrelevant in the coming years.

This change has brought out a need to do things differently.  We have seen many vendors release better products: bigger, faster, with more features.  But the way we have handled storage at the virtual layer hasn't kept up with these improvements.  While capabilities like VAAI have improved with each release, and we have continued to offload more and more of the storage workload to the storage array, the way we manage the storage has not changed.  We have continued to present storage as a big logical drive and then share it among a number of virtual systems.  Not a terrible way to go, but it leaves performance and features on the table.  There must be a better way.

What does Policy Driven Storage look like?

To take full advantage of the new capabilities, we needed to find a way to remove some of the layers of abstraction.  As with anything, generally speaking, the fewer layers between two components the better.  In order to manage directly, though, we need a common interface, a common way of doing things.  Again, with the multiple storage vendors we often find in healthcare environments, it is important to manage each through a common set of policies.  Things like performance, deduplication, compression, or anything a system is capable of providing should be handled at the individual virtual disk level.  This also makes replication and recovery far more granular and manageable.

To make policy driven storage a reality, VMware offers two options: Virtual Volumes (VVOLs) and Virtual SAN (VSAN).  These are two different ways of getting to the same point, and both have their merits.  The real value is that policies can be used to manage both, and once configured, the storage becomes seamless to the VMware administrator.


The concept behind VVOLs is not so different from the original VASA.  We have worked with our storage partners, and they have exposed their capabilities through a common interface.  Previously, vendors would install plugins to manage their storage through vCenter, with some tasks offloaded to the storage array.  The interfaces varied in their value and didn't really provide a unified way of managing the storage, especially for a customer with multiple array vendors.

With the introduction of VVOLs, a policy is created to enable a variety of attributes, such as high performance and deduplication.  When a VM is created or moved, the VMware administrator is presented with a list of compatible datastores to select from, based on the policy.  If the workload changes, the administrator may change policies and move the workload to a more appropriate datastore.  This is possible because the storage array advertises its capabilities to the virtual environment, and the storage policies are created based on these advertised capabilities.  Since everything is handled by the array, there is lower overhead on the host, and more granular control, since each VVOL is a separate object rather than a group of objects on a single LUN.
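The matching of policies to advertised capabilities can be pictured as a simple set comparison. This is a toy sketch, not the VASA/VVOL interface itself; the datastore names and capability strings below are invented for illustration.

```python
# Toy model of policy-driven placement: each datastore advertises the
# capabilities its array exposes; a policy lists required capabilities,
# and only datastores satisfying all of them are offered as compatible.
datastores = {
    "afa-vvol-01":    {"ssd", "deduplication", "compression", "replication"},
    "hybrid-vvol-01": {"compression", "replication"},
    "legacy-lun-01":  set(),
}

def compatible(policy: set, stores: dict) -> list:
    # A datastore is compatible when the policy's requirements are a
    # subset of the capabilities the array advertises.
    return sorted(name for name, caps in stores.items() if policy <= caps)

gold_policy = {"ssd", "deduplication"}
print(compatible(gold_policy, datastores))  # ['afa-vvol-01']
```

If the workload's needs change, the administrator edits the policy rather than the plumbing, and the compatible-datastore list updates accordingly.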



VSAN is the next generation of Software Defined Storage (SDS).  The general concept behind software-defined storage is taking disks internal to a host server, and using them to create a logical storage system. This is managed through software, and historically has been done as a virtual machine controller sitting on each host.

VSAN differs because it is a kernel module built into the hypervisor itself.  This removes much of the complexity and overhead typically associated with SDS.  Deployment and expansion take literally only a few clicks, and provisioning storage is as simple as creating a policy.

Because VSAN is designed to be policy driven, it becomes incredibly simple to manage, and we often find that it is considered to be a part of the VMware system by customers who deploy it.  Since it is server-based storage, the storage team does not often need to be involved.


It is important to note that the concept of a datastore changes with both VVOLs and VSAN.  Each virtual disk becomes a LUN on the storage array or, in the case of VSAN, a series of separate objects.  The policy simply manages the placement of the objects and the capabilities they need.  The datastore appears as a higher-level construct representing a logical grouping of similar virtual disks, not a logical device as before.

The Healthcare Difference

What is the value of policy driven storage for healthcare?  Aside from the simplified management, ease of deployment, and granular control, policy driven storage unifies the various types of storage.  At many of our customers, we find that multiple vendors provide storage arrays with varying capabilities.  This often requires working with different members of the storage team to provision new storage capabilities, and it creates challenges when upgrades or new implementations are required.

As we look at healthcare, we regularly encounter new regulations and new requirements, and we always seem to be struggling to keep up with the latest trends.  By using a policy driven approach, we can not only respond more quickly to our customers and security teams, but also create cross-functional teams who can provide more value to the internal customer, and ultimately to our end customer, the patient.

A Happy New Year for Healthcare IT?

When January rolls around, you most likely think of new beginnings, resolutions, a sense of renewal, and so on. For IT organizations in the Healthcare industry, I wonder how many share these feelings? Shrinking budgets, increased customer demands, and growing governmental regulations, just to name a few, have many CIOs wondering where their “new beginnings” will start as they struggle to push forward their efforts to meet the needs of their enterprise.

The Problem

Most hospital entities today are re-evaluating their future spend projections based upon the changes in government reimbursements and the new “pay for performance” measures being instituted. Even the most profitable institutions are looking at flat or negative revenue growth estimates due to these changes. Now, more than ever before, it is imperative that IT departments be able to provide financial transparency to the organization and properly demonstrate their value to the company. Management 101 states, “If you do not measure it, you cannot manage it.”

As a former Healthcare IT director, gathering all of this information would take me a couple of weeks of dedicated work: pouring it all into spreadsheets, slicing and dicing the data, and then working with the Finance department to validate the outcomes. It was a very onerous process fraught with the chance of human error. Then, once all of this work was complete, it would start all over again for the next month. Wouldn’t life be better if there were some type of automated system that could ingest all of this data from the various sources and produce the reports and executive dashboards that the organization needs?

Having the ability to produce a “Bill of IT” to better understand the overall costs of the respective areas (Infrastructure, Applications, Service Desk, etc.) is, I believe, imperative for today’s healthcare IT departments. For as long as I can remember, most of these “cost numbers” were very much akin to educated guesses versus prescriptive, repeatable figures driven from the company’s own general ledger.
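As a rough illustration of what driving a "Bill of IT" from the general ledger looks like, the sketch below rolls ledger line items up by service area and divides by unit counts to get unit costs.  Every account name, amount, and unit count here is invented for the example; a real system would ingest these figures from the organization's own financial sources.

```python
# Hypothetical sketch of a "Bill of IT": roll general-ledger line items
# up by service area, then divide by unit counts to get unit costs.
# All account names, amounts, and unit counts are invented for illustration.
from collections import defaultdict

ledger = [
    ("Infrastructure", "Server hardware depreciation", 40_000),
    ("Infrastructure", "Hypervisor licensing",         20_000),
    ("Applications",   "EHR support contract",         90_000),
    ("Service Desk",   "Staff salaries",               50_000),
]

# The denominator that turns a total into a unit cost (VMs, app
# instances, tickets) -- the figures a CIO actually wants to quote.
unit_counts = {"Infrastructure": 300, "Applications": 450, "Service Desk": 2_000}

totals = defaultdict(int)
for area, _item, amount in ledger:
    totals[area] += amount

for area, total in sorted(totals.items()):
    print(f"{area}: ${total:,} total, ${total / unit_counts[area]:,.2f} per unit")
```

The aggregation itself is trivial; the hard, error-prone part of the manual process described above is collecting and validating the inputs, which is exactly what an automated tool takes off the table.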

I recently led a focus group session around “Financial Transparency in Healthcare IT” with over 20 CIOs from various institutions. The general feeling from the vocal members of the group was a little surprising to me, and to my associates in the room. We heard statements like “We don’t see the value in tracking costs to this level of detail (Cost per VM, Cost per Mailbox, etc.)” and “This is something more along the lines that the CFO would utilize, not the CIO.”

How can a Service Line Executive (the CIO) not want to understand their unit costs and be able to show them to their peers and leadership in a simple, repeatable format? And not just once a month, but at any time, via an easy-to-understand executive dashboard?

How do we address this problem?

Enter VMware’s vRealize Business solution. vRealize Business (formerly ITBM) is a tool set from VMware that automatically brings together all of the disparate data points in an organization and displays the fully loaded cost of any point in your IT environment: the cost of a VM, the cost of a mailbox, the cost of your PACS system or your EHR. Anything and everything! Utilizing an organization’s data sources (general ledger, ERP, CMDB, etc.), these reports and cost models can not only be generated for a “where are we today” view; that same data can be utilized for forecasting, budgeting, or helping determine IT costs for mergers and acquisitions.

vRealize Business is a solution for the modern Healthcare IT department. With this solution, IT will finally be able to answer questions like “Why is IT so expensive?” or “If we add 200 physicians next year, what are the incremental costs?” with a high degree of accuracy. I know that most of my former peers have had to face those questions, and for the most part have provided the best number they could muster from manually pulling all of this information together. With vRealize Business, it is now just a couple of mouse clicks to see this data within minutes, not days. This solution truly does provide CIOs with the ability not only to measure the financial health of their department, but also to manage financial performance in a much simpler and more repeatable process.

Managing the Healthcare Cloud

How do you manage your virtual environment?  Has it changed since converting from the physical world?  Oftentimes we tend to stick with what we know and what has worked.  Budgets are not increasing, but demand always is, and in Healthcare we are constantly faced with new regulations, new security threats, and new demands on our time.  We need to find a new way of managing the healthcare cloud, or we will not be able to keep up with the requirements.

The Big Picture

Looking at managing the cloud requires taking a step back to look at the bigger picture.  Automation, cloud management, and software-defined compute, networks, and storage are all just parts of a larger strategy.

Think about your smartphone.  When you want new functionality, you go to the application ecosystem, search, and grab what you want.  If you are actually calling the support line to get an application provisioned to your device, you are not likely to be using that device for long.

The end-state goal for any cloud management project should be providing a better, more seamless service to the end user.  As IT professionals, we need to be able to predict problems, project growth and needs, and ensure our environment is secure from both accidental and intentional data compromise.

Management, not just monitoring

As a systems administrator, I spent many hours looking through logs, glued to a console, scanning a myriad of monitoring products for the cause of problems.  We invested some serious money in various monitoring products, and were very proud of being able to get to the bottom of a problem with only 4-5 different solutions spanning 3-4 different internal teams.  Then we found we were not great at capacity planning; we were using multiple data sets and trying to build spreadsheets.  On top of that, the security team was always bugging us about some new issue, or wanting us to show them the configurations hadn't changed.  We brought in more products to solve this issue.

As my career progressed, I made my mark by creating scripts, and I even wrote some code that would scrape a number of systems and databases and aggregate the results into a report we could send to management.  I then spent many hours poring over data and correcting issues I had inadvertently created by not correlating the data properly.  The problem wasn't a lack of tools, but the lack of the right tool.

What makes Healthcare different?

When I first started in Healthcare, I was surprised by the sheer number of applications.  It is clear many of these revolve around the core Electronic Health Record (EHR) application, but the number is nonetheless staggering.  As we dig a little deeper, it is remarkable how intertwined these applications really are.  Oftentimes if one application has a problem, a large part of the enterprise can be affected.

Further complicating the healthcare environment, our problems are different from those in most other industries.  One complaint we hear often is that in some EHR programs, print queues are a big problem.  In an electronic system this is completely counterintuitive, but it is a reality we live with.

Make It Better

So how do we manage the healthcare environment?  The best answer is data aggregation from all sources, and a single management tool that can give you everything.  As we all know, there are no perfect solutions in IT.  There will always be something that is not addressed, but the key is to find a tool that can handle the majority of the management of the environment, can take data from multiple sources, and most importantly can enable you to correlate and remediate issues and potential issues.

We have all been there: alerts out of control, pages in the middle of the night, digging through logs.  The best way to start making Healthcare clouds more manageable is to provide answers to potential problems before the end users see them.  Providing a data-driven methodology for capacity planning and configuration drift, and giving management metrics they can use to show the business, will make the Healthcare IT professional’s job much less challenging and enable them to focus on new projects and providing more business value.
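The capacity-planning piece of that data-driven methodology can be as simple as fitting a trend to historical usage and projecting when a threshold will be crossed.  The sketch below does exactly that with a least-squares line; the monthly usage figures and the 100 TB capacity are invented for the example, and a real tool would of course use richer models than a straight line.

```python
# Illustrative sketch of data-driven capacity planning: fit a linear
# trend to evenly spaced monthly usage samples and project how many
# months remain until a capacity threshold is crossed.

def fit_trend(usage):
    """Least-squares slope and intercept for evenly spaced samples."""
    n = len(usage)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(usage) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, usage))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

def months_until(usage, threshold):
    """Months from the last sample until the trend crosses the threshold."""
    slope, intercept = fit_trend(usage)
    if slope <= 0:
        return None  # usage flat or shrinking: no projected exhaustion
    crossing = (threshold - intercept) / slope
    return max(0.0, crossing - (len(usage) - 1))

# Six months of used TB on a cluster with 100 TB usable capacity
# (invented numbers): roughly 8.4 months of runway at this growth rate.
used_tb = [52, 55, 59, 62, 66, 70]
print(months_until(used_tb, 100))
```

Even this toy version answers the question a spreadsheet exercise takes days to produce, which is the argument for letting a management tool do it continuously against live data.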

Managing the VMware way

With the advent of virtualization, we are being required to do more with fewer resources.  In healthcare, as new regulations are passed and we modernize to meet new demands, we are especially affected by this trend.  To keep pace with the increasing requirements, a unified tool becomes more important for managing a healthcare cloud.  With the release of vRealize Operations 6, we are able to manage larger environments more effectively.  And as VMware works with more EHR vendors to integrate their data through the Care Systems Analytics products, we are seeing a more healthcare-focused solution.

It is highly unlikely there will ever be a scenario where human intervention is not required.  It is, however, necessary that we take advantage of the data available to us to make effective management decisions.  Aggregating all available data sources and presenting them through a single system can increase operational efficiency and enable a more manageable healthcare cloud.

- Aaron Dumbrow, Systems Engineer