Home > Blogs > VMware for Healthcare IT

How to Secure Healthcare Mobile App Communications

Security, Mobility, and Physician Experience were top of mind at HIMSS15 in Chicago this week: no one wants to be in a data breach headline, and BYOD programs are proliferating, bringing new challenges and risk. It is perfect timing that we just announced a new way to ensure secure access from mobile device applications all the way to the internal systems they access.

How do mobile devices connect now?

It depends on the device and who owns it:

  • Some are fully managed and hospital-owned, so we can use a Mobile Device Management (MDM) solution to control everything: all communication is encrypted from the device to a VPN gateway at the edge of our network, and we have total control over every app and capability.
  • More and more are BYOD, so we can’t fully lock them down and secure all traffic, but we can manage aspects of them via MDM and wrap the hospital’s apps in a management layer that forces secure communication from each app to a proxy at the edge of our network.

The latter model is emerging as the more popular, and it is a point of ingress to our datacenter. Beyond the secure tunnel that terminates at the edge of our network, there are seldom restrictions preventing that app from talking to things other than what was intended. This is part of a broader problem of internal network security, discussed in Aaron Dumbrow’s post a few days ago.

How can we do this better?

Software defined networking via VMware NSX and AirWatch Enterprise Mobility Management are coming together in a new way. AirWatch secures the app’s communication to the edge of our network, and NSX controls the path inside it. Together they create a secure communications path from the app on any mobile device all the way to the application being accessed, with no possibility of reaching anything other than what we define in policy.
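The policy model described above can be illustrated with a small sketch. This is purely a hand-rolled example of default-deny, per-app policy enforcement; it uses no actual NSX or AirWatch API, and all app names and hostnames are invented:

```python
# Illustrative sketch: each wrapped mobile app is allowed to reach exactly
# one backend service, and everything else is denied by default.
# App IDs and hostnames are hypothetical.
APP_POLICY = {
    "ehr-mobile":  {"host": "ehr.internal.example.org", "port": 443},
    "lab-results": {"host": "lis.internal.example.org", "port": 8443},
}

def is_allowed(app_id: str, dest_host: str, dest_port: int) -> bool:
    """Default-deny: traffic passes only if it matches the app's policy entry."""
    rule = APP_POLICY.get(app_id)
    return rule is not None and rule["host"] == dest_host and rule["port"] == dest_port

# The EHR app can reach its own backend, but nothing else inside the network.
assert is_allowed("ehr-mobile", "ehr.internal.example.org", 443)
assert not is_allowed("ehr-mobile", "lis.internal.example.org", 8443)
```

The point is the default-deny posture: anything not explicitly defined in policy is blocked, closing the gap described earlier where the tunnel terminated at the edge and the app could then roam freely.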

This is great news for Healthcare Mobility strategies in an age of breaches where new devices and apps bring new risks.

How do we learn more?

This solution will be unveiled at RSA 2015 next week. Read more about it here and watch a video that explains it very well.

Software Defined Security

In light of recent increased security breaches in a number of industries, it is a good time to revisit the topic of security and the role of software defined networking in this model.

Healthcare as an industry is heavily regulated with extreme penalties and a high personal cost of failure to patient trust, provider reputation, and long lasting embarrassing headlines. As the saying goes, an ounce of prevention is worth a pound of cure. In many cases, the cost of preventing a breach is so significantly lower, it makes for a more extreme ratio.

The Traditional Security Model

One of the lessons we have learned from past attacks is that the weakest link in any security model is the people. No matter how good the security, all it takes is one compromised password, one path in through the firewall, and in many environments there is little to stop a would-be attacker from having free rein.

For a number of years, we have built our security models around our applications. In healthcare we often deal with an Electronic Health Records (EHR) application and many ancillary applications around it, which makes securing the individual ancillary applications more challenging. In the traditional model, we have isolated systems by their function, as shown below.


In the Web/Application/Database model, the web server(s) sit out front, typically in a DMZ, and this tier is generally the most scalable. Behind another firewall, the application server serves up the front-end application for the users; it is generally more powerful and still scalable, though often somewhat less so than the web tier. Behind yet another firewall sits the database server, typically the most powerful but the most difficult to scale. This design often requires several hardware firewalls, or a larger firewall with multiple modules or line cards. It is a highly secure model for the most part, but it can also be quite costly.
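The tiered model above can be summarized as a small default-deny rule set. This is an illustrative sketch only; tier names, ports, and rules are invented for the example:

```python
# Sketch of the traditional three-tier firewall model: each firewall
# permits only the next hop in the chain, and everything else is denied.
RULES = [
    ("internet", "web", 443),    # users reach the web tier in the DMZ
    ("web",      "app", 8080),   # web tier talks to the application tier
    ("app",      "db",  1433),   # app tier talks to the database tier
]

def permitted(src_tier: str, dst_tier: str, port: int) -> bool:
    """A flow is allowed only if an explicit rule exists for it."""
    return (src_tier, dst_tier, port) in RULES

assert permitted("web", "app", 8080)
assert not permitted("internet", "db", 1433)  # no direct path to the database
```

Note what the rule set enforces: there is never a direct path from the outside to the database, which is exactly the property the chained hardware firewalls are buying at considerable cost.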

Virtual LANs, or VLANs, are often used as a lesser form of security: traffic can be forced to a router or firewall through the use of the VLAN tag, separating the different levels of traffic, and stateful packet inspection can be done at that point. This is a potentially less expensive model, but it adds latency (albeit very little) and increases network traffic back to the router or firewall. It also increases design complexity, introducing a number of new challenges.

The NSX Security Model

Interestingly enough, the NSX security model doesn’t materially change the logical design. We still see the same concepts being used: multiple firewalls, and potentially multiple layer 3 networks as needed. VLANs can be used in a similar fashion, but with reduced design complexity and cost, and increased security.

(Figure: the NSX Web/App/DB security model)

Physically, however, this is a significant change. From a performance perspective, everything happens within the virtual environment: much of the East-West traffic never leaves it and can be handled at line speed. Firewall communication is no longer a bottleneck, since packets are inspected as they move between servers. This removes a common objection from vendors and application administrators.

From a security perspective, we are suddenly able to provide a true zero trust model. We can inspect packets flowing between neighboring servers on the same subnet and VLAN, which lets us assume that every packet is potentially compromised and examine it when it leaves the source and again when it arrives at the destination. This is all done at the virtual hardware layer, removing the concern that compromising the guest OS could disable the security layer.
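The dual inspection described above can be sketched in a few lines. This is a conceptual illustration of zero trust enforcement, not NSX’s actual distributed firewall implementation; VM names and the flow tuple are invented:

```python
# Sketch of zero trust: a packet must pass inspection when leaving the
# source VM AND again when arriving at the destination VM, even for
# neighbors on the same subnet/VLAN.
def inspect(packet: dict, policy: set) -> bool:
    """Allow only flows explicitly whitelisted in policy (default deny)."""
    return (packet["src"], packet["dst"], packet["port"]) in policy

def deliver(packet: dict, src_policy: set, dst_policy: set) -> bool:
    # Both checkpoints must agree; compromising one end is not enough.
    return inspect(packet, src_policy) and inspect(packet, dst_policy)

policy = {("web01", "app01", 8080)}
ok      = {"src": "web01", "dst": "app01", "port": 8080}
lateral = {"src": "web01", "dst": "web02", "port": 22}  # same subnet, still blocked
assert deliver(ok, policy, policy)
assert not deliver(lateral, policy, policy)
```

The lateral flow between the two web servers is the important case: in a VLAN-only design that traffic would never touch a firewall, but under this model it is inspected and dropped.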

The case for NSX in Healthcare

When we talk to our customers about NSX, the objections often come from the perimeter firewall team or the network team. A common concern is that NSX is looking to displace existing firewalls, switches, and other network devices. We also hear the network and security teams’ concern that NSX takes away their visibility into the virtual environment; somehow, by implementing NSX, they are losing something. This is simply not the case. One of the driving principles of NSX is that it brings the network and security teams to the table in the virtualization discussion. No longer are we just requesting ports and VLANs. No longer can we afford to assume that things are secure because they are in the virtual environment. With the NSX model, the network and security teams must be involved in designing, implementing, and managing the virtual environment. Since we are expanding the functionality of the virtual network and increasing visibility, the virtual environment becomes the domain of the entire IT team.

In the healthcare environment, this means that as we bring in new applications, or even retrofit existing ones, we look to the broader team to come together and design security into the application deployment. This is true whether the application is built in house or purchased. Application administrators become crucial to defining the security requirements for the virtualization and network teams. Security continues to be one of the most important conversations in healthcare IT. Bringing the entire infrastructure team and the business unit to the table with security is the best way to prevent a breach, and bringing VMware NSX into the discussion provides a flexible and powerful tool that can give all sides options without compromise.

Springtime Promise – for Healthcare IT?

Here we are, just a couple of days before the beginning of spring and it is finally starting to look and feel that way (at least here in Ohio!).  The snow pack that didn’t arrive until mid-February is gone and temperatures are warming up nicely.  The promise of a new beginning for nature is on the rise, but does this promise also ring true for Healthcare IT?  I say yes, and here is why.

Just as the long winter starts fading and we begin to see signs of flowers pushing through the ground, I am hearing many of my customers talking about and asking how to reinvent their IT departments, and not just in minor ways.  Many are talking about major overhauls in structure, operations, and processes around the concept of cloud-based IT, whether private or hybrid, and the groundswell is growing.

As we all know, human beings resist change. Change is hard – it takes us out of our comfort zone and threatens our sense of purpose. Adding the daily complications and the sheer inertia of keeping an organization running smoothly to this natural resistance to change can create what might seem like a mountain of snow!  So how does a leader begin to make significant progress with blizzard forces working against them?  One “shovel full” at a time; divide, conquer, and persevere!

We have all heard the old saying, “You can’t eat an elephant in one bite,” and you cannot change your IT organization in one fell swoop, either.  In my experience, there are four stages to follow in effecting major change: Operate, Automate, Integrate, and Innovate.  I will cover each of these separately, but first let’s focus on Operate.

In general, one of the biggest complaints coming from the user community about IT is the length of time it takes to get new systems implemented. I have heard times ranging from 6 weeks to 6 months, and these are primarily in largely virtualized environments!  What is causing the delay in delivery?  Where is the bottleneck?  Wasn’t one of the promises of virtualization to provide faster delivery of systems without the delays of procurement or physical installations?

This is where vRealize Operations comes into play.  With this tool and its built-in analytics engine, organizations can see in an instant where they stand on capacity today, and forecast when that capacity will be exhausted.  This allows infrastructure managers to be proactive in their requests for new hardware, rather than reactive to each new project. vRealize Operations shows you not only the committed resources that have been allocated to the environment, but also how much resource is actually being utilized and where there might be opportunities to “right-size” systems, recoup capacity, and extend your investments.
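The capacity forecasting described above can be illustrated with a toy calculation. To be clear, this is a hand-rolled linear-trend sketch, not the vRealize Operations analytics engine, and all figures are invented:

```python
# Sketch of capacity-exhaustion forecasting: fit a simple linear growth
# trend to historical monthly usage and project when capacity runs out.
def months_until_exhaustion(history: list, capacity: float):
    """history: monthly usage samples (oldest first). Assumes linear growth."""
    n = len(history)
    growth = (history[-1] - history[0]) / (n - 1)  # average growth per month
    if growth <= 0:
        return None  # usage is flat or shrinking; no projected exhaustion
    return (capacity - history[-1]) / growth

usage_tb = [40, 44, 48, 52]  # storage used (TB) over the last four months
assert months_until_exhaustion(usage_tb, 100) == 12.0  # 48 TB headroom / 4 TB per month
```

Even this crude version shows the value of being proactive: with twelve months of projected runway, a hardware request can go through normal procurement rather than becoming an emergency for the next project.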

vRealize Operations provides an organization with the foundation of the IT transformation.  Just like flower buds pushing up through late winter snow, the capacity planning/reporting feature is only the start of what vRealize Operations can do for your organization.

In my next installment I will talk about the concept of “Automate”. Please take some time to enjoy the spring weather and allow the new beginning of nature to spark a new beginning for your IT department.

Until next time…

Creating the Perfect Clinical Desktop with Horizon View

I am often asked which aspect of a Horizon View implementation is the most important: SAN, SAN-less, pod-based, security, endpoints, tap-n-go, time savings, you name it. What typically doesn’t come up is how a Horizon View project can completely change the way IT works with clinicians to improve workflow and facilitate excellent patient care.

The proper design and implementation of the clinical desktop is a critical element of a Horizon View project, one that cannot be designed or deployed without deep clinical buy-in. Certainly IT has to ensure that it’s constructed and deployed with technologies that can be supported, but how the clinical workspace integrates into workflow is a clinical function, and that requires clinical input for optimal utilization in a care setting. To be clear, introducing Horizon View into the clinical desktop mix doesn’t make the process more challenging; quite the opposite. It gives IT an opportunity to work more closely with the clinical team to design the optimal clinical experience.

The success of a large clinical implementation depends just as much on properly preparing the clinical users as on properly implementing the software and the supporting (keyword: ‘supporting’) technology; too often we focus on one and neglect the other. Horizon View is one area where this is expressed more acutely, and certainly more visibly, than any other, and it offers the opportunity to bring these two groups together to reimagine the roles of the patient, caregiver, and technology.

Working with the project team, including physician and nursing champions, informaticists, departmental specialists (e.g., phlebotomists, respiratory therapists, and dieticians), as well as IT, the objective should be the construction of a future-state workflow that minimizes interactions with the technology and empowers clinicians to focus on the patient as efficiently as possible. The clinical desktop workflow needs to be streamlined and tested, thoroughly.

Key activities in the process include:

  • Define clinical device approach – computing as well as peripheral selection – in conjunction with representative clinical steering subcommittee team members
  • Work with key clinical stakeholders to identify the future state clinical application mix and establish a branded clinical desktop as part of the initiative. The focus of the clinical desktop is ease of use and reliability – tie this to measurable service level agreements
      ◦ Define clinical desktop content and workflows
      ◦ Define clinical desktop technology
      ◦ Define clinical desktop security model
      ◦ Test, refine, test

  • Conduct device fairs to showcase device alternatives, including compute as well as peripherals, that IT can support and that are within budgetary constraints
  • Conduct technology lunch-n-learns to showcase the new clinical desktop being implemented and the technology that enables the workflow
  • Conduct show-me sessions in clinical lounges prior to go-live to offer clinicians the opportunity to log onto the new clinical desktop, become comfortable with the process, and take one more opportunity to verify security access and proper group/role placement
  • Design a remote access methodology that provides seamless access to the same consistent clinical desktop as defined within the clinical inpatient or ambulatory settings, as appropriate. The clinicians should focus on learning the clinical application, not the remote access technology. Desktop virtualization is a solid tool to ensure clinicians can work on premises and remotely with the same workflow.

Adopting processes that incorporate these elements not only supports rapid clinical adoption of any new system, by enabling clinicians to focus on the patients they are treating rather than on the integration points between software and technology, but also provides an unprecedented opportunity for clinicians and IT to work together. Additionally, aligning the clinical workflow and technology creates a foundation for future process improvement initiatives that span these disciplines. This alignment will result in better patient care, which, after all, is really the point.

Policy Driven Storage the Healthcare Way

Looking at enterprise storage is a daunting task.  For years we have looked at the cost per gigabyte, cost per performance, and other metrics.  We have differentiated solutions based on small differences and what value they provide.  In Healthcare, we are particularly focused on solutions that are “certified” for our applications, with many enterprise healthcare environments running a number of storage platforms.

A case for policy driven storage

Early in my career I became involved in storage engineering.  I understood how the storage system worked, and I was able to quickly provision and document what I was working on.  It was tedious, and there weren’t many people on the team who had the confidence to work on the system.  Storage tiering was either a manual process or a function of add-on software.  Deduplication and compression were all post-process, and SSD was prohibitively expensive.  As we progressed, the technology didn’t really change much until the “All Flash Array” (AFA) was introduced.  Inline deduplication and compression were born out of necessity, and we saw the cost of SSD technology drop to the point where we expect Fibre Channel/SAS drives to become irrelevant in the coming years.

This change has brought out a need to do things differently.  We have seen many vendors release better products: bigger, faster, with more features.  But the way we have handled storage at the virtual layer hasn’t kept up.  While capabilities like VAAI have improved with each release, and we have continued to offload more and more of the storage workload to the array, the way we manage storage has not changed.  We continue to present storage as one big logical drive and share it among a number of virtual systems.  Not a terrible way to go, but it leaves performance and features on the table.  There must be a better way.

What does Policy Driven Storage look like?

To take full advantage of the new capabilities, we needed to find a way to remove some of the layers of abstraction.  As with anything, generally speaking, the fewer layers between two components the better.  In order to manage directly though, we need a common interface, a common way of doing things.  Again with the multiple storage vendors we often find in many healthcare environments, it is important to manage each through a common set of policies.  Things like performance, deduplication, compression, or anything a system is capable of providing, should be handled at the individual virtual disk level.  This also makes replication and recovery far more granular and manageable.

To make policy driven storage a reality, VMware offers two options: Virtual Volumes (VVOLs) and Virtual SAN (VSAN).  These are two different ways of getting to the same point, and both have their merits.  The real value is that policies can be used to manage both, and once configured, it becomes seamless to the VMware administrator.


The concept behind VVOLs is not so different from the original VASA: we have worked with our storage partners, and they have exposed their capabilities to a common interface.  Previously, vendors would install plugins to manage their storage through vCenter, with some tasks offloaded to the array.  These interfaces varied in their value and didn’t provide a unified way of managing storage, especially for a customer with multiple array vendors.

With the introduction of VVOLs, a policy is created to enable a variety of attributes, such as high performance and deduplication.  When a VM is created or moved, the VMware administrator is presented a list of compatible datastores to select from, based on the policy.  If the workload changes, the administrator may change policies and move the workload to a more appropriate datastore.  This is possible because the storage array advertises its capabilities to the virtual environment, and the storage policies are created from those advertised capabilities.  Since everything is handled by the array, there is lower overhead on the host, and control is more granular, since each VVOL is a separate object rather than a group of objects on a single LUN.
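The capability-matching step described above can be sketched simply. This is an illustration of the idea, not the actual VASA/VVOL interface; datastore names and capability labels are invented:

```python
# Sketch of policy-based placement: each array advertises a set of
# capabilities, and a storage policy is satisfied only by datastores
# that advertise every capability the policy requires.
DATASTORES = {
    "afa-01":    {"ssd", "dedupe", "compression", "replication"},
    "hybrid-01": {"dedupe"},
}

def compatible(policy_caps: set, datastores: dict) -> list:
    """Return the datastores whose advertised capabilities satisfy the policy."""
    return sorted(ds for ds, caps in datastores.items() if policy_caps <= caps)

gold = {"ssd", "dedupe"}  # a hypothetical "gold" storage policy
assert compatible(gold, DATASTORES) == ["afa-01"]
```

This is essentially what the administrator sees at provisioning time: only the datastores that can actually honor the policy are offered as placement targets.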



VSAN is the next generation of Software Defined Storage (SDS).  The general concept behind software-defined storage is taking disks internal to a host server, and using them to create a logical storage system. This is managed through software, and historically has been done as a virtual machine controller sitting on each host.

VSAN differs because it is a kernel module, built into the hypervisor itself.  This removes much of the complexity and overhead typically associated with SDS.  Deployment and expansion take only a few clicks, and provisioning storage is as simple as creating a policy.
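What such a policy implies for hardware can be shown with a little arithmetic. Assuming VSAN’s mirroring-based protection (the common case), tolerating n host failures requires n+1 copies of the data plus witness components, i.e. at least 2n+1 hosts:

```python
# Sketch of what a VSAN-style "failures to tolerate" (FTT) policy implies
# for cluster sizing, assuming mirroring: n failures tolerated needs
# n+1 data copies and at least 2n+1 hosts.
def copies(failures_to_tolerate: int) -> int:
    return failures_to_tolerate + 1

def hosts_required(failures_to_tolerate: int) -> int:
    return 2 * failures_to_tolerate + 1

assert hosts_required(1) == 3  # the common FTT=1 policy needs a 3-host cluster
assert copies(2) == 3
```

This is the sense in which provisioning becomes “as simple as creating a policy”: the administrator states the protection level, and the placement math falls out of the policy rather than being designed by hand.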

Because VSAN is designed to be policy driven, it becomes incredibly simple to manage, and we often find that it is considered to be a part of the VMware system by customers who deploy it.  Since it is server-based storage, the storage team does not often need to be involved.


It is important to note that the concept of a datastore changes with both VVOLs and VSAN.  Each virtual disk becomes a LUN on the storage array or, in the case of VSAN, a series of separate objects.  The policy simply manages the placement of the objects and the capabilities they need.  The datastore appears as a higher-level construct representing a logical grouping of similar virtual disks, not a logical device as before.

The Healthcare Difference

What is the value of policy driven storage for healthcare?  Aside from the simplified management, ease of deployment, and granular control, policy driven storage unifies the various types of storage.  Among many of our customers we find multiple vendors providing storage arrays with varying capabilities.  This often requires working with different members of the storage team to provision new storage capabilities, and creates challenges when upgrades or new implementations are required.

As we look at healthcare, we regularly encounter new regulations and new requirements, and we always seem to be struggling to keep up with the latest trends.  By using a policy driven approach, we can not only respond more quickly to our customers and security teams, but also create cross functional teams who can provide more value to the internal customer and, ultimately, to our end customer, the patient.

A Happy New Year for Healthcare IT?

When January rolls around you most likely think of new beginnings, resolutions, a sense of renewal, and so on. For IT organizations in the healthcare industry, I wonder how many share these feelings? Shrinking budgets, increased customer demands, and growing governmental regulations, just to name a few, have many CIOs wondering where their “new beginnings” will start as they struggle to push forward their efforts to meet the needs of their enterprise.

The Problem

Most hospital entities today are re-evaluating their future spend projections based upon the changes in government reimbursements and the new “pay for performance” measures that are being instituted. Even the most profitable institutions are looking at flat or negative revenue growth estimates due to these changes. Now, more than ever before, it is imperative that IT departments have the ability to provide financial transparency to the organization and the ability to properly demonstrate their value to the company. Management 101 states, “If you do not measure it, you cannot manage it.”

As a former Healthcare IT director, gathering all of this information would take me a couple of weeks of dedicated work; pouring it all into spreadsheets, slicing and dicing the data, and then working with the Finance department to validate those outcomes. This equates to a very onerous process fraught with the chance of human error. Then once all of this work is complete, it would start all over again for the next month. Wouldn’t life be better if there was some type of automated system that could ingest all of this data from all of the various sources and produce the reports and executive dashboards that the organization needs?

Having the ability to produce a “Bill of IT” to better understand the overall costs of the respective areas (Infrastructure, Applications, Service Desk, etc.) is, I believe, imperative for today’s healthcare IT departments. For as long as I can remember, most of these “cost numbers” were educated guesses rather than prescriptive, repeatable figures driven from the company’s own general ledger.

I recently led a focus group session around “Financial Transparency in Healthcare IT” with over 20 CIOs from various institutions. The general feelings of the vocal members of the group were a little surprising to me, and to my associates in the room. We heard statements like “We don’t see the value in tracking costs to this level of detail (cost per VM, cost per mailbox, etc.)” and “This is something more along the lines that the CFO would utilize, not the CIO.”

How can a service line executive (the CIO) not want to understand their unit costs and be able to show them to their peers and leadership in a simple, repeatable format? And not just once a month, but at any time, via an easy-to-understand executive dashboard?

How do we address this problem?

Enter VMware’s vRealize Business solution. vRealize Business (formerly ITBM) is a tool set from VMware that automatically brings together all of the disparate data points in an organization and displays the fully loaded cost of any point in your IT environment: the cost of a VM, the cost of a mailbox, the cost of your PACS system or your EHR. Anything and everything! Utilizing an organization’s data sources (general ledger, ERP, CMDB, etc.), these reports and cost models can be generated not only for a “where are we today” view; the same data can be used for forecasting, budgeting, or helping determine IT costs for mergers and acquisitions.

vRealize Business is a solution for the modern healthcare IT department. With it, IT will finally be able to answer questions like “Why is IT so expensive?” or “If we add 200 physicians next year, what is the incremental cost?” with a high degree of accuracy. I know that most of my former peers have had to face those questions, and for the most part have provided the best number they could muster from manually pulling all of this information together. With vRealize Business, it is now just a couple of mouse clicks to see this data within minutes, not days. This solution truly provides CIOs with the ability not only to measure the financial health of their department, but also to manage financial performance in a much simpler and repeatable process.
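The basic shape of a “Bill of IT” calculation can be sketched in a few lines. This is a deliberately simplified illustration, not how vRealize Business models costs; all cost pools and figures are invented:

```python
# Illustrative "Bill of IT" sketch: allocate pooled monthly infrastructure
# costs across the VM estate to get a fully loaded unit cost per VM.
monthly_costs = {        # cost pools, as if pulled from GL/ERP feeds
    "compute":  60_000,
    "storage":  30_000,
    "licenses": 20_000,
    "labor":    40_000,
}
vm_count = 500

cost_per_vm = sum(monthly_costs.values()) / vm_count
assert cost_per_vm == 300.0  # fully loaded monthly cost per VM
```

The real tool’s value is doing this continuously and from authoritative sources: the spreadsheet exercise described earlier took weeks precisely because these pools had to be assembled and validated by hand every month.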

Managing the Healthcare Cloud

How do you manage your virtual environment?  Has it changed since converting from the physical world?  Often we stick with what we know and what has worked.  Budgets are not increasing but demand always does, and in healthcare we are constantly faced with new regulations, new security threats, and new demands on our time.  We need to find a new way of managing the healthcare cloud, or we will not be able to keep up with the requirements.

The Big Picture

Managing the cloud requires taking a step back to look at the bigger picture.  Automation, cloud management, and software defined compute, networking, and storage are all just parts of a larger strategy.

Think about your smartphone.  When you want new functionality, you go to the app ecosystem, search, and grab what you want.  If you are actually calling the support line to get an application provisioned to your device, you are not likely to be using that device for long.

The end state goal for any cloud management project should be providing a better, more seamless service to the end user.  As IT professionals we need to be able to predict problems, project growth and needs, and ensure our environment is secure from both accidental, and intentional data compromise.

Management, not just monitoring

As a systems administrator, I spent many hours looking through logs, glued to a console, and looking at a myriad of monitoring products, searching for the cause of problems.  We invested serious money in various monitoring products, and were very proud of being able to get to the bottom of a problem with only 4-5 different solutions spanning 3-4 different internal teams.  Then we found we were not great at capacity planning; we were using multiple data sets and trying to build spreadsheets.  On top of that, the security team was always bugging us about some new issue, or wanting us to show them the configurations hadn’t changed.  We brought in more products to solve this issue.

As my career progressed, I made my mark by creating scripts, and even wrote some code that would scrape a number of systems and databases and aggregate the results into a report we could send to management.  I then spent many hours poring over data and correcting issues I had inadvertently created by not correlating the data correctly.  The problem wasn’t a lack of tools, but the lack of the right tool.

What makes Healthcare different?

When I first started in healthcare, I was surprised by the sheer number of applications.  It is clear many of these revolve around the core Electronic Health Records (EHR) application, but the number is nonetheless staggering.  As we dig a little deeper, it is remarkable how intertwined these applications really are.  Often, if one application has a problem, a large part of the enterprise can be affected.

Further complicating the healthcare environment is that our problems are different from those in most other places.  One complaint we hear often is that in some EHR programs, print queues are a big problem.  In an electronic system this is completely counterintuitive, but it is a reality we live with.

Make It Better

So how do we manage the healthcare environment?  The best answer is data aggregation from all sources, and a single management tool that can give you everything.  As we all know, there are no perfect solutions in IT.  There will always be something that is not addressed, but the key is to find a tool that can handle the majority of the management of the environment, take data from multiple sources, and, most importantly, enable you to correlate and remediate issues and potential issues.

We have all been there: alerts out of control, pages in the middle of the night, digging through logs.  The best way to start making healthcare clouds more manageable is to provide answers to potential problems before the end users see them.  Providing a data driven methodology for capacity planning and configuration drift, and giving management metrics they can use to show the business, will make the healthcare IT professional’s job much less challenging and enable them to focus on new projects and on providing more business value.

Managing the VMware way

With the advent of virtualization, we are required to do more with fewer resources.  In healthcare, we find that as new regulations are passed and we modernize to meet new demands, we are especially affected by this trend.  To keep pace with the increasing requirements, a unified tool becomes more important for managing a healthcare cloud.  With the release of vRealize Operations 6, we can more effectively manage larger environments.  And as VMware works with more EHR vendors to integrate their data through the Care Systems Analytics products, we are seeing a more healthcare-focused solution.

It is highly unlikely there will ever be a scenario where human intervention is not required. It is necessary, however, that we take advantage of the data available to us to make effective management decisions.  Aggregating all available data sources and presenting them through a single system can increase operational efficiency and enable a more manageable healthcare cloud.
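One concrete example of what aggregated data makes possible is configuration drift detection, mentioned above as something the security team keeps asking about. The sketch below is illustrative only; the baseline keys and host names are invented:

```python
# Sketch of configuration-drift detection: compare each host's reported
# settings against a golden baseline and surface only the deviations.
BASELINE = {"ntp": "time.internal", "ssh": "disabled", "logging": "remote"}

def drift(inventory: dict) -> dict:
    """Return, per host, only the settings that deviate from the baseline."""
    return {host: {k: v for k, v in cfg.items() if BASELINE.get(k) != v}
            for host, cfg in inventory.items()
            if any(BASELINE.get(k) != v for k, v in cfg.items())}

hosts = {
    "esx01": {"ntp": "time.internal", "ssh": "disabled", "logging": "remote"},
    "esx02": {"ntp": "time.internal", "ssh": "enabled",  "logging": "remote"},
}
assert drift(hosts) == {"esx02": {"ssh": "enabled"}}
```

Instead of manually proving to the security team that nothing has changed, the aggregated data answers the question directly, and only the one deviating host needs attention.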

- Aaron Dumbrow, Systems Engineer

Simplify your Datacenter, Lower Your Costs, and Prepare for the Cloud

Healthcare providers are under pressure to further reduce costs, offer new capabilities, and explore cloud options for the future. These pressures require that we revisit the design assumptions of the on-premises datacenter to lower costs, reduce complexity, and add capabilities. The platform required to deliver new capabilities and lower the total cost of on-premises infrastructure exists today in the Software Defined Datacenter (SDDC), and customers are adopting it to meet current demands and prepare for the future.

Present Healthcare Pressures

  • New capabilities are required e.g. mobility and enhanced security.
  • Meaningful Use Stage 3 will put further pressure on providers to control costs in all areas of the business.
  • IT spend represents 3-8% of Revenue for a Hospital or Healthcare System.
  • For two years, CIOs at CHIME have told us that they want to exit the datacenter business within three to seven years.
  • Fully migrating to Cloud will take two to five years for the most committed organizations.
  • The current standard datacenter architecture is complex and expensive.

Optimizing On Premise Infrastructure

Controlling IT spend is a priority of every Healthcare system and Hospital. Infrastructure and staff represent significant portions of that budget, and SDDC offers significant opportunities to improve how far staff scale while reducing the complexity and cost of the infrastructure. Organizations embracing these new technologies are realizing savings in all areas of their infrastructure — Storage, Compute, and Networking — and in so doing they are ensuring their competitive strength relative to Public Cloud alternatives.

Complexity is an oft overlooked reality of the current datacenter design. Simplicity is the key to steady operations: reducing the number of moving parts inherently increases the reliability of a system. Application operation requires a delicate confluence of Compute, FibreChannel, Networking, and Storage. These components come from multiple vendors, scale independently, and are managed and monitored separately, yet all must work together. This is very difficult to architect, manage, and troubleshoot effectively, and it wears down even experienced personnel.


Storage presently represents roughly 50% of the annual Infrastructure spend in Healthcare. New software storage solutions deliver the same performance while reducing storage spend by 30-60%. In a recent project, a Healthcare customer realized a 50% savings on storage while gaining additional compute nodes by using software-defined storage.
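The savings math above is worth making explicit: if storage is roughly half of infrastructure spend and software-defined storage cuts storage cost by 30-60%, total infrastructure spend drops by 15-30%. The figures below are a hedged, illustrative calculation; the dollar amount is hypothetical.

```python
# Worked example of the storage-savings arithmetic in the paragraph above.
# The annual spend figure is hypothetical; the percentages come from the text.

annual_infra_spend = 1_000_000   # hypothetical annual infrastructure spend ($)
storage_share = 0.50             # storage is roughly 50% of infrastructure spend

for storage_reduction in (0.30, 0.60):
    savings = annual_infra_spend * storage_share * storage_reduction
    total_pct = storage_share * storage_reduction
    print(f"{storage_reduction:.0%} storage reduction -> "
          f"${savings:,.0f} saved ({total_pct:.0%} of total infrastructure spend)")
```

So even at the conservative end of the range, a 30% storage reduction is a 15% cut in total infrastructure spend.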

The current virtualization standard of distributed compute nodes backed by a highly resilient and available storage array was a necessary stage in the evolution of the datacenter because of the nature of the workloads: many cannot be made effectively resilient at the application level, so we rely on the infrastructure layer to deliver the availability. The storage array was the way to do this, and it required the expansion of yet another infrastructure element: the FibreChannel SAN.

By leveraging virtual storage in the compute nodes, significant capital and operating savings are being realized, and due to persistent cost pressures and sound business decision making, it is an emerging standard for efficient on premise architecture.


Compute represents another significant chunk of infrastructure spend, roughly 15-25%. Blades have emerged as the popular option, but it is essential to revisit why. It isn’t rack space, host identity management, or any other vendor-specific capability. Blades deliver savings on the FibreChannel ports that connect the systems to the storage; there are no significant efficiencies gained from Blades in any other aspect, except insofar as they ease the connection to FC storage. But what happens if the new storage models do not require FC? The fundamental value proposition of blade architecture erodes and vanishes in favor of lower-cost, equivalent capability from rack systems with local disk and software-defined storage.

The premier Blade compute vendors are commanding a great deal of spend, but they are not delivering value commensurate with that spend, especially in the face of new distributed storage capabilities that they are not well positioned to deliver. Rack-mount systems offer greater capability and flexibility for less, and all they need to operate and deliver the same outcomes as current SAN-attached designs is power and networking.

Appliance Compute

Appliance Compute merits a thorough discussion as well. Reducing complexity and cost while adding capability is a challenge, but both goals can be achieved via the EVO platform. The EVO platform is a reference hardware architecture with fewer interchangeable parts. We have seen an increase in host instability due to hardware in the last two years: ever expanding combinations of firmware, storage controllers, network adapters, and drivers have created a hardware ecosystem so large that it is difficult for hardware vendors to test all permutations and combinations of the components. The solution is simple: a reference architecture with fewer variables and greater consistency.

EVO rack systems offer everything in one box: Compute, Storage, and Networking. Like the Rack systems, all they need is power and networking, and they deliver all of the capabilities needed by a modern infrastructure platform with more of the capabilities configured and managed in software than ever before.


Current networking architectures are complex and expensive as well, representing 20-30% of infrastructure spend. That cost is in the gear itself and the enhanced security capabilities tied to it. Virtual Networking allows those security policies to be moved up out of the gear, which has significant implications: security policies can be attached to applications and users instead of IPs and ports, and the capabilities of the gear are reduced to efficient packet switching.

By moving the security policies up in the stack, we gain security capabilities that were prohibitively expensive to implement and impractical to maintain, and we gain choice in the gear from many vendors who cost 30-40% less than the dominant communication vendors.

Virtual Networking allows an ecosystem of devices to share in a global policy definition and implementation. We can easily draw boundaries around applications, policies that travel with the workloads as they move about the datacenter and later into the Cloud. Rules are implemented close to the objects and close to the edge. Workloads that cannot talk to the internet can have their packets dropped at the hypervisor; workloads that are in different security zones on the same host can communicate directly without traversing the edge network; and application access can be granted to specific users at the network level – their packets won’t even flow if they are not allowed.
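The policy model described above — rules attached to applications and users instead of IPs and ports, enforced at the hypervisor — can be sketched as follows. This is a hedged illustration of the concept, not the NSX API; the rule format, application names, and user groups are all hypothetical.

```python
# Illustrative sketch (not a real NSX interface): policy rules keyed to
# application and user identity rather than IPs and ports, evaluated close
# to the workload so disallowed packets never flow.

RULES = [
    {"app": "EHR",    "allowed_users": {"clinicians"},             "internet": False},
    {"app": "Portal", "allowed_users": {"clinicians", "patients"}, "internet": True},
]

def allow(app, user_group, destination):
    """Return True if the packet should be forwarded, False if dropped."""
    for rule in RULES:
        if rule["app"] == app:
            if user_group not in rule["allowed_users"]:
                return False               # access granted to users, not addresses
            if destination == "internet" and not rule["internet"]:
                return False               # dropped at the hypervisor, never hits the edge
            return True
    return False                           # default deny for unknown applications

print(allow("EHR", "clinicians", "internal"))   # True
print(allow("EHR", "patients", "internal"))     # False
print(allow("EHR", "clinicians", "internet"))   # False
```

Because the rule travels with the workload rather than living in edge gear, it holds wherever the workload runs — on any host in the datacenter, or later in the Cloud.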

End User Computing and Mobility

The popular way to deliver applications in Healthcare has the same complexity issues as the rest of the datacenter for the same reasons: it leverages expensive compute and storage. Capital costs to deliver SAN-attached End User Computing infrastructure are frequently upwards of $500 per user. A modern Always-On Point of Care infrastructure can deliver a superior Clinical Experience for capital costs under $250 per user. The operating efficiencies and flexibility offer tremendous value beyond that, but the capital costs are significant and impossible to ignore.

Path to the Cloud: Act Now to Realize the Savings and Prepare for the Future

CIOs at CHIME repeat that exiting the datacenter business is an objective; it is only a question of when, and that transition will take time. With that in mind, there are two realizable short term objectives: invest in the solutions that lower on premise capital and operating cost, and build the operational excellence required to effect a seamless transition to the Cloud when the time comes.

SDDC is the means to deliver on those objectives. Leveraging software-defined storage and virtual networking yields compelling savings in storage, compute, and networking. Beyond that, the platform is designed to loosely couple your on premise datacenter with Public Cloud providers, seamlessly migrating workloads along with their operating and security policies with minimal interruption, sometimes none at all. Imagine it: a stretched datacenter with policies defined in software and implemented both within the walls of your datacenter and in your portion of a Cloud provider. Administrative control remains with infrastructure and application owners, allowing free choice of runtime and the smoothest possible transition.

This is where we are all headed, and the technologies are in use now, today. We can get you there, too.


Automating Healthcare

Think back to what got you really excited about technology. Why do you do what you do? What is your defining moment in IT? If you have been in the industry for a while, hopefully that is a fond memory, and you have built on it to make some amazing things happen. Something we are always asking here at VMware Healthcare is: what can we do to make patient care better, and how can IT become a partner to the providers?

Defining Automation in the Healthcare World

So in the world of Healthcare, what do we mean when we talk about Automation? In most cases we are certainly not going to allow end users, in this case Doctors, to provision servers. Automation for the Healthcare environment typically means one of two things: Standardization for IT or Self Service for Application Administrators.

Standardization for IT:

As a former IT administrator/engineer I remember many times going through server build processes to hand off to the application teams. I would open my checklist, even on my 300th build, and go line by line checking off each as I completed the task. It got to the point where I memorized the checklist. I had dreams about the checklist. I hated the checklist. But, there was no forgiveness for the person who failed to build a server to the exacting standards we had agreed upon. Virtualization made this better. However, we just moved the checklist into the virtual world – the process didn’t change.

In the Healthcare environment, Automation enables IT to offload repetitive tasks, not to a junior admin or operator, but rather to the system. This then enables the existing teams to improve and focus on what is important – making the technology more available for the care providers. This also ensures that every system is built to the exacting standard every time, with no deviations other than those specified by the blueprint.
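The checklist-turned-blueprint idea can be sketched in code. This is a hypothetical illustration, not a vRealize blueprint format: the build is a declarative spec applied by a function, so every server comes out identical and no step can be forgotten at 2 a.m. on build number 300.

```python
# Hypothetical sketch: the old manual checklist expressed as a declarative
# blueprint applied by code. Field names and steps are illustrative only.

BLUEPRINT = {
    "os": "windows-2012r2",
    "cpu": 4,
    "memory_gb": 16,
    "agents": ["monitoring", "backup", "antivirus"],
    "join_domain": True,
}

def build_server(name, blueprint):
    """Apply every blueprint step in order; nothing can be skipped or forgotten."""
    server = {"name": name}
    server.update(blueprint)                 # base configuration from the spec
    server["built_steps"] = []
    for agent in blueprint["agents"]:
        server["built_steps"].append(f"install:{agent}")
    if blueprint["join_domain"]:
        server["built_steps"].append("join-domain")
    return server

s = build_server("app-db-01", BLUEPRINT)
print(s["built_steps"])
# ['install:monitoring', 'install:backup', 'install:antivirus', 'join-domain']
```

The 300th build is byte-for-byte the same as the first, and the only allowed deviations are the ones written into the blueprint itself.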

Self Service for Application Administrators:

Another use case for Automation is to speed up the application deployment process. In many Healthcare environments, the application administrator requests the system. The infrastructure team has to build the server, physical or virtual, provide network and storage services, and ensure the system is under management prior to handing it over. This whole process can be tied into a change management database, providing oversight and any controls needed can be given to the infrastructure team. Thus an application administrator still submits their request in a similar manner, and can receive their system(s) in far less time since the whole process can be automated.
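The self-service flow above — request, change record, automated build, handover — can be sketched as a small pipeline. This is a hedged illustration with hypothetical function names, not any vRealize or CMDB API; the point is that the oversight gate stays in place while everything downstream of approval runs without human hands.

```python
# Hypothetical sketch of the self-service provisioning flow described above.
# The change-record step is stubbed; in practice it would call the CMDB.

def open_change_record(request):
    """Open a change record for oversight; approval is stubbed for the sketch."""
    return {"id": "CHG-0001", "request": request, "approved": True}

def provision(request):
    change = open_change_record(request)
    if not change["approved"]:
        return None                      # the oversight gate: nothing runs unapproved
    # Once approved, the build, network, storage, and management steps run
    # automatically instead of queueing behind the infrastructure team.
    steps = ["build-vm", "attach-network", "attach-storage", "enroll-management"]
    return {"change": change["id"], "completed": steps, "handed_off": True}

result = provision({"app": "lab-results", "size": "medium"})
print(result["completed"])
```

The application administrator submits the same request as before; only the wait disappears.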

Components of Automation

It is critical to remember that if we are automating, the Software Defined Data Center becomes more important than ever. We can’t just virtualize compute and put Automation in front of it and expect everything to work. We need an all-in approach. We need to be able to quickly modify storage and networks through a policy driven approach, as well.

This does require a serious look at Virtual Networking and Software Defined Storage, as diagrammed below. While the physical infrastructure plays a critical role, control should move into a software-defined, policy-driven model in order to fully enable Automation.


Automate Everything? What Could Go Wrong With That?

So this all sounds great; where do I sign up? It is always good to look at the potential pitfalls of any technology. With Automation the benefits are many, but we do need to elevate the staff managing these technologies: there are blueprints to build, and we need to ensure that is done properly.

We also need to provide proper process governance. Automating a bad process simply gives you a faster bad process; the last thing we want is to take something from bad to worse. Any Automation project of any size should start with a review of business processes as well as IT processes. Automation should occur at the process level as well as the technology level.

Why Automate With VMware?

This is, to put it mildly, a massive undertaking. It really comes down to a question of interoperability. Looking at the larger picture, there will always be point products that solve individual needs; it becomes a question of scale. Making everything work together from a management and Automation perspective makes the VMware vRealize Cloud Management platform preferable to a large number of products from multiple vendors.

As Healthcare continues to evolve, as we are required to deal with static or shrinking budgets, we in Healthcare IT must continue to evolve and improve our processes. Automation should not be frightening, or dangerous, but rather an opportunity to move forward and provide a better overall experience to our users and their patients.


- Aaron Dumbrow, Systems Engineer

Too Much Stuff: The Problem of Legacy Data in Healthcare

My family recently moved across the country, and in the process we discovered something about ourselves: we have too much stuff. I’m not talking about things we use; rather, things that we store, which for the most part fall into two main categories: things I have to store (old financial records) and things my wife wants me to store (christening gowns, cherished toys my children have long since outgrown, tokens from our own childhoods, memories really). It occurred to me that my stuff (thank you, Mr. Carlin) isn’t really that different from the legacy stuff that I had to deal with in healthcare.

Legacy [stuff] systems are a problem for every healthcare organization in this country. How could they not be? In the years before ARRA and Meaningful Use, the medical record had become, for many, a hodge-podge of semi-connected systems and processes. If you checked into an ER, your medical record may have been electronic; but if you were admitted, it could have been on paper, unless you spent time in the ICU, in which case it could have been on yet another electronic system.

Matters get even more complicated when you consider that this data is regulated. Individual states require the maintenance of a patient’s legal medical record for between 7 and 28 years, depending upon the state and the age of the patient at the time of treatment. Oddly enough, the need doesn’t stop there. Remember your clinicians? They’ve been documenting SOAP notes for years, not just on paper but electronically as well, and they expect those notes to be available for future episodes of care.

According to the ONC, we’ve made massive progress at the provider level towards the adoption of highly integrated electronic medical records that meet the new federal standards. We’ve gone from a 13% adoption rate to over 56%, as of the latest published data. That’s fantastic progress, but in the wake of that transition we’ve left behind a virtual graveyard of systems with shards of critical data still clinging to their disk drives; systems that have to be maintained (personnel, equipment, licenses, support) for a long time and that stand squarely in the path of your clinical integration objectives and OpEx dreams.

How do we address this problem?

George suggested buying a bigger house, but as he points out, that rarely works. We need to address it head-on with a strategy that considers all of the risks and garners buy-in from in-house legal and compliance, as well as clinical oversight and IT. Much like my personal problem with “stuff,” healthcare organizations face the same dilemma: data you have to maintain for legal and compliance reasons, and data that your clinicians want you to maintain because it will one day be useful.

Stay tuned for more notes from me as we dig deeper and examine different alternatives to address this challenge and meet your organizational responsibilities: archival, common repository, and how tying these strategies into the right cloud might really address this problem once and for all. I look forward to the discussion.