
How VMware NSX Secures and Simplifies M&A – Part 1 of a Multi-Part Series

Why Use VMware NSX for Mergers and Acquisitions?

Mergers and acquisitions (M&A) are on the rise among healthcare providers, and the trend is not slowing down. As MACRA is defined over the next few years and reimbursements are modified, we will see continued pressure to consolidate healthcare organizations. There will be impact to staff, systems, and all aspects of the combined organizations. Mergers take considerable time to integrate systems and migrate patient data to a central system. What is the opportunity cost of that time to patient care, outcomes, and the bottom line?  Oftentimes clinical outcome improvement initiatives, capital investments, and operational efficiency improvements are delayed until leadership decides how to integrate the organizations. That plan then fully consumes resources and budgets and can take years to complete. These organizations are still working with a legacy, hardware-based approach and thought process.  Shifting to a software-defined architecture allows for a simpler and quicker integration of organizations and shared services.

What if there were a way to leverage all existing network and data center hardware across the acquiring and acquired organizations, maximize data center efficiency and utilization across locations, and maintain security policy and control? You can with VMware NSX!  NSX can be a significant part of your M&A plan and the run book for on-boarding ambulatory centers, doctors' offices, urgent care centers, and other healthcare services.  You are now moving to a streamlined, automated, policy-based methodology.

The use cases demanded by the business determine the features and services you need to enable with VMware NSX.  You can be selective in what steps you take, your on-boarding approach, and where you apply specific solution components. Once integrated into your environment, NSX software-defined capabilities are always available for insertion and can immediately serve business requests.  For example, in a virtual desktop deployment or a business-critical application architecture, you can enable only the security aspects with micro-segmentation, without enabling the NSX software-defined networking capabilities. You can also start with software-defined networking, but I will discuss why you may want to secure your workloads prior to connecting to unknown entities.


NSX Methodology for M&A

The following four categories lay out an approach for an M&A methodology supported by VMware NSX.  The order is important, since we are leveraging the NSX platform to reduce risk while providing the ability to move faster than other approaches.  I have listed an overview below and will elaborate on each category in future parts of the series.


Assured security enables organizations to adopt and operate distributed security models such as micro-segmentation across the data center, across organizations, and in the cloud. Most organizations do not have the details of application flows between the various application modules.  All flows, including East-West data center flows, need to be analyzed and categorized.  These categories will be the basis for security groups, where security modeling and object-based security policies are created to achieve a Zero Trust security model for your most critical EHR and PCI applications.


Secure the applications with micro-segmentation, where workloads are locked down in a Zero Trust model while you apply policies around the components of an application, specific workloads, a security quarantine tag, or an entire organization. Policy enforcement moves from static line entries to object-level, automated enforcement: you define the policy, and the workloads are dynamically protected. The data center is now protected inside the perimeter, where your own internal data center infrastructure and East-West traffic are secured.  This lessens the risk that a compromised system integrated during the M&A process can infect your existing systems.  As part of the M&A process you can extend this protection into the acquired data center prior to connecting it to your network, adding another level of security by micro-segmenting the acquired data center and gaining greater visibility into its environment.
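To make the object-based policy modeling concrete, here is a minimal Python sketch of zero-trust rule ordering: explicit allows for known application flows, followed by a default deny. The security-group naming and rule fields are hypothetical illustrations, not literal NSX API objects.

```python
# Illustrative sketch of object-based Zero Trust policy modeling.
# Security-group names and rule fields are hypothetical, not literal NSX objects.

def build_policy(app_name, allowed_flows):
    """Return an ordered rule list: explicit allows first, default deny last."""
    rules = []
    # Permit only the east-west flows each application tier actually needs.
    for src_tier, dst_tier, service in allowed_flows:
        rules.append({
            "source": f"sg-{app_name}-{src_tier}",
            "destination": f"sg-{app_name}-{dst_tier}",
            "service": service,
            "action": "allow",
        })
    # Zero Trust: any flow not explicitly allowed above is dropped.
    rules.append({
        "source": f"sg-{app_name}",
        "destination": "any",
        "service": "any",
        "action": "deny",
    })
    return rules

# A hypothetical three-tier EHR application.
ehr_policy = build_policy("ehr", [
    ("web", "app", "tcp/8443"),
    ("app", "db", "tcp/1433"),
])
```

Because the rules reference security groups rather than static IP addresses, a workload tagged into a group during on-boarding inherits the policy automatically.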


A secured data center allows you to connect to other data centers or offices with a reduced attack footprint while mitigating risk.  IP connectivity is all you need between hosts, wherever they are located.  There are multiple options to connect at Layer 2 or Layer 3, allowing workloads to move without requiring an IP address change.  In many cases there are legacy applications with hard-coded IP addresses, where you need to maintain or extend the same segment across data centers.  You can leverage the Internet while waiting on data center interconnects, without adding hardware.   You can even start to look at SD-WAN alternatives and reduce your monthly recurring WAN costs. Applications can move freely to where resources are available.  You can spin up all the components for the connection without additional hardware by using NSX Edge Services Gateways.  You are now turning a manual and tedious process into a run book that can be easily replicated using NSX.


You now have a securely connected environment, and you can use it with great efficiency. Workloads can be moved or migrated to the primary data center as virtual workloads or as a P2V migration. You can start to offer high availability at the application layer while delivering BC/DR services far more easily than before. You now have planned migration options as well as the ability to spin up on-demand security policies, distributed routing and switching, and software load balancing from a centralized management platform.  QA, test, and development platforms can co-exist on the same physical hosts using distributed firewalls, distributed routing, and NAT.  The barriers that segmented workloads between racks or data centers no longer exist, and your utilization of assets will increase.


Healthcare organizations can leverage NSX for increased speed, agility, and security, in addition to deep CapEx and OpEx savings. Approaching the M&A conundrum with software-defined networking and security provides significant savings over traditional physical integration approaches while ensuring application connectivity and data security.  Reducing risk, simplifying the operational component of mergers, and driving down costs are all powerful benefits of NSX.

Please also see Securing and Simplifying M&A with NSX by Blane Clark.

Value through commoditization – parallels across devices

Value is such an important topic in Healthcare these days. As we transition from fee for service to fee for outcomes and our technology and data requirements per patient increase, stretching the output of technology capital spend is critical to success: funds not spent today can be invested in expansion and new care solutions.

Most datacenter technologies are built on hardware that is common across vendors; the value comes from the software. Since hardware represents the largest portion of many solution investments, minimizing hardware cost is essential.

Inside every cell phone is a nearly identical processor. That’s what the chip and its associated peripherals were designed to do, so it emerged as the standard even across chip manufacturers. The differentiation in the market, whether you use Android, iOS, or Windows Mobile, is mostly the packaging and software. You cannot change the OS on your phone, but that is really a design and licensing choice rather than a technical limitation. The point is: the difference in the experience comes almost completely from the software.

Inside your laptop is an Intel processor and interface chips from a handful of common manufacturers. Whether you use a Mac or a Windows/Linux machine from a host of vendors, the underlying hardware is almost identical, and it is possible to change the OS: you can run OSX on non-Apple hardware, and you can run Windows or Linux on a Mac (though it will require some skills and time). The difference in the experience comes from the software.

I shopped for a new Wi-Fi router this morning. There are a dozen brands, but inside almost all home routers are similar processors and a small handful of wireless chipsets. Nearly all of the functional differentiation comes from the software. There is router software that works across different brands (DD-WRT, OpenWrt, Tomato), allowing you to change what capabilities are available on the same hardware. ASUS makes a router for $300, but TRENDnet makes one for $200 that has EXACTLY the same hardware specs. If I’m going to run the same software regardless of the brand, why would I pay more?

I ordered that $200 router, and it’s going to give me the most service per dollar.

The same thing is true in the datacenter: inside all of those branded servers is an Intel processor and peripherals from a handful of vendors. Whichever compute vendor you choose, once you put vSphere on it, like hardware of different brands delivers exactly the same outcomes. So if you want to maximize your output, you must choose the value hardware.

These trends are happening all over the datacenter:

  • Compute is a commodity: it delivers resources to applications, and like hardware delivers like performance.
  • Storage is emerging as a commodity: no matter what storage you buy, it delivers capacity and performance — space and speed, and there’s a clear trend to maximize terabytes and IOPS per dollar.
  • What about network ports? Ports deliver throughput for data. That’s it. They all follow codified standards of interoperability. Prices per port vary as much as 60% across vendors for equivalent performance. Is a network port really different vendor to vendor? Many have already concluded that a port is a port.

So where does this leave us? It leaves us tasked with doing the math.
For all technology decisions, there are value calculations we can do:

  • Compute: cost per VM
  • Storage: cost per terabyte, cost per IOPS
  • Network: cost per port
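As a sketch of the math, normalizing each capital cost to a per-unit figure makes vendors directly comparable. Every number below is hypothetical and purely for illustration; none of these figures come from the post.

```python
def cost_per_unit(total_cost, units):
    """Normalize a capital cost to a per-unit value for comparison."""
    return total_cost / units

# Hypothetical figures for illustration only.
compute_per_vm = cost_per_unit(250_000, 500)        # $ per VM
storage_per_tb = cost_per_unit(120_000, 300)        # $ per usable TB
storage_per_iops = cost_per_unit(120_000, 200_000)  # $ per IOPS
network_per_port = cost_per_unit(48_000, 960)       # $ per port
```

Run the same division for each competing quote and the value leader falls out of the spreadsheet on its own.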

Once we have those, then we can consider whether there are other decision criteria like reliability or support. And then we can make a truly informed value-driven decision to make the best use of our capital and operating dollars and allow the savings to be repurposed for our real pursuit: better care.

Monitoring Electronic Medical Records systems using Care Systems Analytics

Downtime in healthcare costs on average nearly $10,000 per minute.  Event correlation in Electronic Medical Records (EMR) systems prevents downtime, saving money and providing a better patient experience.  Healthcare IT professionals must understand the total cost of downtime for the EMR systems they support.  Measuring and understanding these costs justifies the investment in preventing outages, which in turn avoids costly troubleshooting and resolution.



Most EMR vendors provide real-time, or near real-time, monitoring of some type.  This works well when there is an administrator sitting at the screen monitoring, but it doesn’t provide the big picture.  Trending data in an EMR system helps to identify what normal operations look like, as well as what historical trends have been.  By understanding expected behavior, trending over longer periods of time using more data points, the healthcare IT professional is equipped to prevent problems.  Using Care Systems Analytics for this trending helps to prevent the false positives that tend to create alert fatigue, presenting only actual potential problems based on real data.  This helps application and infrastructure teams use the data to plan upgrades and project future capacity needs.
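The trending idea can be sketched in a few lines of Python: alert only when a new sample deviates from a long-term baseline, rather than on a fixed static threshold. This is a generic illustration of the concept, not Care Systems Analytics' actual algorithm, and the sample values are invented.

```python
from statistics import mean, stdev

def is_anomalous(history, latest, sigmas=3.0):
    """Flag the latest sample only if it strays far from the
    long-term baseline built from many historical data points."""
    baseline = mean(history)
    spread = stdev(history)
    return abs(latest - baseline) > sigmas * spread

# Hypothetical login-time samples (ms), hovering near 100.
samples = [95, 97, 99, 100, 101, 103, 98, 102, 96, 104] * 2
```

A sample near the baseline, such as 101 ms, is not flagged, while an outlier like 250 ms is; the more history the baseline includes, the fewer false positives reach the on-call staff.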


End to end analytics

Most technology monitoring tools are great at monitoring one thing.  In many healthcare IT organizations, each group is separated into “silos” based on discipline.  When there is an issue with an application, it often requires a representative from each discipline to look at their individual system for the problem, with the event correlation between systems done manually.  This makes for a tedious process: looking at different data and trying to determine a reason for the problem.  Care Systems Analytics gives a single end-to-end view of the system, from the application to the hardware.  Correlating the data and putting it into context allows each group to look at the same information, see the same problems, and quickly address them.  Root cause analysis can be performed by bringing application and infrastructure data together and displaying what events led to the problem.


Dashboards and reports for everyone

We live in a data-driven world.  Everything is generating information; we are at a point of near information overload.  Those in the healthcare IT industry responsible for maintaining the systems that store, manage, and secure that data have a compelling need for data to be relevant and aggregated.  In healthcare, minimizing the lag time of the technology being used enables providers to move more quickly, spend more time with patients, and drive a better patient experience.  By providing healthcare operations staff with reports on login times and workflows, and dashboards to alert on potential “hot spots” which could lead to outages or slowness, IT can become a part of the patient care process.   For application owners, especially the EMR system administrators, reporting and dashboards allow them to optimize the system.  This not only shows the value they bring to the team, but allows them to stay ahead of problems and demonstrate that they are complying with healthcare regulations on EMR usage.  For infrastructure teams, rather than taking calls about an application being slow, dashboards can cut research times, and reports can be customized for clinical units to gain confidence that their IT departments are providing them the best possible experience.



Event correlation in Electronic Medical Records (EMR) systems prevents downtime, saving money and providing a better patient experience.  Benjamin Franklin once said, “An ounce of prevention is worth a pound of cure.”  As EMR update projects are undertaken, or simply as the need is identified, healthcare IT departments should be checking with their local VMware healthcare team.  Get more details today on how Care Systems Analytics can provide the best possible care for patients, and the best possible experience for those providing that care.

Hyper-converged makes Fujifilm Synapse PACS a Snap

Fujifilm’s Chief Storage Architect Esteban Rubens is delivering a very compelling architecture to improve patient care by simplifying their imaging platform using hyper-converged infrastructure powered by VMware SDDC.

By using VxRail, new systems can be deployed in minutes, the ongoing maintenance is greatly simplified, and the total cost of the system is lower than alternative architectures.

VMworld 2016 Session Voting open May 3 – May 24

Working in healthcare IT, we often do things a little bit differently.  We may not create monstrous development environments, and our application deployments may happen less often on average than our counterparts’ in manufacturing, but our applications are often on the front lines of patient care.  Since joining VMware as a healthcare IT professional, I have had the privilege of speaking to many healthcare providers and payers, and one single theme comes to mind: it is all about the patient experience, about changing the way people interact with their healthcare provider, simplifying the process, and enabling the provider to spend more time with people and less time on administrative tasks.

Last year I wrote about the process of how a session goes from concept to what you see on the stage.  This year we have seen a significant increase in sessions being submitted with healthcare themes.  A small part of this has been due to the growth of our healthcare team here at VMware; we have welcomed some incredible talent in many areas.  A larger part has been growing interest in getting the message of what VMware is doing to help improve healthcare to a broader audience.  More submissions mean more conversations, more ideas being shared, more feedback, and more of us focusing on solving more problems.  The best part is seeing an increase in customer panel sessions where you, our customers, come and talk about the transformation of patient care.  These sessions are particularly interesting because they are more honest, and they often show the struggles and victories our customers are dealing with.

This year, we have around 40 healthcare-focused sessions in the voting pool.  Certainly there are other amazing speakers and worthy sessions, but for those of us in the healthcare field it is a perfect time to come talk to peers and learn more about what is working and what could be improved.  Share your stories and your feedback, meet the VMware healthcare team, and help us continue to improve the patient experience.  To vote, go to http://www.vmworld.com/uscatalog.jspa?search=healthcare.  Log in in the upper right-hand corner and vote for what you want to see.

Of course no VMworld would be complete without the Hands On Labs, and this year we are working to bring healthcare presence to every aspect of the conference.  Our Hands On Labs will be featuring a number of healthcare specific labs with our healthcare specialist team standing by to assist you with any questions.  Our teams have gone to great lengths to make these available as a part of the conference, and to give you the best opportunity to experience the technology hands on.  As always this is your conference, so vote for the sessions you think will be helpful, but then come visit us.  As presenters, there is no greater feeling than meeting people, answering questions, hearing stories, and making new friends.  If you have a story you think is compelling, reach out to your VMware team so we can get you into one of these sessions.  As always hoping to see you there, and make sure you vote.

Improve Reliability by Shrinking Datacenter Fault Domains with Software

Security and Reliability are the primary objectives of all Healthcare infrastructure. Reliability is a complex emergent property of all of the underlying required components. Today, Blade Chassis and Shared Storage represent large Fault Domains that are no longer necessary, and it’s time to take a good hard look at our datacenter design decisions, because we can make the Fault Domains smaller, resulting in more reliable infrastructure and more stable critical service delivery. Software-defined storage allows us to shrink the largest Fault Domains in the datacenter today: Blade Chassis and Shared Storage Frames.

I recently had a conversation with a Healthcare CTO who had twice experienced a significant outage that took time to recover from. As so many Healthcare customers do, they use Blades in Chassis connected by Fibre Channel to several Shared Storage Frames. They have twice experienced a Chassis failure that caused a significant service interruption. In each instance, vSphere functioned by design: seeing the loss of the Hosts, HA restarted the affected VMs on the remaining hosts. Each time, that activity overwhelmed the remaining Hosts and caused service delivery issues.

Most in the room agreed that these issues are best addressed by capacity and resource pool priorities to prevent the ill effects of resource scarcity in the future, but it got me thinking more about Fault Domains in general: Shared Storage and Blade Chassis are the biggest Fault Domains in the datacenter today, and we must look at architectures and technologies that allow us to reduce that exposure.

Fault Domains

A Fault Domain defines the maximum scope of an outage that can be caused by a component failure.

In your laptop or phone, the Fault Domain is essentially the entire device: the failure of the screen, the battery, the storage, the memory, even the buttons renders the device unusable for all practical purposes.

In our server infrastructure, Servers, Chassis, and Shared Storage are the Fault Domains to consider. (We’ll ignore power, cooling, and networking for this discussion; they are absolutely critical, but there is little that can be done beyond present best practices to make them more reliable.)


Server failures will happen in the datacenter as a result of component failures within them. That is one of the reasons vSphere is the platform solution of choice in the Datacenter: vSphere HA automatically restarts affected VMs and their services in the event of a server failure. As long as there is sufficient capacity on the remaining Servers, the scope of the impact is limited to the systems that were running on the server that failed; the recovery is automatic and quick.

Engineering around a single server failure requires service redundancy at a higher level, which is accomplished through a variety of methods beyond the scope of this discussion such as Load Balancing, Clustering, or VMware Fault Tolerance.

The Fault Domain of a single server is a generally accepted risk in the datacenter. Where applications allow us to efficiently engineer around it, we do, and where we cannot, VMware allows the service outage to be brief and limited to few services.


Because VMware required Fibre Channel and Shared Storage in the beginning, Blades arose as a way to reduce that cost by consolidating the Fibre Channel ports needed to connect at the Chassis, but that savings comes at a price: a misconfiguration or failure of a Chassis component can cause an interruption of 14 to 16 servers running 150 to 800 VMs. That is a lot of services and a lot of standby capacity necessary to restore those services.

Blade Chassis are large Fault Domains in the datacenter today. What do we get for this large scope of service impact? About 10% savings on compute capital, some operating efficiencies (mostly driver and firmware remediation) and an easier way to manage Fibre Channel connectivity to Shared Storage.

VMware HA can absolutely take care of all of the systems that were on the servers in the failed Chassis, but that requires compute policy management for high and low priority systems or an entire chassis of spare compute capacity. That’s either extra planning or a lot of idle compute and licensing.
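A quick back-of-the-envelope calculation shows why the size of the fault domain drives how much spare capacity must sit idle. The host counts below are hypothetical, chosen only to illustrate the arithmetic.

```python
def usable_fraction(total_hosts, fault_domain_size):
    """Fraction of hosts doing useful work once enough spare capacity
    is reserved to absorb the loss of one full fault domain."""
    return (total_hosts - fault_domain_size) / total_hosts

# 64 hosts deployed as four 16-blade chassis vs. 64 rack-mount servers.
chassis_usable = usable_fraction(64, 16)  # must reserve a whole chassis
rack_usable = usable_fraction(64, 1)      # must reserve a single server
```

With 16-blade chassis, only 75% of the estate can be committed to workloads if HA is to survive a chassis loss, versus roughly 98% when the fault domain is a single rack-mount server.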

The solution is simple: software-defined storage and rack-mount systems or smaller blade enclosures with fewer nodes shrinks the fault domain and reduces the impact of hardware component failures, reducing the spare capacity necessary to maintain service levels.

Shared Storage

Shared Storage from every vendor is designed to be redundant unto itself so that the failure of any component within does not affect the delivery of its storage services, but Shared Storage arrays can and do fail completely and catastrophically, resulting in painful and prolonged outages.

SANs are without a doubt the largest Fault Domain in the datacenter today, and just about every IT leader can tell you at least one very unpleasant story. We hear them all the time, and our support organization usually assists with the recovery efforts.

The solution is simple: Software-defined Storage shrinks the Fault Domain to a single server for hardware failure and a single vSphere cluster for catastrophic software failure. Since most customers build in eight- to twelve-node clusters, a software-defined storage cluster’s worst-case outage is smaller than that of a shared storage array or existing blade chassis.

Final Thoughts

Fault Domains matter, and VMware can shrink yours with VSAN clusters. We can also dramatically improve your performance and scalability, and lower your cost in the process. It’s the most value we can bring to your datacenter today: cloud-scale behavior and economics on premises in your datacenter.

Healthcare Security and Storage: Transformation Better Together

We recently polled our Healthcare CIOs as part of our CHIME membership. Their top two priorities for 2016 align perfectly with our security and storage offerings: ‘Security and Compliance’ and ‘Reducing Costs/Financial Restraints’.

The pivot from traditional infrastructure to software-defined security and storage addresses the top two priorities exquisitely, allowing Healthcare providers to reduce the attack surfaces leveraged by modern phishing and malware attacks as well as dramatically reducing the TCO per application and solving a host of other storage challenges in the process.

Healthcare Infrastructure Needs

  • Security: Patient Information and Systems
  • Reliability: Consistently available, self-healing, continuous between sites
  • Performance: Delays reduce productivity and affect satisfaction
  • Value: Persistent pressure, flat budgets despite growth

Healthcare needs an application platform that addresses its most profound infrastructure challenges: security, reliability, continuity, and value. In order to deliver those outcomes, we need new capabilities that take full advantage of your virtual infrastructure as the hub of information in your environment and apply policies where they matter most: to the applications themselves.

VMware marshals your most sensitive Healthcare data all day long: on its way to and from storage, and as it moves between systems inside and outside of our environment. That makes VMware the most efficient place to implement security and storage policy, and in doing so, we can simultaneously reduce the risk of the modern breach, add reliability and performance to the storage on which all applications depend, and reduce the total cost per application. And as we look to a future where the boundaries of the datacenter become increasingly flexible, and we look to leverage compute from a variety of cloud providers, we need to apply the policies to the applications directly, to the VMs themselves, to ensure that wherever the applications and data move, the security and performance policies move with them.

Security and storage are the two critical infrastructure components most in need of overhaul, and the technologies necessary to address so many of the modern challenges are already available, delivering an application platform with better security, greater reliability and performance, and lower total cost on the order of 30-50%. It’s powerful, it’s simple, it’s affordable, and it’s in production right now.


Recent headlines tell intimidating stories about ransomware holding patient data and critical systems hostage with encryption until a fee is paid to obtain the decryption key. Prior to that, stories of high profile health record breaches dominated. Breaches are an outcome of present architecture limitations, and what is missing from the headlines are recommendations that point to architectural solutions to reduce the attack surface of applications and systems that house PHI.

Breaches are most often carried out via phishing and malware that then exploit the typically absent internal boundaries between systems in an environment. An important element of any modern security strategy requires that we draw purposeful lines around our applications and systems to control what traffic is permitted on a very granular scale, but that requires a new security capability, a new place to effect policy. VMware is already in the path of that data and is the most logical place to implement that policy.

Traditional policies are based on IP and Port, and defining a complete list of permissible traffic in an environment using IPs and Ports is simply infeasible to build and manage. As a result, no one does it – that is why phishing and malware work.

Securing the internal environment requires policy be applied to applications directly. Since nearly all applications run inside VMware VMs, that is the best place to apply those policies. And because we manage the VMs, we can apply new kinds of security policy: we can apply sophisticated policy to groups of VMs by naming convention, group membership, tags, OS versions, etc. It’s an entirely new way to implement internal Zero Trust where traffic can only flow when specifically permitted. We can also apply policy to AD users and groups so that only traffic by authenticated users will flow.

This outcome of Zero Trust is the result of using NSX Distributed Firewall, a core feature of the VMware ESXi hypervisor that runs almost all of your critical applications, and this is a key component of a modern comprehensive security policy.


Nearly all VMware infrastructure today leverages shared storage and fibre channel.


Healthcare is one of the few industries that has lives at risk in the event of system failures, which makes the application platform absolutely critical to the delivery of care. With that in mind, we should focus on the elements of infrastructure most prone to issue and that can be simplified through innovation and transformation.

Almost all virtual infrastructure today leverages shared storage; it was an essential component of architecture that in itself has become a single point of failure whose risk requires significant capital to mitigate. The era of shared storage is rapidly coming to a close because it is complex and expensive: it accounts for roughly 50% of virtual infrastructure capital, and by its very nature, it is prone to failure with very high operating cost. The policies to manage storage are so distant from the applications themselves that when things go wrong, it takes three different skill-sets to fully surround the potential issues and resolve them.

With lives on the line, why would we allow that to continue if we have a better way?

By moving storage into the compute layer, we reduce complexity and cost while increasing reliability and performance.


VMware VSAN is the solution to the reliability challenge. It’s a core capability of the VMware hypervisor, and by moving the storage into the compute layer and allowing VMware to manage it all directly, we gain new redundancy options, new business continuity options, and reduced complexity, and we apply storage redundancy and QoS policies directly to the VMs. There is so much less to go wrong in this distributed storage model, and it is the way all virtual infrastructure will be built. Our customers who have transitioned to this design, some of whom have been operating this way for more than two years, tell us they cannot imagine running their infrastructure any other way.


By moving the storage up into the compute layer, VMware can make critical decisions about how to cache data for rapid repeated retrieval. The data still lands on the same spindles and flash as it would with a SAN, but by giving VMware control of the storage, we get great performance benefits and gain scale benefits by distributing the IOPS among the compute nodes.

A modern SAN is designed to scale, but the storage processors in a SAN become bottlenecks over time. Eventually, we reach a point where our applications are performing more transactions than our SAN can process, regardless of how much flash is present behind the processors. This creates significant growing pains in both capital spend and operating complexity.

Distributed storage, on the other hand, scales with you.  A modern compute node using NVMe flash as a cache drive can sustain ~120,000 IOPS.  As our applications grow and we add compute nodes, we are consistently adding IOPS, up to 120,000 per node.  This architecture by its very nature addresses the single greatest performance challenge of shared storage, and as flash becomes increasingly affordable, spindles are fading in favor of higher-IOPS infrastructure without the SAN bottlenecks that have plagued us for years, delivering three to ten times the IOPS presently available in customer environments.
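The linear-scaling claim is easy to sketch using the ~120,000 IOPS-per-node figure cited above, treated here as an assumed constant rather than a guaranteed benchmark.

```python
IOPS_PER_NODE = 120_000  # NVMe-cached node figure assumed from the text

def cluster_iops(nodes, per_node=IOPS_PER_NODE):
    """Aggregate IOPS grows with every node added, unlike a SAN whose
    storage processors cap throughput regardless of flash behind them."""
    return nodes * per_node

# Aggregate capability as the cluster grows from 4 to 8 to 12 nodes.
growth = [cluster_iops(n) for n in (4, 8, 12)]
```

Four nodes yield an aggregate 480,000 IOPS and twelve yield 1,440,000; the ceiling rises with each node instead of hitting a fixed storage-processor limit.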

VMware VSAN alleviates many of the performance challenges of storage architecture. The idea that there are no LUNs and no tiering is a radical concept for storage engineers, but it works far better. My customers who run hybrid configurations, using a combination of flash and spindles, have reported no performance challenges for nearly two years.

Value and Cost Control

Did I mention this improvement in reliability and performance also costs less? By simplifying the entire stack, we eliminate capital infrastructure and commodity hardware markup. When we consider the total capital cost of virtual infrastructure, our distributed storage solution costs about 30-50% less per VM than hosts with shared storage. This is driving a rapid evolution of infrastructure architecture that began over two years ago but has accelerated dramatically in the last six months. With the ubiquity of commoditized compute and local storage that comply with Ready Nodes (from Cisco, Dell, Fujitsu, Hitachi, HP, Huawei, Inspur, Lenovo, Quanta, Sugon, and SuperMicro), and the launch of VxRail/VxRack HCI from VCE/EMC, there are many excellent platform options to gain all of these benefits and realize substantial savings.

With these savings so very real and these benefits so very tangible, why would you build your infrastructure any other way?

The Solution: NSX and VSAN

VMware’s security and storage solutions are natural complements, addressing many of the infrastructure challenges in healthcare today. The savings versus your current model will fund the new capabilities, reduce the attack surface of applications, and resolve critical storage challenges, all as part of a single transformation event. Healthcare applications have never had such a secure, reliable, performant, and cost-effective platform.

Securing and Simplifying M&A with NSX

You have just been pulled into the planning process for the most recent M&A.  Hundreds of items need to be addressed… The first question is always “when will the new executive team have access to email and critical corporate business systems?”  Followed quickly by “how long will it take, and how much will it cost, to merge their systems into ours?”

M&A activities are complicated, fast-paced, and emotional times for organizations.  The temptation to move fast and merge the organizations often leads to technical and cultural missteps that threaten the success of the merger.  Financial pressures grow; costs are estimated, monitored, and managed, putting pressure on already overburdened IT shops to work their magic.  Being able to merge networks and systems in a timely and secure manner is a key part of controlling these costs.

The Risks
You have walked through and seen the IT operations of the newly acquired organization, but do you really know what is under the covers?  Sure, they appear to have some semblance of ITIL process and a halfway organized data center, but what about the discipline in day-to-day operations that is so important to maintaining a safe, secure, and clean IT environment?  Do you really know whether they have sound policies and procedures and solid security technologies and defenses, or whether they have educated their users on the shared security responsibility?  A lapse in any one of these areas, or a thousand others, could mean that their systems are compromised with malware or harbor undetected data breaches.  Do you really want to bring an unknown system directly onto your network and risk exposing your corporate data?

The ongoing financial and operational pressures of M&A can often put IT shops in the position of moving fast and risking the integrity of their existing systems.

Protecting with VMware NSX
The benefits of the Software Defined Data Center (SDDC) are many.  Defining, creating, and managing in software allows for more nimble and cost-effective operations than the traditional hardware-based approach to data center operations.  This holds true for networking and network security.  Inside your data center, NSX allows applications to be firewalled from each other (micro-segmentation), securing east-west traffic and allowing only authorized communications between internal systems.  NSX operates inside your already deployed VMware hypervisor, so the foundation is there today.  In addition to security, NSX also provides software-based load balancing, routing, and switching.


During M&A, the micro-segmentation approach provides additional benefits.  First, your own internal data center infrastructure and east-west traffic are secured.  This lessens the risk that a compromised system brought in during the M&A process can infect your existing systems.  Second, as part of the M&A work, you can extend this NSX protection into the acquired data center prior to connecting it to your network.  This allows the IT shop to add another level of security by micro-segmenting the acquired data center and gaining greater visibility into its infrastructure.  Applying NSX partner applications such as Trend Micro Deep Security provides additional peace of mind by adding intrusion detection and prevention scanning, log scanning, and automatic isolation of infected machines prior to merging the networks.
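Conceptually, a micro-segmentation policy is a default-deny allow-list evaluated for every east-west flow. The sketch below is illustrative Python, not NSX configuration syntax; the security group names, tiers, and ports are hypothetical examples:

```python
# Illustrative default-deny flow evaluation, modeling how micro-segmentation
# permits only explicitly authorized east-west traffic between workloads.
# Group names, tiers, and ports are hypothetical; this is not NSX syntax.

ALLOW_RULES = {
    ("ehr-web", "ehr-app", 8443),   # web tier -> app tier over HTTPS
    ("ehr-app", "ehr-db", 1433),    # app tier -> database over SQL
}

def is_allowed(src_group: str, dst_group: str, port: int) -> bool:
    """Default deny: a flow passes only if an explicit rule matches."""
    return (src_group, dst_group, port) in ALLOW_RULES

# Authorized tier-to-tier traffic is permitted...
assert is_allowed("ehr-web", "ehr-app", 8443)
# ...but a potentially compromised VM from an acquired data center cannot
# reach the database directly; there is no rule, so the flow is dropped.
assert not is_allowed("acquired-vm", "ehr-db", 1433)
```

This is why extending micro-segmentation into the acquired data center before connecting the networks is valuable: an unknown workload is denied by default rather than trusted by virtue of network reachability.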

Simpler Network Extension
After ensuring that your networks and systems are protected from the unknowns of the acquired data center, you will then have to determine how to combine the two IP address spaces.  One option is to assign new IP addresses to the acquired systems.  This is a labor-intensive and tricky operation that could cause major disruptions to patient care and business operations.  The IT team would need to touch each system, and the risk is that poorly engineered critical systems will break because of hard-coded IP addresses in systems and integrations.

With SDDC and VMware NSX, the network can be changed dynamically in software rather than at the machine level.  The acquired systems maintain their original IP addresses, and their traffic is encapsulated for transport across the new network.  This allows for simpler integration of systems into your network without the IT staff manually changing IP addresses and risking the availability of critical applications.
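The overlay idea can be illustrated conceptually: the workload’s original (inner) packet is wrapped in a new outer header for transport between tunnel endpoints, so the VM itself never sees an address change. This is a simplified sketch of encapsulation, not the actual VXLAN wire format, and all addresses shown are hypothetical:

```python
# Conceptual sketch of overlay encapsulation: the acquired VM keeps its
# original IP (inner header) while transport between hypervisors uses the
# combined network's addresses (outer header). Simplified illustration;
# not the real VXLAN frame layout. All IPs are made-up examples.

from dataclasses import dataclass, field

@dataclass
class Packet:
    src: str
    dst: str
    payload: object = field(default=None)

def encapsulate(inner: Packet, tunnel_src: str, tunnel_dst: str, vni: int) -> Packet:
    """Wrap the workload's packet in an outer header for the overlay."""
    return Packet(src=tunnel_src, dst=tunnel_dst,
                  payload={"vni": vni, "inner": inner})

def decapsulate(outer: Packet) -> Packet:
    """The far-side tunnel endpoint restores the original packet unchanged."""
    return outer.payload["inner"]

# The acquired system keeps 10.1.5.20 even though transport across the
# combined data center happens between hypervisor tunnel endpoints.
original = Packet(src="10.1.5.20", dst="10.1.5.30", payload="EHR query")
wrapped = encapsulate(original, "192.168.100.11", "192.168.100.12", vni=5001)
assert decapsulate(wrapped) == original   # inner packet survives untouched
```

Because the inner addresses travel intact inside the tunnel, hard-coded IPs in legacy clinical systems keep working while the surrounding transport network is renumbered in software.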

Efficient Operations and Better Outcomes
VMware NSX is already making a large impact on security and network operations across the healthcare industry.  Reducing risk, simplifying the operational component of mergers, and driving down costs are all powerful benefits of NSX in healthcare.  At the end of the day, healthcare is about managing health and improving outcomes for patients. Anything we can do to make operations simpler, more adaptable, and more cost-effective allows our organizations to focus on the most critical aspect of healthcare: the patients.

IT is the Foundation, not the Point of Healthcare Information Technology

Four key themes continue to resonate with healthcare provider CIOs in almost every meeting that I’ve had this year:

Empowered Clinicians – Right information, right device, right time

Engaged Patients – Enable patients to manage their own care

Support a Community – Scale to support a community not just a hospital

Secure Patient Information, Persistent Availability – Intrinsic security, stability, performance, and agility

Yet, as much as CIOs want to focus on these key areas, many cannot because they don’t have the right technical foundation in place to enable these complex, highly-integrated initiatives.

The last several years have seen most provider organizations implement systems and technology at a breakneck pace to support a wide range of initiatives, including internal projects, expansion and service line development, Federal initiatives, Meaningful Use, and the on-again, off-again, on-again ICD-10. They have seen their organizations stretched and operational costs swell, while systems complexity has increased exponentially. The healthcare organizations we work with are focusing on driving value out of more mature EHR deployments through analytics, and on driving down the operational costs that have crept up as many legacy systems and processes have not been retired at a pace commensurate with implementations. Meanwhile, healthcare security risks are increasing at a disproportionate rate compared to other industries nationwide. To make matters worse, many provider organizations do not have a solid infrastructure to rely upon as they begin these initiatives.

The transition to software-defined infrastructure through the widespread use of virtualization technology, from the data center to the desktop, is well underway and will continue to accelerate in the coming year, enabling provider organizations to manage spiraling infrastructure expenses as well as increasing healthcare security concerns. Most notably, healthcare IT will increasingly leverage a software-defined data center architecture, with network virtualization as its foundation, to deploy clinical systems as services that are continuously available, highly secure, and able to scale rapidly as the business dictates.

Most organizations have a single core EHR, but they also have multiple other clinical applications required to manage ancillary functions or specialties. These applications have to work together, but by their very nature they increase IT complexity and create security vulnerabilities. By creating a software-defined foundation through network virtualization, healthcare IT can deploy security that is native to the infrastructure and facilitate highly secure, micro-segmented east-west server-to-server communications between every clinical system. Network virtualization and the software-defined data center enable provider IT teams to deploy a Zero Trust network architecture, allowing only explicitly permitted communication between disparate systems. This enables an unparalleled level of secure clinical computing.

VMware sees the opportunity to break down the silos within healthcare IT and change how infrastructure is managed, enabling organizations to focus on patients, not IT. Software-defining IT enables clinical applications to be deployed as a service where security, monitoring, and management are fundamental to their delivery, not bolted on afterwards. This reduces the need to focus on managing physical devices and physical security. Instead, IT can focus on clinician performance, employing tools that actively manage overall system performance, as well as that of a single user, with the same toolset. After all, the point of healthcare information technology is not the technology; it is to enable the clinician to care for the patient in the most efficient, effective way possible, even to the point of keeping patients out of a traditional care setting.

Hands on with Secure Healthcare Desktops

Security breaches cost healthcare companies millions of dollars every year.  We continue to become more innovative with our security, but all too often we focus on the server and perimeter networks.  When it comes to the desktop, security is often a small piece of a larger design, something focused only on the operating system.  The best way to design a more secure desktop experience is to get hands-on experience with secure healthcare desktops. VMware Healthcare invites you to experience a secure desktop that improves security without sacrificing performance through our new Healthcare Secure Desktop hands-on lab.  Join us to look at Just-In-Time application deployment, identity-based dynamic firewall services, and compliance and regulatory data security, and see how VMware’s secure healthcare desktop can help you.

Just-In-Time Application Deployment

By abstracting the application from the virtual desktop image, VMware App Volumes enables stateless pools of virtual desktops.  Within this section of the lab, you will see how providing applications in real time helps providers while simplifying your desktop engineering and management process.

Identity Based Dynamic Firewall Services

Moving security as close to the user as possible allows threats to be stopped before they can propagate.  The identity-based dynamic firewall services portion of the lab demonstrates the delivery of dynamic access controls based on the logged-in user, even in a stateless virtual desktop infrastructure, adapting to changing requirements.
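The key difference from traditional firewalling is that rules follow the user’s identity rather than the desktop’s IP address, so the same stateless desktop receives different access per session. The sketch below illustrates the idea in plain Python; the roles, destinations, and ports are hypothetical examples, not lab or NSX configuration:

```python
# Illustrative sketch of identity-based firewall rules: access is keyed to
# the logged-in user's role rather than the desktop's network location, so
# a stateless virtual desktop gets per-session rules at login.
# Roles, destination systems, and ports are hypothetical examples.

ROLE_POLICY = {
    "nurse":       {("ehr-app", 8443)},                  # clinical charting only
    "radiologist": {("ehr-app", 8443), ("pacs", 104)},   # charting plus imaging
}

def session_allowed(role: str, dst: str, port: int) -> bool:
    """Evaluate a flow against the policy attached to the user's role."""
    return (dst, port) in ROLE_POLICY.get(role, set())

# Same desktop pool, different users, different effective firewall:
assert not session_allowed("nurse", "pacs", 104)         # imaging blocked
assert session_allowed("radiologist", "pacs", 104)       # imaging allowed
```

An unknown or logged-out identity falls through to an empty rule set, which again amounts to deny by default.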

Compliance and Regulatory Data Security

Security goes far beyond firewalls and applications.  Compliance monitoring and remediation of violations become far more important in the heavily regulated world of healthcare.  The final portion of the lab demonstrates a realistic response to policy violations, triggering automated actions that prevent data loss and compliance violations.

Albert Einstein said, “We cannot solve our problems with the same thinking we used when we created them.”  Security in healthcare is a growing problem, and solving it will require healthcare IT professionals to rethink architectures and test new and innovative ideas.  Get hands-on experience with secure healthcare desktops and prevent security incidents before they occur.  Take advantage of VMware Healthcare’s Hands-on Lab environment today, and learn how you can deploy secure healthcare desktops in your environment.