
Innovation by Value


Datacenter architectures are evolving everywhere: new designs leverage new technologies that cut across the technology silos of our organizational structures. The increasing use of software to deliver capabilities allows more rapid adoption through controlled experiments that lead quickly to success, proof, and deployment. Yet our efforts to innovate are stalled by routine, business as usual, comfort, and fear of changes that affect how things are done and who does them. The value delivered by new solutions merits our full attention, and it is only through value, and the measurement of it, that we can choose which technologies to employ.

Innovation is an iterative, constant process executed by people who thrive on improvement, and it raises many important questions. Who runs the experiments, and who understands the impact of success on the people in technology-specific roles? Where do the tools required to prove the solutions come from? Who must own the evaluation of Software Defined Networking technologies? The Network team? The Virtualization team? Who must evaluate Software Defined Storage? The Storage team? The Virtualization team? If we adopt it, who owns it? Where does additional headcount come from? What if we don’t need as many people?

Impact questions are stifling because they affect people, people we care about, and much innovation stalls when individuals and leaders see that they may need new skills, a new organization, or worse: may not be needed at all. This behavior is understandable and requires delicate attention, but we must all fight the urge to protect what we did yesterday and continue in the same manner without review. It may be that yesterday’s methods are still the best for the problems at hand, but only an evaluation of value can tell us. The inputs to our decisions are constantly changing, and unless we revisit the reasons we chose the current model, we cannot know whether it is still the best.

Customer Story: Innovation is Hard

I met with a large healthcare customer recently, an innovator in many ways: they have invested in automation for reliable provisioning; they have invested in software defined networking to provide agility and let their staff scale; they are exploring software defined storage to reduce their largest capital expenditure; they are producing software and solutions for sale using a variety of self-service solutions; they are exploring their End User Computing options; and they want to explore microsegmentation because of its impact on their security profile. This is great: they are realizing value from many new solutions and planning to understand the value of more.

The challenges come in operationalizing solutions that cut across the organization to increase adoption of what individual teams have done. Automation was implemented by the Infrastructure team. The software defined networking effort is being driven by the Networking team and is not integrated into the Infrastructure team’s automation solution. The software development group leverages cloud resources that are provisioned by individuals and exist outside the scope and visibility of the core Infrastructure team. They have not changed their EUC strategy because their comparisons have used older architectures and associated costs. Conversations about using the Automation solution to provision the resources needed by the development organization stalled over concerns about ownership and headcount. Automation is not available as a self-service resource to enable rapid innovation for people with very short term needs (one-third of all) due to fears of abuse. The integration of Automation and SDN hasn’t happened for many of the same reasons, and there is further question about whether their preferred technology is truly viable given how difficult it has been to implement so far.

This is an innovative customer, exploring new technologies with an appetite to adopt them, that still struggles with transformation, and they are not alone. We often talk about people, process, and technology as the three elements of transformation. In so many cases and in so many ways, technology is the easy part.

Value is the Metric and the Answer

The way forward must be an objective assessment of value. If the gain from making the development organization measurably more productive outweighs the cost of additional investment in Automation, that is the correct decision. If SDN can scale the staff sufficiently to justify the investment in the technology and its integration with the Automation platform, then it should be done. If a new EUC architecture can increase clinical productivity and/or lower the total cost per user over an analysis period, it must be piloted, validated, and selected. Business as Usual has a set of costs. Plans for the near term have known costs. Anything else we evaluate as an alternative must be weighed against those known and expected costs.
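To make the weighing concrete, here is a minimal sketch of that comparison in Python. Every figure in it is a hypothetical placeholder, not data from any customer engagement.

```python
# Minimal sketch: weigh an alternative against business as usual (BAU)
# over an analysis period.  All figures are hypothetical placeholders.

def net_value(bau_annual_cost, alt_annual_cost, alt_investment,
              annual_benefit, years):
    """Return the alternative's net value versus BAU over `years`."""
    bau_total = bau_annual_cost * years
    alt_total = alt_investment + alt_annual_cost * years - annual_benefit * years
    return bau_total - alt_total  # positive means the alternative wins

# Example: a $500k automation investment with slightly higher run costs,
# offset by $250k/yr in measured developer productivity gains.
print(net_value(bau_annual_cost=1_200_000,
                alt_annual_cost=1_250_000,
                alt_investment=500_000,
                annual_benefit=250_000,
                years=3))  # 100000 -> the investment is justified
```

The point is not the arithmetic but the discipline: the alternative is adopted only when its net value against business as usual is positive.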

For the large customer I mentioned, we are going to partner with them on the analysis of SDN alternatives to see if we have a more valuable option. We are going to do a more detailed assessment of their Clinical Environment and EUC infrastructure to determine if our model will deliver better results at lower cost. We are going to evaluate whether we can have a marked impact on development innovation with self-service requests and appropriate resources to support them. We are going to help them show substantial cost savings and performance improvements using software defined storage. In sum, we’re going to help them model and prove better methods that will lead to a higher functioning infrastructure and increased productivity for those who rely upon it.

These partnerships provide terrific value. For our customers, they provide better decision support based on facts and analyses using their data, their cost models, and their assumptions. Innovation, and the value derived from it, is how IT has leapt to the forefront of competitive differentiation in so many industries, and it will do the same for more.

Note: Further discussion of Innovation and Org Structure can be found here.

Making application delivery just a little more friendly

Application delivery in the healthcare world is the reason for healthcare IT.  At the end of the day, if the applications weren’t here, the infrastructure wouldn’t be of much value.  By augmenting existing Citrix application delivery, we can improve the provider and user experience and, through it, drive more effective patient outcomes.

Just in time application delivery

One of the biggest challenges we see with most of our healthcare customers is maintaining applications once they are in place.  Firmware updates, operating system patches, and application upgrades all force downtime or significant planning.  Rolling updates can mitigate some of this, but they are an incredibly manual process.  Adding capacity requires building a system, installing the operating system, adding the application, testing, and deploying, then adding the new system to the patch/update cycle.  No matter how skilled you are at this process, it is incredibly time consuming.  It isn’t a “people or process” problem so much as a technology problem.

Healthcare applications need to become more flexible.  By using a just in time application delivery model, we can simplify the deployment process: package the application once and deploy it to an entire farm of servers.  Want more servers?  Add the entitlement to each server, and the application is automatically pushed.  Because Citrix is prevalent in healthcare application presentation, this creates an opportunity to improve the deployment of applications to large Citrix farms, enabling a unified approach to application packaging and delivery at the presentation layer.  It then becomes about abstracting the application from the operating system, much like we abstracted the operating system from the hardware when we began virtualizing servers.
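To illustrate the model, here is a small conceptual sketch in Python.  It captures the idea only (package once, entitle a server, delivery happens automatically); the class names and server names are invented, and this is not any product’s actual API.

```python
# Conceptual sketch of just-in-time delivery: the app is packaged once,
# and entitling a server causes the package to be delivered automatically.
# This models the idea only; it is not any product's actual API.

class AppPackage:
    def __init__(self, name, version):
        self.name, self.version = name, version

class Farm:
    def __init__(self):
        self.entitlements = {}  # server -> set of packages

    def entitle(self, server, package):
        """Grant an entitlement; delivery happens as a side effect."""
        self.entitlements.setdefault(server, set()).add(package)
        self.push(server, package)

    def push(self, server, package):
        # In a real system this would mount or stream the package to the host.
        print(f"{package.name} {package.version} -> {server}")

farm = Farm()
ehr = AppPackage("EHR-Client", "2015.1")
for server in ["xenapp-01", "xenapp-02"]:   # package once, deploy to the farm
    farm.entitle(server, ehr)
farm.entitle("xenapp-03", ehr)              # want more servers? just entitle them
```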


Application Catalogs

The rise of consumer devices, smartphones, tablets and the like has led to the expectation that applications be delivered on demand, through a self-service portal.  Technically proficient users are even less willing to go through the pain of application installs.

In today’s modern healthcare enterprise, application delivery is not a single tool.  We are at a transition point in how applications are written and how they are delivered.  Whether it is a Software as a Service model or a full-on desktop, healthcare providers need a single place to go, with a unified experience across their devices and applications.  The flexibility of the delivery methods provides a strategy, not a one-size-fits-all answer: whether you are delivering a full virtual desktop, a SaaS application, or a Citrix XenApp application, everything comes through the same portal with a similar look and feel.


User Environment Management

For healthcare providers, consistency is essential.  Sean Kelly, MD, a practicing ED doctor at Beth Israel, talks about a doctor assessing a stroke victim and the considerations that go into it.  What would happen if an application icon were moved?  How much time would that cost, and how much additional brain function would be lost?  “Evaluating a stroke patient in the ER is highly time dependent,” said Kelly.  “In order to treat a patient with thrombolytics (“clot-buster drugs”), a clinician must rapidly access prior medical history for any contra-indications, order a CT scan to rule out bleeding and review it on PACS, consult neurology, perform an NIH stroke scale and potentially treat blood pressure or other co-morbidities.  Good technology doesn’t just save clinicians time, but also prevents cognitive disruption and contributes to patient safety and better outcomes.”


Application Monitoring

Have you had a user call and complain about an application being slow?  Healthcare is fairly unique in the application space because we tend to deliver a large Electronic Health Records application with a number of attached applications surrounding it.  This becomes a larger issue when we consider the infrastructure components and the application delivery method.  How can we tie those events together?

Having a single source of truth for the entire application stack is critical.  When we can tie together the infrastructure, the Citrix XenApp performance data, and the application data, the provider experience gets better, downtime is reduced, and performance becomes predictable.  With predictive analytics, problems can be resolved before end users ever see them.
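As a rough illustration of the idea, the sketch below joins hypothetical infrastructure and Citrix session metrics on a shared timestamp and flags logon times that drift above a simple statistical baseline.  Real monitoring suites use far richer analytics; all metric names and numbers here are invented.

```python
# Minimal sketch: correlate infrastructure and Citrix session metrics on a
# shared timestamp and flag logon times above a rolling baseline, surfacing
# a problem before users call.  Metric names and values are illustrative.

from statistics import mean, stdev

infra  = {"10:00": 35, "10:05": 38, "10:10": 72}   # datastore latency, ms
logons = {"10:00": 21, "10:05": 23, "10:10": 58}   # XenApp logon time, s

history = [20, 22, 21, 23, 19, 22]                 # recent logon baseline, s
threshold = mean(history) + 3 * stdev(history)

for t in sorted(logons):
    if logons[t] > threshold:
        # Correlate with the infrastructure metric at the same timestamp.
        print(f"{t}: slow logons ({logons[t]}s) while storage latency "
              f"was {infra[t]}ms; investigate the datastore, not the app")
```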


Healthcare IT is a changing space.  We are continuing to make improvements and drive innovations in patient care and provider satisfaction.  More and more, technology is not just a part of healthcare; it is the critical success factor in the patient experience.  From the moment patients walk into the office, they are affected by applications and by how we manage and provide those applications.  A positive user experience leads to higher satisfaction and improved care.  By improving the existing Citrix application delivery model, we can deliver a better patient experience with the environments we already have.

VMworld 2015 Healthcare Sessions: Voting is open.

For many of us, presenting in front of large groups does not come easy.  Most of us, at least those of us who came up through the technical ranks, would be just as happy at a keyboard or whiteboard, designing, building, and bringing fun technology to life.  Many of us have realized though that we have interesting stories to share, interesting experiences, and interesting ideas.  For those of us who believe that VMware technologies are among the most fascinating and innovative, the opportunity to present at VMworld is an accomplishment, and a great honor.

VMworld session abstracts are submitted several months before the conference.  The process is rather time consuming and challenging for first timers, but rewarding if selected.  The abstracts are reviewed by experts and then voted on by the community, which adds value to the sessions since you have a say in the process.  If the session is selected, the presentation is written, reviewed, rehearsed, and finally presented.  The process is the same for customers, employees, and partners alike: the attendees decide what sounds interesting.

This year we are seeing a number of healthcare sessions, over 30 in the voting pool.  These range from customer panels to individual submissions on all sorts of topics related specifically to healthcare.  I point this out because they sit in a large pool of sessions on many topics with many different focuses.  We are seeing a great response from the community to our healthcare team and to the new products focused on managing healthcare environments.  If you plan to be at VMworld, please check out the healthcare sessions and vote for any you find interesting, so we can get more focus on what we do and put more emphasis on what you want to hear about.

To vote, go to https://vmworld2015.lanyonevents.com/scheduler/publicVoting.do.  You will need a VMworld account; filter by healthcare and vote for as many healthcare sessions as you want.  Make sure you come visit us at the sessions and let us know what you want to hear about.  This is your conference; we want healthcare to be a much bigger part of VMworld, and we want to see you there.

New Research Highlights Clinical Benefits of Virtual Desktops

By James Millington, group product line manager, healthcare solutions, End-User Computing, VMware

New research carried out by HIMSS Analytics at University Hospitals on behalf of VMware has highlighted six key areas spanning care provider, patient, and IT benefits following the hospital system’s recent VMware Horizon virtual desktop deployment.

HIMSS Analytics interviewed care providers and IT professionals at University Hospitals of Cleveland in order to understand the quantitative ROI of VDI. What they uncovered were numerous benefits including:

  • Workflow improvements that resulted in improved mobility for care providers
  • Substantial time savings from reduced login and application load times
  • More time with patients by healthcare providers

HIMSS Analytics concluded these benefits support the Department of Health and Human Services’ aim of improving the patient experience of care, improving the health of populations, and reducing the per capita cost of healthcare.

HIMSS Analytics will host a webinar to discuss the detailed findings of this report on Friday, May 15 at 10 am PT/1 pm ET. The webinar will include several healthcare experts such as:

  • Sean O’Brien, president, Axixe
  • Sean Kelly, assistant professor of Medicine, Harvard University
  • Frank Nydam, chief technology officer, healthcare solutions, VMware

To hear more about how virtual desktop technology is helping to transform clinical care in the real world, sign up to attend the webinar and download the research paper.


How to Build Security Infrastructure for the Threats of Our Time

As discussed in my previous post, Security was top of mind for CIOs at HIMSS this year, and the progressive among them are looking to the Financial sector for risk reduction strategies. Here we’ll discuss how the openness of the internal environment makes end users the vectors of choice for attackers, and how Microsegmentation mitigates this risk.

Roughly 100 Million records were stolen in the Healthcare industry last year, and at a conservative cleanup cost of $100 per record, that is a minimum cost of $10 Billion to the industry as a whole. The Financial industry has made a substantial investment in a new security model already because they too are targets. According to the FBI, health records are worth a minimum of $50 per record, five to ten times that of a financial record, which makes Healthcare the new target of choice for organized data thieves.

How Are We Doing Network Security Today?

The security model we employ today relies heavily on the border where our network meets the Internet. We have set aside Demilitarized Zones with Edge Firewalls and inspection for those systems that are accessed from outside. These Edge Firewall measures are designed to protect the systems themselves, and they are effective at preventing system level attacks. Unfortunately, the vectors of attack have evolved beyond this model’s ability to protect us. Inside the environment is generally considered safe, and most internal systems can talk to just about any other internal system, workstations included. This internal openness is being exploited by organized attackers.

Workstations? Why Are We Talking About Workstations?

Workstations are the new target for attack because workstations run programs at the behest of users, and users can be deceived into running undesirable code. Couple that with sophisticated organizations who are writing custom malware designed to evade detection, and we see quite quickly the need for more granular security. The recent breaches are cautionary tales: exploits are deliberate, targeted, and extremely difficult to detect in today’s complex environments without a change in strategy.

It is believed that a large recent breach, which led to roughly 80 Million customer records being stolen, was initiated via emails to employees that linked to malware on false company sites. Once installed, it appears the malware carried out system-level exploits, gained elevated user privileges, and accessed data directly. The first detection came from observing suspicious database queries, possibly because they were causing performance issues. Similar facts are surfacing in another case.

I recently overheard a conversation at a customer about suspicious Internet traffic originating from malware on a user workstation. It was detected at the edge, but there was no way to know what internal communication the malware was performing. This is happening all around us, and we are ill-equipped to prevent it without a new security model. This is the new reality: targeted exploits are a fact of life, and the only successful strategy is one of mitigation and detection.

How Do We Reduce This Risk?

NSX policies govern applications and users, allowing granular internal communications.

The new strategy is called Microsegmentation: granular security policy, and it requires that we understand the traffic in a new way. It’s not enough to understand the IP Address and Port of internal network traffic, because building policy that way is unsustainable: too many rules, impossible to maintain. We need to understand the traffic and apply policy at a higher level: users and applications.

As the datacenter has become more heavily virtualized, upwards of 90% for many of our customers, the virtual infrastructure sees orders of magnitude more traffic than the edge infrastructure where monitoring is typically implemented. The virtual infrastructure also understands higher level components: the systems from which traffic originates, groups of systems, their locations, and other information that we can attach. We can also understand the user that is initiating a communication.

Knowledge of systems, applications, users, and other information allows us to build much more sophisticated policy. We can create a rule that says only Finance employees can even initiate communication with a Finance system, and only for application level communication, never for system level. We can create a rule that limits administrative communication to select administrators or administrative workstations, so that no regular end user workstation can make system calls of any kind to the datacenter. For virtualized desktops, we can create a rule that prevents any virtual desktop from talking to any other, eliminating a key vector of internal propagation. We can create active policies that restrict or log communications when suspicious activity is detected, and we can pass suspicious traffic to Intrusion Detection and Mitigation solutions for further inspection and alerting.
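As a hypothetical illustration of what policy at this level looks like, the sketch below expresses rules like those above as data keyed to users, groups, and services rather than raw IPs and ports. The rule format is invented for clarity; it is not NSX’s actual rule syntax.

```python
# Illustrative sketch of policy expressed against users, groups, and
# services rather than raw IPs and ports.  The rule shapes are invented
# for illustration; this is not the NSX rule syntax.

RULES = [
    # Only Finance users may reach Finance systems, application traffic only.
    {"src": "Finance-Users", "dst": "Finance-App", "service": "app", "action": "allow"},
    # Administrative protocols only from designated admin workstations.
    {"src": "Admin-Workstations", "dst": "Datacenter", "service": "ssh", "action": "allow"},
    # Virtual desktops may never talk to each other.
    {"src": "Virtual-Desktops", "dst": "Virtual-Desktops", "service": "any", "action": "block"},
    # Default deny for everything else.
    {"src": "any", "dst": "any", "service": "any", "action": "block"},
]

def evaluate(src, dst, service, groups):
    """First matching rule wins, exactly like a firewall rule table."""
    for r in RULES:
        if ((r["src"] == "any" or r["src"] in groups[src]) and
                (r["dst"] == "any" or r["dst"] in groups[dst]) and
                r["service"] in ("any", service)):
            return r["action"]

groups = {"vdi-042": {"Virtual-Desktops"}, "vdi-017": {"Virtual-Desktops"}}
print(evaluate("vdi-042", "vdi-017", "smb", groups))  # block: no lateral movement
```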

This is the essence of Microsegmentation using the VMware NSX Distributed Firewall: policies applied to objects, enabling sustainable, granular rules that are easy to build and maintain. It’s a powerful new way to effect modern security policy and mitigate the risks of our time. It’s a framework into which the rest of your security solutions can attach and interact in new, intelligent ways. The Financial industry has already implemented these strategies to significantly limit traffic internally, and Healthcare is exploring them in earnest for the same reasons.

Final Thoughts

The recent exploits tell us an important story: data breaches are possible because of our current security model and its reliance on edge security, and they will continue unless fundamental change is implemented. The best way to reduce our risk is with a new strategy that restricts and studies internal traffic. The benefits realized by the Financial industry serve as great examples, and it’s time for Healthcare to do the same, lest we see further headlines and additional costs at a time when the industry can least afford them. We can help you get there. We have the tools and the experience, and we want you all to be successful rising to meet the challenges of today…and tomorrow.

How to Secure Healthcare Mobile App Communications

Security, Mobility, and Physician Experience were top of mind at HIMSS15 in Chicago this week: no one wants to be in a data breach headline, and BYOD programs are proliferating, bringing new challenges and risk. It is perfect timing that we just announced a new way to ensure secure access from mobile device applications all the way to the internal systems they access.

How do mobile devices connect now?

It depends on the device and who owns it:

  • Some are fully managed and Hospital owned, so we can use a Mobile Device Management solution to control everything: all communication is encrypted from the device to a VPN gateway at the edge of our network, and we have total control over every app and capability.
  • More and more are BYOD, so we can’t fully lock them down and secure all traffic, but we can manage aspects of them via MDM. We can then wrap the Hospital apps in a management layer that forces secure communication from the app to a proxy at the edge of our network.

The latter model is emerging as the more popular, and it is a point of ingress to our datacenter. Beyond the secure tunnel that terminates at the edge of our network, there are seldom restrictions that would prevent the app from talking to things other than what was intended. This is the broader problem of internal network security discussed in Aaron Dumbrow’s post a few days ago.

How can we do this better?

Software defined networking via VMware NSX and AirWatch Enterprise Mobility Management are coming together in a new way. By combining what AirWatch does to secure app communication to the edge of our network with NSX controlling the path inside the network, we create a secure communications path from the app on any mobile device all the way to the application being accessed, with no possibility of it reaching anything other than what we define in policy.

This is great news for Healthcare Mobility strategies in an age of breaches where new devices and apps bring new risks.

How do we learn more?

This solution will be unveiled at RSA 2015 next week. Read more about it here and watch a video that explains it very well.

Software Defined Security

In light of the recent increase in security breaches across a number of industries, it is a good time to revisit the topic of security and the role of software defined networking in this model.

Healthcare as an industry is heavily regulated, with extreme penalties and a high personal cost of failure: lost patient trust, damaged provider reputation, and long lasting, embarrassing headlines. As the saying goes, an ounce of prevention is worth a pound of cure. In many cases, the cost of preventing a breach is so much lower that the ratio is even more extreme.

The Traditional Security Model

One of the lessons we have learned from past attacks is that the weakest link in any security model is the people. No matter how good the security, all it takes is one compromised password, one path in through the firewall, and in many environments there is little to stop a would-be attacker from having free rein.

For a number of years, we have written our security models around our applications. In healthcare we often deal with an Electronic Health Records application and many ancillary applications around it, which makes securing the individual ancillary applications more challenging. In the traditional model, we have isolated systems by their function, as described below.


In the Web/Application/Database model, we typically put the web server(s) out front, usually in a DMZ; this tier is generally the most scalable. Behind another firewall, the application server serves up the front end application for the users; it is generally more powerful and still scalable, though often less so than the web tier. Behind yet another firewall sits the database server, typically the most powerful and the most difficult to scale. This often requires several hardware firewalls, or a larger firewall with multiple modules or line cards. It is a highly secure model for the most part, but it can also be quite costly.

Virtual LANs, or VLANs, are often used as a lesser form of security: traffic can be forced to a router or firewall through the use of the VLAN tag, separating the different levels of traffic, and stateful packet inspection can be done at that point. This is a potentially less expensive model, but it increases latency, albeit very minimally, and increases network traffic back to the router or firewall. It also increases design complexity, introducing a number of new challenges.

The NSX Security Model

Interestingly enough, the NSX security model doesn’t materially change the logical design. We still see the same concepts being used, with multiple firewalls and potentially multiple layer 3 networks as needed. VLANs can be used in a similar fashion, but with reduced design complexity and cost, and with increased security.


Physically, however, this is a significant change. From a performance perspective, this all happens within the virtual environment. Much of the East-West traffic can be handled without ever leaving the virtual environment, and at line speed. Firewall communication is no longer a bottleneck, since packets can be inspected as they move between servers. This can help remove the objections of vendors or application administrators.

From a security perspective, we are suddenly able to provide a true zero trust model. We can literally inspect packets flowing between neighboring servers on the same subnet and VLAN. This lets us assume that every packet is potentially compromised and look at it when it leaves the source, and again when it arrives at the destination. It is all done at the virtual hardware layer, removing the concern that a compromised guest OS could disable the security layer.
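Here is a tiny sketch of the zero trust idea, with made-up tier names and ports: only explicitly named flows pass, and everything else is denied, even between neighbors on the same subnet and VLAN.

```python
# A sketch of zero trust, tier-to-tier policy: only the named flows are
# allowed, even between VMs on the same subnet and VLAN.  Tier names and
# ports are illustrative placeholders.

ALLOWED_FLOWS = {
    ("web", "app"): {8443},   # web tier may call the application tier
    ("app", "db"):  {1433},   # app tier may query the database tier
}

def permit(src_tier, dst_tier, port):
    """Default deny: a flow passes only if explicitly allowed."""
    return port in ALLOWED_FLOWS.get((src_tier, dst_tier), set())

print(permit("web", "app", 8443))  # True
print(permit("web", "db", 1433))   # False: web may never reach the database
print(permit("db", "db", 22))      # False: even intra-tier traffic is checked
```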

The case for NSX in Healthcare

When we talk to our customers about NSX, the objections often come from the perimeter firewall team or the network team. A common concern is that NSX is looking to displace existing firewalls, switches, and other network devices. We also hear the network and security teams’ concern that NSX takes away their visibility into the virtual environment; somehow, by implementing NSX, they are losing something. This is simply not the case. One of the driving principles behind NSX is that it brings the network and security teams to the table in the virtualization discussion. No longer are we just requesting ports and VLANs. No longer can we afford to assume that things are secure because they are in the virtual environment. With the NSX model, network and security teams must be involved in designing, implementing, and managing the virtual environment. Since we are expanding the functionality of the virtual network and increasing visibility, the virtual environment becomes the domain of the entire IT team.

In the healthcare environment, this means that as we bring in new applications, or even retrofit existing ones, we look to the broader team to come together and design security into the application deployment. This is true whether the application is built in-house or purchased. Application administrators become crucial to defining the security requirements for the Virtualization and Network teams. Security continues to be one of the most important conversations in healthcare IT. Bringing the entire infrastructure team and the business unit to the table with security is the best way to prevent a breach, and bringing VMware NSX into the discussion provides a flexible and powerful tool that gives all sides options without compromise.

Springtime Promise - for Healthcare IT?

Here we are, just a couple of days before the beginning of spring, and it is finally starting to look and feel that way (at least here in Ohio!).  The snowpack that didn’t arrive until mid-February is gone, and temperatures are warming up nicely.  The promise of a new beginning for nature is on the rise, but does this promise also ring true for Healthcare IT?  I say yes, and here is why.

Just as the long winter starts fading and we begin to see signs of flowers pushing through the ground, I am hearing many of my customers talking about and asking how to reinvent their IT departments, and not just in minor ways.  Many are talking about major overhauls in structure, operations, and processes around the concept of cloud-based IT, whether private or hybrid, and the groundswell is growing.

As we all know, human beings resist change. Change is hard; it takes us out of our comfort zone and threatens our sense of purpose. Add the daily complications and the sheer inertia of keeping an organization running smoothly to this natural resistance, and it can seem like a mountain of snow!  So how does a leader begin to make significant progress with blizzard forces working against them?  One "shovelful" at a time: divide, conquer, and persevere!

We have all heard the old saying, "You can't eat an elephant in one bite," and you cannot change your IT organization in one fell swoop, either.  In my experience, there are four stages to follow in effecting major change: Operate, Automate, Integrate, and Innovate.  I will cover each of these separately, but first let's focus on Operate.

In general, one of the biggest complaints coming from the user community about IT is the length of time it takes to get new systems implemented. I have heard times ranging from 6 weeks to 6 months, and these are primarily in largely virtualized environments!  What is causing the delay in delivery?  Where is the bottleneck?  Wasn't one of the promises of virtualization to provide faster delivery of systems without the delays of procurement or physical installations?

This is where vRealize Operations comes into play.  With this tool and its built-in analytics engine, organizations can see in an instant where they stand on capacity today and forecast when that capacity will be exhausted.  This allows infrastructure managers to be proactive in their requests for new hardware, rather than reactive to each new project.  vRealize Operations shows you not only the committed resources that have been allocated to the environment, but also how much resource is actually being utilized and where there might be opportunities to "right-size" systems, recoup capacity, and extend your investments.
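As a toy illustration of the forecasting concept, the sketch below fits a linear trend to recent consumption samples and projects when capacity runs out.  vRealize Operations' analytics are far more sophisticated; the data here is invented.

```python
# Toy illustration of capacity forecasting: fit a linear trend to recent
# consumption samples and project when the cluster runs out.  The numbers
# are made up; real analytics engines go far beyond a straight line.

used_tb = [40.0, 41.2, 42.1, 43.4, 44.2, 45.5]   # weekly storage samples, TB
capacity_tb = 60.0

n = len(used_tb)
xs = range(n)
x_mean, y_mean = (n - 1) / 2, sum(used_tb) / n
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, used_tb)) \
        / sum((x - x_mean) ** 2 for x in xs)      # growth in TB per week

weeks_left = (capacity_tb - used_tb[-1]) / slope
print(f"Growing {slope:.2f} TB/week; capacity exhausted in ~{weeks_left:.0f} weeks")
```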

vRealize Operations provides an organization with the foundation of the IT transformation.  Just like flower buds pushing up through late winter snow, the capacity planning/reporting feature is only the start of what vRealize Operations can do for your organization.

In my next installment I will talk about the concept of "Automate". Please take some time to enjoy the spring weather and allow the new beginning of nature to spark a new beginning for your IT department.

Until next time…

Creating the Perfect Clinical Desktop with Horizon View

I am often asked which aspect of a Horizon View implementation is the most important: SAN, SAN-less, pod-based, security, endpoints, tap-n-go, time savings, you name it.  What typically doesn’t come up is how a Horizon View project can completely change the way IT works with clinicians to improve workflow and facilitate excellent patient care.

The proper design and implementation of the clinical desktop is a critical element of a Horizon View project, one that cannot be designed or deployed without deep clinical buy-in. Certainly IT has to ensure that it is constructed and deployed with technologies that can be supported, but how the clinical workspace integrates into workflow is a clinical function, and that requires clinical input for optimal utilization in a care setting. To be clear, introducing Horizon View into the clinical desktop mix doesn’t make the process more challenging; quite the opposite: it provides IT with an opportunity to work more closely with the clinical team to design the optimal clinical experience.

The success of a large clinical implementation depends just as much on properly preparing the clinical users as it does on properly implementing the software and the supporting (keyword: supporting) technology; too often we focus on one and neglect the other. Horizon View is one area where this is expressed more acutely, and certainly more visibly, than almost any other, and it offers the opportunity to bring these two groups together to reimagine the roles of the patient, the caregiver, and the technology.

Working with the project team, including physician and nursing champions, informaticists, departmental specialists (e.g., phlebotomists, respiratory therapists, and dieticians), and IT, the objective should be the construction of a future state workflow that minimizes interactions with the technology and empowers clinicians to focus on the patient as efficiently as possible. The clinical desktop workflow needs to be streamlined and tested, thoroughly.

Key activities in the process include:

  • Define the clinical device approach, computing as well as peripheral selection, in conjunction with representative clinical steering subcommittee team members
  • Work with key clinical stakeholders to identify the future state clinical application mix and establish a branded clinical desktop as part of the initiative. The focus of the clinical desktop should be ease of use and reliability; tie this to measurable service level agreements

◦  Define clinical desktop content and workflows

◦  Define clinical desktop technology

◦  Define clinical desktop security model

◦  Test, refine, test

  • Conduct device fairs to showcase device alternatives, compute as well as peripherals, that IT can support and that fit within budgetary constraints
  • Conduct technology lunch-n-learns to showcase the new clinical desktop being implemented and the technology that enables the workflow
  • Conduct show-me sessions in clinical lounges prior to go-live, offering clinicians the opportunity to log onto the new clinical desktop, become comfortable with the process, and take one more opportunity to verify security access and proper group/role placement
  • Design a remote access methodology that provides seamless access to the same consistent clinical desktop defined for the inpatient or ambulatory settings, as appropriate. Clinicians should focus on learning the clinical application, not the remote access technology. Desktop virtualization is a solid tool to ensure clinicians can work on premise and remotely with the same workflow.

Adopting a process that incorporates these elements not only supports rapid clinical adoption of any new system, by enabling clinicians to focus on the patients they are treating rather than the integration points between software and technology, but also provides an unprecedented opportunity for clinicians and IT to work together. Additionally, aligning the clinical workflow and technology creates a foundation for future process improvement initiatives that span these disciplines. This alignment will result in better patient care, which, after all, is really the point.

Policy Driven Storage the Healthcare Way

Looking at enterprise storage is a daunting task.  For years we have looked at cost per gigabyte, cost per unit of performance, and other metrics.  We have differentiated solutions based on small differences and the value they provide.  In Healthcare, we are particularly focused on solutions that are “certified” for our applications, and many enterprise healthcare environments run a number of storage platforms.

A case for policy driven storage

Early in my career I became involved in Storage Engineering.  I understood how the storage system worked, and I was able to quickly provision and document what I was working on.  It was tedious, and there weren’t many people on the team who had the confidence to work on the system.  Storage tiering was either a manual process or a function of add-on software.  Deduplication and compression were post-process, and SSD was prohibitively expensive.  The technology didn’t really change much until the “All Flash Array” (AFA) was introduced.  Inline deduplication and compression were born out of necessity, and the cost of SSD has dropped to the point where we expect Fibre Channel/SAS drives to become irrelevant in the coming years.

This change has brought out a need to do things differently.  We have seen many vendors release better products: bigger, faster, with more features.  But the way we have handled storage at the virtual layer hasn’t kept up.  While capabilities like VAAI have improved with each release, and we have continued to offload more and more storage workloads to the array, the way we manage the storage has not changed.  We have continued to present storage as one big logical drive and share it among a number of virtual systems.  Not a terrible way to go, but it leaves performance and features on the table.  There must be a better way.

What does Policy Driven Storage look like?

To take full advantage of the new capabilities, we need to remove some of the layers of abstraction.  Generally speaking, the fewer layers between two components, the better.  To manage directly, though, we need a common interface, a common way of doing things.  Given the multiple storage vendors we find in many healthcare environments, it is important to manage each through a common set of policies.  Performance, deduplication, compression, or anything else a system is capable of providing should be handled at the individual virtual disk level.  This also makes replication and recovery far more granular and manageable.

To make policy driven storage a reality, VMware gives us two options: Virtual Volumes (VVOLs) and Virtual SAN (VSAN).  These are two different ways of getting to the same point, and both have their merits.  The real value is that policies can be used to manage both, and once configured, it becomes seamless to the VMware administrator.

VVOLs

The concept behind VVOLs is not so different from the original VASA.  We have worked with our storage partners, and they have exposed their capabilities through a common interface.  Previously, vendors would install plugins to manage their storage through vCenter, with some tasks offloaded to the storage array.  These interfaces varied in their value and didn’t really provide a unified way of managing the storage, especially for a customer with multiple array vendors.

With the introduction of VVOLs, a policy is created to enable a variety of attributes, such as high performance and deduplication.  When a VM is created or moved, the VMware administrator is presented a list of compatible datastores to select from, based on the policy.  If the workload changes, the administrator can change policies and move the workload to a more appropriate datastore.  This is possible because the storage array advertises its capabilities to the virtual environment, and the storage policies are created from those advertised capabilities.  Since everything is handled by the array, there is lower overhead on the host, and more granular control, since each VVOL is a separate object rather than a group of objects on a single LUN.
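The matching idea can be sketched in a few lines: arrays advertise capabilities, a policy names requirements, and only datastores that satisfy every requirement are offered as compatible.  The capability names below are invented for illustration.

```python
# Hedged sketch of policy-to-datastore matching: arrays advertise
# capabilities, a policy names requirements, and only datastores that
# satisfy every requirement are compatible.  Names here are invented.

arrays = {
    "array-a-gold":   {"dedupe": True,  "performance_tier": "high"},
    "array-b-bronze": {"dedupe": False, "performance_tier": "standard"},
    "array-c-flash":  {"dedupe": True,  "performance_tier": "high"},
}

policy = {"dedupe": True, "performance_tier": "high"}

compatible = [name for name, caps in arrays.items()
              if all(caps.get(k) == v for k, v in policy.items())]
print(compatible)  # ['array-a-gold', 'array-c-flash']
```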


VSAN

VSAN is the next generation of Software Defined Storage (SDS).  The general concept behind software-defined storage is to take the disks internal to a host server and use them to create a logical storage system.  This is managed through software, and historically it has been done with a controller virtual machine sitting on each host.

VSAN differs because it is a kernel module built into the hypervisor itself.  This removes much of the complexity and overhead typically associated with SDS.  Deployment and expansion take literally only a few clicks, and provisioning storage is as simple as creating a policy.

Because VSAN is designed to be policy driven, it is incredibly simple to manage, and customers who deploy it often consider it simply part of the VMware system.  Since it is server-based storage, the storage team does not often need to be involved.
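As a sketch of what provisioning by policy means, the snippet below shows the kind of rules a VSAN policy contains and a rough view of the object layout they imply.  The rule names paraphrase VSAN’s policy options rather than quoting them exactly.

```python
# A sketch of provisioning by policy with VSAN: the policy is a handful of
# rules, and every object created under it inherits them.  The rule names
# below paraphrase VSAN's policy options; they are not exact identifiers.

vsan_policy = {
    "failures_to_tolerate": 1,      # keep a redundant copy of each object
    "stripes_per_object": 2,        # stripe components across capacity disks
    "object_space_reservation": 0,  # thin provision by default (percent)
}

def components_required(policy):
    """Rough capacity math: tolerating one failure means mirrored copies,
    and each copy is split across the requested number of stripes."""
    copies = policy["failures_to_tolerate"] + 1
    return copies * policy["stripes_per_object"]

print(components_required(vsan_policy))  # 4 components placed across hosts
```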


It is important to note that the concept of a datastore changes with both VVOLs and VSAN.  Each virtual disk becomes a LUN on the storage array or, in the case of VSAN, a series of separate objects.  The policy simply manages the placement of the objects and the capabilities they need.  The datastore appears as a higher-level construct representing a logical grouping of similar virtual disks, not a logical device as before.

The Healthcare Difference

What is the value of policy driven storage for healthcare?  Aside from simplified management, ease of deployment, and granular control, policy driven storage unifies the various types of storage.  Many of our customers run storage arrays from multiple vendors with varying capabilities.  This often requires working with different members of the storage team to provision new storage capabilities, and it creates challenges during upgrades and new implementations.

As we look at healthcare, we regularly encounter new regulations and new requirements, and we always seem to be struggling to keep up with the latest trends.  By using a policy driven approach, we can not only respond more quickly to our customers and security teams, but also create cross-functional teams who provide more value to the internal customer, and ultimately to our end customer, the healthcare consumer.