Home > Blogs > VMware for Healthcare IT

Springtime Promise – for Healthcare IT?

Here we are, just a couple of days before the beginning of spring, and it is finally starting to look and feel that way (at least here in Ohio!). The snowpack that didn’t get here until mid-February is gone and temperatures are warming up nicely.  The promise of a new beginning for nature is on the rise, but does this promise also ring true for Healthcare IT?  I say yes, and here is why.

Just as the long winter starts fading and we begin to see signs of flowers pushing through the ground, I am hearing many of my customers talking about and asking how to reinvent their IT departments, and not just in minor ways.  Many are talking about major overhauls in structure, operations, and processes around the concept of cloud-based IT, whether private or hybrid, and the groundswell is growing.

As we all know, human beings resist change. Change is hard – it takes us out of our comfort zone and threatens our sense of purpose. Add the daily complications and the sheer inertia of keeping an organization running smoothly to this natural resistance, and the result can seem like a mountain of snow!  So how does a leader begin to make significant progress with blizzard forces working against them?  One “shovel full” at a time; divide, conquer, and persevere!

We have all heard the old saying, “You can’t eat an elephant in one bite,” and you cannot change your IT organization in one fell swoop, either.  In my experience, there are four stages to follow in effecting major change:  Operate, Automate, Integrate, and Innovate.  I will cover each of these separately, but first let’s focus on Operate.

In general, one of the biggest complaints coming from the user community about IT is the length of time it takes to get new systems implemented. I have heard times ranging from 6 weeks to 6 months, and these are primarily in largely virtualized environments!  What is causing the delay in delivery?  Where is the bottleneck?  Wasn’t one of the promises of virtualization to provide faster delivery of systems without the delays of procurement or physical installations?

This is where vRealize Operations comes into play.  With this tool and its built-in analytics engine, organizations can see in an instant where they stand today from a capacity standpoint, and forecast when that capacity will be exhausted.  This allows infrastructure managers to be proactive in their requests for new hardware, rather than reactive to each new project. vRealize Operations not only shows the committed resources that have been allocated to the environment, but also gives insight into how much resource is actually being utilized, and where there might be opportunities to “right-size” systems, recoup capacity, and extend your investments.
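vRealize Operations performs this forecasting natively with its analytics engine. Purely as an illustration of the underlying idea – not the product’s actual algorithm – here is a minimal sketch that fits a linear trend to utilization samples and projects the capacity-exhaustion date. All numbers are hypothetical.

```python
# Toy capacity forecast: fit a straight line to historical utilization
# and estimate the day on which total capacity would be exhausted.
# This is an illustrative sketch, not how vRealize Operations works
# internally; the datastore sizes below are invented.

def days_until_exhaustion(samples, capacity):
    """samples: list of (day, used) points; capacity: total available."""
    n = len(samples)
    mean_x = sum(d for d, _ in samples) / n
    mean_y = sum(u for _, u in samples) / n
    # Ordinary least-squares slope (growth per day) and intercept.
    slope = sum((d - mean_x) * (u - mean_y) for d, u in samples) / \
            sum((d - mean_x) ** 2 for d, _ in samples)
    intercept = mean_y - slope * mean_x
    if slope <= 0:
        return None  # usage is flat or shrinking; no exhaustion forecast
    return (capacity - intercept) / slope

# A 2 TB (2048 GB) datastore growing roughly 10 GB/day from 1 TB used.
usage = [(0, 1000), (30, 1300), (60, 1600)]
print(days_until_exhaustion(usage, 2048))  # ~105 days out
```

Even this crude projection turns a reactive “we just ran out” conversation into a proactive procurement request months in advance, which is the point of the product’s far richer analysis.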

vRealize Operations provides an organization with the foundation of the IT transformation.  Just like flower buds pushing up through late winter snow, the capacity planning/reporting feature is only the start of what vRealize Operations can do for your organization.

In my next installment I will talk about the concept of “Automate”. Please take some time to enjoy the spring weather and allow the new beginning of nature to spark a new beginning for your IT department.

Until next time…

Creating the Perfect Clinical Desktop with Horizon View

I am often asked which aspect of a Horizon View implementation is the most important: SAN, SAN-less, pod-based, security, end points, tap-n-go, time savings – you name it. But what typically doesn’t come up is how a Horizon View project can completely change the way IT works with clinicians to improve workflow and facilitate excellent patient care.

The proper design and implementation of the clinical desktop is a critical element of a Horizon View project, one that cannot be designed or deployed without deep clinical buy-in. Certainly IT has to ensure that it is constructed and deployed with technologies that can be supported, but how the clinical workspace integrates into workflow is a clinical function, and that requires clinical input for optimal utilization in a care setting. To be clear, introducing Horizon View into the clinical desktop mix doesn’t make the process more challenging – quite the opposite. It provides IT with an opportunity to work more closely with the clinical team to design the optimal clinical experience.

The success of a large clinical implementation depends just as much on properly preparing the clinical users as it does on properly implementing the software and the supporting – keyword: ‘supporting’ – technology; too often we focus on one and neglect the other. Horizon View is one area where this is expressed more acutely, and certainly more visibly, than any other, and it offers the opportunity to bring these two groups together to reimagine the roles of the patient, caregiver, and technology.

Working with the project team – physician and nursing champions, informaticists, departmental specialists (e.g., phlebotomists, respiratory therapists, and dieticians), as well as IT – the objective should be the construction of a future-state workflow that minimizes interactions with the technology and empowers clinicians to focus on the patient as efficiently as possible. The clinical desktop workflow needs to be streamlined and tested, thoroughly.

Key activities in the process include:

  • Define the clinical device approach – computing as well as peripheral selection – in conjunction with representative clinical steering subcommittee team members
  • Work with key clinical stakeholders to identify the future-state clinical application mix and establish a branded clinical desktop as part of the initiative. The focus of the clinical desktop should be ease of use and reliability – tie this to measurable service level agreements

      ◦  Define clinical desktop content and workflows

      ◦  Define clinical desktop technology

      ◦  Define clinical desktop security model

      ◦  Test, refine, test

  • Conduct device fairs to showcase device alternatives, including compute as well as peripherals, that IT can support and that are within budgetary constraints
  • Conduct technology lunch-n-learns to showcase the new clinical desktop being implemented and the technology that enables the workflow
  • Conduct show-me sessions in clinical lounges prior to go-live to offer clinicians the opportunity to log onto the new clinical desktop to become comfortable with the process, as well as to take one more opportunity to verify security access and proper group/role placement
  • Design a seamless remote access methodology that provides the same consistent clinical desktop defined within the clinical inpatient or ambulatory settings, as appropriate. The clinicians should focus on learning the clinical application, not the remote access technology. Desktop virtualization is a solid tool to ensure clinicians can work on premise and remotely with the same workflow.

Adopting processes that incorporate these elements not only supports the rapid clinical adoption of any new system by enabling clinicians to focus on the patients they are treating, rather than the integration points between software and technology, but also provides an unprecedented opportunity for clinicians and IT to work together. Additionally, aligning the clinical workflow and technology creates a foundation for future process improvement initiatives that span these disciplines. This alignment will result in better patient care, which, after all, is really the point.

Policy Driven Storage the Healthcare Way

Looking at enterprise storage is a daunting task.  For years we have looked at the cost per gigabyte, cost per performance, and other metrics.  We have differentiated solutions based on small differences and what value they provide.  In Healthcare, we are particularly focused on solutions that are “certified” for our applications, with many enterprise healthcare environments running a number of storage platforms.

A case for policy driven storage

Early in my career I became involved in Storage Engineering.  I understood how the storage system worked, and I was able to quickly provision and document what I was working on.  It was tedious, and there weren’t many people on the team who had the confidence to work on the system.  Storage tiering was either a manual process or a function of add-on software.  Deduplication and compression were all post-process, and SSD was prohibitively expensive.  As we progressed, the technology didn’t really change much until the “All Flash Array” (AFA) was introduced.  Inline deduplication and compression were born out of necessity, and we have seen the cost of SSD technology drop to the point where we expect Fibre Channel/SAS drives to become irrelevant in the coming years.

This change has created a need to do things differently.  We have seen many vendors release better products – bigger, faster, with more features – but the way we handle storage at the virtual layer hasn’t kept up with these improvements.  While capabilities like VAAI have improved with each release, and we have continued to offload more and more of the storage workload to the storage array, the way we manage the storage has not changed.  We continue to present storage as one big logical drive and then share it among a number of virtual systems.  Not a terrible way to go, but it leaves performance and features on the table.  There must be a better way.

What does Policy Driven Storage look like?

To take full advantage of the new capabilities, we needed to find a way to remove some of the layers of abstraction.  As with anything, generally speaking, the fewer layers between two components the better.  In order to manage directly though, we need a common interface, a common way of doing things.  Again with the multiple storage vendors we often find in many healthcare environments, it is important to manage each through a common set of policies.  Things like performance, deduplication, compression, or anything a system is capable of providing, should be handled at the individual virtual disk level.  This also makes replication and recovery far more granular and manageable.

To make policy driven storage a reality, VMware offers two options:  Virtual Volumes (VVOLs) and Virtual SAN (VSAN).  These are two different ways of getting to the same point, and both have their merits.  The real value is that policies can be used to manage both, and once configured, it becomes seamless to the VMware administrator.

VVOLs

The concept behind VVOLs is not so much different from the original VASA.  We have worked with our storage partners and they have exposed their capabilities to a common interface.  Previously vendors would install plugins to manage their storage through vCenter, with some tasks offloaded to the storage array.  The interfaces varied in their value, and didn’t really provide a unified way of managing the storage; especially for a customer having multiple array vendors.

With the introduction of VVOLs, a policy is created to enable a variety of attributes, such as high performance and deduplication.  When a VM is created or moved, the VMware administrator is presented with a list of compatible datastores to select from, based on the policy.  If the workload changes, the administrator can change policies and move the workload to a more appropriate datastore.  This is possible because the storage array advertises its capabilities to the virtual environment, and the storage policies are created from those advertised capabilities.  Since everything is handled by the array, there is lower overhead on the host, and more granular control, since each VVOL is a separate object rather than part of a group of objects on a single LUN.
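As a rough illustration of the matching described above – not VMware’s actual implementation – the compatibility logic reduces to set comparison: each array advertises a set of capabilities, a policy names the capabilities it requires, and the compatible datastores are those whose advertised set covers the policy. The array names and capability strings below are invented.

```python
# Hypothetical sketch of policy-to-datastore matching. In a real VVOLs
# environment the arrays advertise capabilities through VASA and vCenter
# does the filtering; this just shows the shape of that logic.

def compatible_datastores(policy, datastores):
    """Return the datastores advertising every capability the policy requires."""
    return [name for name, caps in datastores.items()
            if policy <= caps]  # policy is a subset of advertised capabilities

advertised = {
    "gold-array":   {"high-performance", "deduplication", "replication"},
    "silver-array": {"deduplication"},
}
tier1_policy = {"high-performance", "deduplication"}
print(compatible_datastores(tier1_policy, advertised))  # ['gold-array']
```

The administrator never has to know which vendor’s array sits behind “gold-array”; the policy, not the plugin, decides where the workload can live.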


 VSAN 

VSAN is the next generation of Software Defined Storage (SDS).  The general concept behind software-defined storage is taking disks internal to a host server, and using them to create a logical storage system. This is managed through software, and historically has been done as a virtual machine controller sitting on each host.

VSAN differs because it is a kernel module, built into the hypervisor itself.  This removes much of the complexity, and overhead typically associated with SDS.  The deployment and expansion is literally only a few clicks, and provisioning storage is as simple as creating a policy.

Because VSAN is designed to be policy driven, it becomes incredibly simple to manage, and we often find that it is considered to be a part of the VMware system by customers who deploy it.  Since it is server-based storage, the storage team does not often need to be involved.


It is important to note that the concept of a datastore changes with both VVOLs and VSAN.  Each virtual disk becomes a LUN on the storage array or, in the case of VSAN, a series of separate objects.  The policy manages the placement of the objects and the capabilities they need.  The datastore appears as a higher-level construct representing a logical grouping of similar virtual disks, not a logical device as before.

The Healthcare Difference

What is the value of policy driven storage for healthcare?  Aside from the simplified management, ease of deployment, and granular control, policy driven storage unifies the various types of storage.  In many of our customers we find that multiple vendors provide storage arrays with varying capabilities.  This often requires working with different members of the storage team to provision new storage capabilities, and creates challenges when upgrades or new implementations are required.

As we look at healthcare, we regularly encounter new regulations and new requirements, and we always seem to be struggling to keep up with the latest trends.  By using a policy driven approach, we can not only respond more quickly to our customers and security teams, but also create cross-functional teams who can provide more value to the internal customer, and ultimately to our end customer, the healthcare consumer.

A Happy New Year for Healthcare IT?

When January rolls around, you most likely think of new beginnings, resolutions, a sense of renewal, and so on. For IT organizations in the Healthcare industry, I wonder how many share these feelings? Shrinking budgets, increased customer demands, and growing governmental regulations, just to name a few, have many CIOs wondering where their “new beginnings” will start as they struggle to push forward their efforts to meet the needs of their enterprise.

The Problem

Most hospital entities today are re-evaluating their future spend projections based upon the changes in government reimbursements and the new “pay for performance” measures that are being instituted. Even the most profitable institutions are looking at flat or negative revenue growth estimates due to these changes. Now, more than ever before, it is imperative that IT departments have the ability to provide financial transparency to the organization and the ability to properly demonstrate their value to the company. Management 101 states, “If you do not measure it, you cannot manage it.”

As a former Healthcare IT director, gathering all of this information would take me a couple of weeks of dedicated work; pouring it all into spreadsheets, slicing and dicing the data, and then working with the Finance department to validate those outcomes. This equates to a very onerous process fraught with the chance of human error. Then once all of this work is complete, it would start all over again for the next month. Wouldn’t life be better if there was some type of automated system that could ingest all of this data from all of the various sources and produce the reports and executive dashboards that the organization needs?

Having the ability to produce a “Bill of IT” to better understand the overall costs of the respective areas (Infrastructure, Applications, Service Desk, etc.) is, I believe, imperative for today’s healthcare IT departments. For as long as I can remember, most of these “cost numbers” were very much akin to educated guesses versus prescriptive, repeatable figures driven from the company’s own general ledger.

I recently led a focus group session on “Financial Transparency in Healthcare IT” with over 20 CIOs from various institutions. The general feelings from the vocal members of the group were a little surprising to me, and to my associates in the room. We heard statements like “We don’t see the value in tracking costs to this level of detail (cost per VM, cost per mailbox, etc.)” and “This is something more along the lines of what the CFO would utilize, not the CIO.”

How can a Service Line Executive (the CIO) not want to understand their unit costs and be able to show them to their peers and leadership in a simple, repeatable format? And not just once a month, but at any time, via an easy-to-understand executive dashboard?

How do we address this problem?

Enter VMware’s vRealize Business solution.  vRealize Business (formerly ITBM) is a tool set from VMware that automatically brings together all of the disparate data points in an organization and displays the fully loaded cost of any point in your IT environment: the cost of a VM, the cost of a mailbox, the cost of your PACS system or your EHR. Anything and everything! Utilizing an organization’s data sources (general ledger, ERP, CMDB, etc.), these reports and cost models can not only be generated for a “where are we today” view, but that same data can be utilized for forecasting, budgeting, or helping determine IT costs for mergers and acquisitions.
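To make the “fully loaded cost” idea concrete, here is a toy sketch of the unit-cost math behind a Bill of IT. vRealize Business pulls these figures from real sources (general ledger, ERP, CMDB); the cost pools and VM count below are entirely invented for illustration.

```python
# Hypothetical "Bill of IT" unit-cost sketch: sum the annual cost pools
# attributable to the virtual infrastructure, then spread them across
# the VM fleet to get a fully loaded monthly cost per VM. All figures
# are made up; a real model would allocate pools with far more nuance.

def cost_per_vm(cost_pools, vm_count):
    """Fully loaded monthly cost per VM from annual cost pools."""
    annual_total = sum(cost_pools.values())
    return annual_total / 12 / vm_count

pools = {
    "hardware_depreciation": 600_000,   # annual, invented
    "licensing":             240_000,
    "datacenter_facilities": 120_000,
    "admin_labor":           480_000,
}
print(cost_per_vm(pools, 1200))  # dollars per VM per month
```

With a unit cost like this in hand, “if we add 200 physicians next year, what is the incremental cost?” becomes arithmetic instead of a two-week spreadsheet exercise.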

vRealize Business is a solution for the modern Healthcare IT department. With this solution, IT will finally be able to answer, with a high degree of accuracy, questions like “Why is IT so expensive?” or “If we add 200 physicians next year, what is the incremental cost?” I know that most of my former peers have had to face those questions, and for the most part have provided the best number they could muster from manually pulling all of this information together. With vRealize Business, it is now just a couple of mouse clicks to see this data within minutes, not days. This solution truly provides CIOs with the ability not only to measure the financial health of their department, but also to manage financial performance in a much simpler and more repeatable process.

Managing the Healthcare Cloud

How do you manage your virtual environment?  Has it changed since converting from the physical world?  Oftentimes we tend to stick with what we know and what has worked.  Budgets are not increasing, but demand always does, and in Healthcare we are constantly faced with new regulations, new security threats, and new demands on our time.  We need to find a new way of managing the healthcare cloud, or we will not be able to keep up with the requirements.

 The Big Picture

Looking at managing the cloud requires taking a step back to look at the bigger picture.  Automation, cloud management, software defined compute, networks, and storage, these are all just parts of a larger strategy.

Think about your smartphone.  When you want new functionality, you go to the application ecosystem, search, and grab what you want.  If you are actually calling the support line to get an application provisioned to your device, you are not likely to be using that device for long.

The end state goal for any cloud management project should be providing a better, more seamless service to the end user.  As IT professionals we need to be able to predict problems, project growth and needs, and ensure our environment is secure from both accidental, and intentional data compromise.

Management, not just monitoring

As a systems administrator, I spent many hours looking through logs, glued to a console, and scanning a myriad of monitoring products, looking for the cause of problems.  We invested serious money in various monitoring products, and were very proud of being able to get to the bottom of a problem with only 4-5 different solutions, spanning 3-4 different internal teams.  Then we found we were not great at capacity planning; we were using multiple data sets and trying to build spreadsheets.  On top of that, the security team was always bugging us about some new issue, or wanting us to show them that the configurations hadn’t changed.  We brought in more products to solve this issue.

As my career progressed, I made my mark by creating scripts and even writing some code that would scrape a number of systems and databases and aggregate the results into a report we could send to management.  I then spent many hours poring over data and correcting issues I had inadvertently created by not correlating the data properly.  The problem wasn’t a lack of tools, but the lack of the right tool.

What makes Healthcare different?

When I first started in Healthcare, I was surprised by the sheer number of applications.  Clearly many of these revolve around the core Electronic Health Record (EHR) application, but the number is nonetheless staggering.  As we dig a little deeper, it is remarkable how intertwined these applications really are.  Oftentimes, if one application has a problem, a large part of the enterprise can be affected.

Further complicating the healthcare environment, our problems are different from those in most other industries.  One complaint we hear often is that in some EHR programs, print queues are a big problem.  In an electronic system this is completely counterintuitive, but it is a reality we live with.

Make It Better

So how do we manage the healthcare environment?  The best answer is data aggregation from all sources, and a single management tool that can give you everything.  As we all know, there is no perfect solution in IT.  There will always be something that is not addressed, but the key is to find a tool that can handle the majority of the management of the environment, can take data from multiple sources, and, most importantly, can enable you to correlate and remediate issues and potential issues.

We have all been there: alerts out of control, pages in the middle of the night, digging through logs.  The best way to start making Healthcare clouds more manageable is to provide answers to potential problems before the end users see them.  Providing a data-driven methodology for capacity planning and configuration drift, and giving management metrics they can use to show the business, will make the Healthcare IT professional’s job much less challenging and enable them to focus on new projects and providing more business value.
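Configuration drift detection, one of the checks mentioned above, is conceptually simple even though doing it at scale is what requires tooling: compare each host’s current settings against an agreed baseline and report the differences. The baseline and host settings below are invented examples.

```python
# Minimal sketch of configuration-drift detection. A management tool
# automates this across the whole environment and every setting; this
# toy version just diffs one host against a baseline dictionary.

def find_drift(baseline, current):
    """Return settings whose current value differs from the baseline,
    mapped to (expected, actual) pairs."""
    return {key: (baseline[key], current.get(key))
            for key in baseline
            if current.get(key) != baseline[key]}

baseline = {"ntp_server": "10.0.0.1", "ssh_enabled": False, "log_level": "info"}
host_cfg = {"ntp_server": "10.0.0.1", "ssh_enabled": True,  "log_level": "info"}
print(find_drift(baseline, host_cfg))  # {'ssh_enabled': (False, True)}
```

Run continuously and surfaced as a report, this is exactly the evidence the security team keeps asking for: proof that the configurations haven’t changed, or a precise list of where they have.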

Managing the VMware way

With the advent of virtualization, we are required to do more with fewer resources.  In healthcare, as new regulations are passed and we modernize to meet new demands, we are especially affected by this trend.  In order to keep pace with the increasing requirements, a unified tool becomes more important for managing a healthcare cloud.  With the release of vRealize Operations 6, we are able to manage larger environments more effectively.  And as VMware works with more EHR vendors to integrate their data through the Care Systems Analytics products, we are seeing an increasingly healthcare-focused solution.

It is highly unlikely there will ever be a scenario where human intervention is not required. It is, however, necessary that we take advantage of the data available to us to make effective management decisions.  Aggregating all available data sources and presenting them through a single system can increase operational efficiency and enable a more manageable healthcare cloud.

- Aaron Dumbrow, Systems Engineer

Simplify your Datacenter, Lower Your Costs, and Prepare for the Cloud

Healthcare providers are under pressure to further reduce costs, offer new capabilities, and explore Cloud options for the future. These pressures require that we revisit the design assumptions of the on premise datacenter to lower costs, reduce complexity, and add capabilities. The platform required to deliver new capabilities and lower the total cost of on premise infrastructure exists today in the Software Defined Datacenter, and customers are adopting it to meet current demands and prepare for the future.

Present Healthcare Pressures

  • New capabilities are required, e.g. mobility and enhanced security.
  • Meaningful Use Stage 3 will put further pressure on providers to control costs in all areas of the business.
  • IT spend represents 3-8% of Revenue for a Hospital or Healthcare System.
  • CIOs at CHIME have for two years told us that they want to exit the datacenter business in three to seven years.
  • Fully migrating to Cloud will take two to five years for the most committed organizations.
  • The current standard datacenter architecture is complex and expensive.

Optimizing On Premise Infrastructure

Controlling IT spend is a priority of every Healthcare system and Hospital. Infrastructure and staff represent significant portions of that budget, and there are significant opportunities to improve the scale of staff while reducing the complexity and cost of the infrastructure via SDDC. Organizations that are embracing the new technologies are realizing savings in all areas of their infrastructure – Storage, Compute, and Networking – and in so doing they are ensuring their competitive strength relative to Public Cloud alternatives.

Complexity is an oft-overlooked reality of the current datacenter design. Simplicity is the key to steady operations: reducing the number of moving parts inherently increases the reliability of a system. Application operation requires a delicate confluence of Compute, FibreChannel, Networking, and Storage. These components come from multiple vendors, scale independently, and are managed and monitored separately, yet all must work together. This is very difficult to architect, manage, and troubleshoot effectively, and it overworks a lot of experienced personnel.

Storage

Storage presently represents roughly 50% of the annual infrastructure spend in Healthcare. New software storage solutions deliver the same performance and reduce storage spend by 30-60%. In a recent project, a Healthcare customer was able to realize a 50% savings on storage while gaining additional compute nodes using software defined storage.

The current virtualization standard of distributed compute nodes backed by a highly resilient and available storage array was a necessary stage in the evolution of the datacenter because of the nature of the workloads: they are special and many cannot be made effectively resilient at the application level, so we rely on the infrastructure layer to deliver the availability. The storage array was the way to do this, and it required the expansion of yet another infrastructure element, the FibreChannel SAN.

By leveraging virtual storage in the compute nodes, significant capital and operating savings are being realized, and due to persistent cost pressures and sound business decision making, it is an emerging standard for efficient on premise architecture.

Compute

Compute represents another significant chunk of infrastructure spend, roughly 15-25%. Blades have emerged as the popular option, but it is essential to revisit the reason. It isn’t rack space, host identity management, or any other vendor-specific capability. Blades deliver savings on the FibreChannel ports needed to connect the systems to the storage. There are no significant efficiencies gained from Blades in any other aspect, except as they affect the ease of connecting them to FC storage. But what happens if the new storage models do not require FC? The fundamental value proposition of blade architecture erodes and vanishes in favor of lower-cost, equivalent capability from rack systems with local disk and software defined storage.

The premier Blade compute vendors are commanding a great deal of spend, but they are not delivering value commensurate with that spend, especially in the face of new distributed storage capabilities that they are not optimal to deliver. Rack mount systems offer greater capability and flexibility for less, and all they need to operate and deliver the same outcomes as current SAN attached designs is power and networking.

Appliance Compute

Appliance Compute merits a thorough discussion as well. Reducing complexity and cost while adding capability is a challenge, and both ends can be achieved easily via the EVO platform. The EVO platform is a reference hardware architecture with fewer interchangeable parts. We have seen an increase in host instability due to hardware in the last two years. Ever expanding combinations of firmware, storage controllers, network adapters, and drivers have created a hardware ecosystem so large that it is difficult for hardware vendors to test all permutations and combinations of the components. The solution is simple: reference architecture with fewer variables and greater consistency.

EVO rack systems offer everything in one box: Compute, Storage, and Networking. Like the Rack systems, all they need is power and networking, and they deliver all of the capabilities needed by a modern infrastructure platform with more of the capabilities configured and managed in software than ever before.

Networking

Current networking architectures are complex and expensive as well, representing 20-30% of infrastructure spend. That cost is in the gear itself and the enhanced security capabilities tied to it. Virtual Networking allows those security policies to be moved up out of the gear, which has significant implications: security policies can be attached to applications and users instead of IPs and ports, and the capabilities of the gear are reduced to efficient packet switching.

By moving the security policies up in the stack, we gain security capabilities that were prohibitively expensive to implement and impractical to maintain, and we allow choice in the gear from many vendors who cost 30-40% less than the dominant communication vendors.

Virtual Networking allows an ecosystem of devices to share in a global policy definition and implementation. We can easily draw boundaries around applications, policies that travel with the workloads as they move about the datacenter and later into the Cloud. Rules are implemented close to the objects and close to the edge. Workloads that cannot talk to the internet can have their packets dropped at the hypervisor; workloads that are in different security zones on the same host can communicate directly without traversing the edge network; and application access can be granted to specific users at the network level – their packets won’t even flow if they are not allowed.
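To make the idea of rules attached to applications and users (rather than IPs and ports) concrete, here is an invented sketch of how such a policy might be evaluated. A real virtual networking platform enforces this at the hypervisor, close to the workload; the application names, user groups, and rule format below are all hypothetical.

```python
# Toy policy evaluation for application/user-based rules, first match
# wins, default deny. Illustrative only: real platforms carry far richer
# context (workload identity, security groups, services) than this.

def allowed(packet, rules):
    """Return True if the first matching rule permits the packet."""
    for rule in rules:
        if (rule["app"] in (packet["app"], "*")
                and rule["user"] in (packet["user"], "*")):
            return rule["action"] == "allow"
    return False  # default deny: no rule matched

rules = [
    {"app": "EHR", "user": "clinicians", "action": "allow"},
    {"app": "EHR", "user": "*",          "action": "deny"},
]
print(allowed({"app": "EHR", "user": "clinicians"},  rules))  # True
print(allowed({"app": "EHR", "user": "contractors"}, rules))  # False
```

Because the rule names the application and the user group, it travels with the workload wherever it moves; nothing has to be rewritten when the VM lands on a different host or a different subnet.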

End User Computing and Mobility

The popular way to deliver applications in Healthcare has the same complexity issues as the rest of the datacenter, for the same reasons: it leverages expensive compute and storage. Capital costs to deliver SAN-attached End User Computing infrastructure are frequently upwards of $500 per user. A modern Always-on Point of Care infrastructure can deliver a superior clinical experience for capital costs of less than $250 per user. The operating efficiencies and flexibility offer tremendous value beyond that, but the capital costs are significant and impossible to ignore.

Path to the Cloud: Act Now to Realize the Savings and Prepare for the Future

CIOs at CHIME repeat that exiting the datacenter business is an objective; it is only a question of when, and that transition will take time. With that in mind, there are two realizable short-term objectives: invest in the solutions that lower on premise capital and operating costs, and build the operational excellence required to effect a seamless transition to the Cloud when the time comes.

SDDC is the means to deliver on those objectives. Leveraging software-defined storage and virtual networking allows compelling savings in storage, compute, and networking. Beyond that, the platform is designed to loosely couple your on-premises datacenter with Public Cloud providers to seamlessly migrate workloads along with their operating and security policies with minimal interruption, sometimes no interruption at all. Imagine it: a stretched datacenter with policies defined in software and implemented both within the walls of your datacenter and in your portion of a Cloud provider. Administrative control remains with infrastructure and application owners, allowing the easiest choice of runtime and the easiest transition.

This is where we are all headed, and the technologies are in use now, today. We can get you there, too.

 

Automating Healthcare

Think back to what got you really excited about technology. Why do you do what you do? What is your defining moment in IT? Hopefully, if you have been in the industry for a while, that is a fond memory, and you have built on it to make some amazing things happen. Something we are always asking here at VMware Healthcare is: what can we do to make patient care better, and how can IT become a partner to the providers?

Defining Automation in the Healthcare World

So in the world of Healthcare, what do we mean when we talk about Automation? In most cases, we are certainly not going to allow end users (in this case, doctors) to provision servers. Automation for the Healthcare environment typically means one of two things: Standardization for IT or Self-Service for Application Administrators.

Standardization for IT:

As a former IT administrator/engineer I remember many times going through server build processes to hand off to the application teams. I would open my checklist, even on my 300th build, and go line by line checking off each as I completed the task. It got to the point where I memorized the checklist. I had dreams about the checklist. I hated the checklist. But, there was no forgiveness for the person who failed to build a server to the exacting standards we had agreed upon. Virtualization made this better. However, we just moved the checklist into the virtual world – the process didn’t change.

In the Healthcare environment, Automation enables IT to offload repetitive tasks, not to a junior admin or operator, but rather to the system. This then enables the existing teams to improve and focus on what is important – making the technology more available for the care providers. This also ensures that every system is built to the exacting standard every time, with no deviations other than those specified by the blueprint.
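The checklist-to-blueprint shift can be made concrete. In this hypothetical sketch, the build standard lives as data, so conformance is checked mechanically instead of line by line by a tired administrator; the field names here are invented for illustration.

```python
# Hypothetical sketch: the build "blueprint" as data, so every server is
# built to the same exacting standard and any deviation is caught by the
# system rather than by a human working through a checklist.
BLUEPRINT = {
    "os": "rhel7",
    "cpu_count": 4,
    "memory_gb": 16,
    "monitoring_agent": True,
    "backup_agent": True,
}

def deviations(server: dict) -> list:
    """Return the blueprint keys where a built server does not match."""
    return sorted(k for k, v in BLUEPRINT.items() if server.get(k) != v)
```

A compliant build returns an empty list; a build missing its monitoring agent is flagged immediately, before handoff to the application team.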

Self Service for Application Administrators:

Another use case for Automation is to speed up the application deployment process. In many Healthcare environments, the application administrator requests the system; the infrastructure team then has to build the server, physical or virtual, provide network and storage services, and ensure the system is under management prior to handing it over. This whole process can be tied into a change management database, preserving oversight and whatever controls the infrastructure team needs. Thus an application administrator still submits a request in a similar manner, but receives the system(s) in far less time, since the whole process can be automated.
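That flow can be sketched in a few lines. Everything here is hypothetical (the `ChangeDB` class and field names are invented stand-ins, not a real CMDB API); the point is that the ticket is opened and closed around an automated build, so oversight is preserved while the manual handoffs disappear.

```python
# Hypothetical sketch of the automated self-service flow: the request still
# flows through change management, but build, network, storage, and
# management-enrollment steps run as code instead of manual handoffs.
class ChangeDB:
    """Minimal stand-in for a change management database."""
    def __init__(self):
        self.tickets = []

    def open_ticket(self, request):
        self.tickets.append({"request": request, "status": "open"})
        return len(self.tickets) - 1

    def close_ticket(self, ticket_id, result):
        self.tickets[ticket_id].update(status="closed", result=result)

def provision(request, change_db):
    ticket = change_db.open_ticket(request)           # oversight preserved
    vm = {"name": request["name"], "size": request["size"]}
    vm["network"] = f"{request['zone']}-segment"      # network service attached
    vm["storage"] = f"{request['tier']}-datastore"    # storage service attached
    vm["managed"] = True                              # under management before handoff
    change_db.close_ticket(ticket, vm)
    return vm
```

The application administrator's experience barely changes; the elapsed time between request and a fully managed system does.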

Components of Automation

It is critical to remember that if we are automating, the Software Defined Data Center becomes more important than ever. We can’t just virtualize compute and put Automation in front of it and expect everything to work. We need an all-in approach. We need to be able to quickly modify storage and networks through a policy driven approach, as well.

This does require a serious look at Virtual Networking and Software-Defined Storage, as diagrammed below. While the physical infrastructure plays a critical role, control should move into a software-defined, policy-driven model in order to fully enable Automation.

[Diagram: compute, storage, and networking under software-defined, policy-driven control]

Automate Everything? What Could Go Wrong With That?

So this all sounds great; where do I sign up? It is always good to look at the potential pitfalls of any technology. With Automation, the benefits are many, but we do need to elevate the staff managing these technologies. There are blueprints to build, and we need to ensure that is done properly.

We also need to provide proper process governance. Automating a bad process simply gives you a faster bad process. The last thing we want is to take something from bad to worse. Any Automation project of any size should start with a review of business processes as well as IT processes. Automation should occur at the process level as well as the technology level.

Why Automate With VMware?

This is a massive undertaking, to put it mildly. It really comes down to a question of interoperability. Looking at the larger picture, there will always be point products that solve individual needs; it becomes a question of scale. Making everything work together from a management and Automation perspective makes the VMware vRealize Cloud Management platform preferable to a large number of products from multiple vendors.

As Healthcare continues to evolve, and as we are required to deal with static or shrinking budgets, we in Healthcare IT must continue to evolve and improve our processes. Automation should not be frightening or dangerous, but rather an opportunity to move forward and provide a better overall experience to our users and their patients.

 

- Aaron Dumbrow, Systems Engineer

Too Much Stuff: The Problem of Legacy Data in Healthcare

My family recently moved across the country, and in that process we discovered something about ourselves: we have too much stuff. I'm not talking about things we use; rather, things that we store, which for the most part fall into two main categories: things I have to store (old financial records) and things my wife wants me to store (christening gowns, cherished toys my children have long since outgrown, tokens from our own childhoods... memories, really). It occurred to me that my stuff, thank you Mr. Carlin, isn't really that different from the legacy stuff that I had to deal with in healthcare.

Legacy [stuff] systems are a problem for every healthcare organization in this country. How could they not be? In the years before ARRA and Meaningful Use the medical record had become, for many, a hodge-podge of semi-connected systems and processes. If you checked into an ER then your medical record may have been electronic but, if you were admitted, then it could have been on paper, unless you spent time in the ICU, in which case it could have been on yet another electronic system.

Matters get even more complicated when you consider that this data is regulated. Individual states require the maintenance of a patient's legal medical record for between 7 and 28 years, depending upon the state and the age of the patient at the time of treatment. Oddly enough, the need doesn't stop there. Remember your clinicians? They've been documenting SOAP notes for years, not just on paper but electronically as well, and they have an expectation that those notes will be available for future episodes of care.
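The age-at-treatment wrinkle is what makes retention planning tricky, and it is easy to illustrate. The rule below is entirely made up for the sketch (the real period varies by state, roughly 7 to 28 years, and rules for minors differ by jurisdiction): adult records are kept a fixed number of years, while minors' records are kept until the patient reaches a threshold age, whichever is later. Confirm the actual rule with legal counsel.

```python
# Hedged example with made-up retention parameters: actual retention periods
# vary by state, and rules for minors differ by jurisdiction. This only
# demonstrates why "age at time of treatment" drives the retention date.
from datetime import date

def retention_end(treatment_date, patient_age, adult_years=7, minor_until_age=28):
    """Adults: keep records for `adult_years` years after treatment.
    Minors: keep until the patient reaches `minor_until_age`, if later."""
    adult_end = treatment_date.replace(year=treatment_date.year + adult_years)
    if patient_age >= 18:
        return adult_end
    minor_end = treatment_date.replace(
        year=treatment_date.year + (minor_until_age - patient_age))
    return max(adult_end, minor_end)
```

Under these illustrative parameters, a record for a 40-year-old expires in 7 years, while a toddler's record from the same day must survive for 26: a quarter century of maintaining whatever system holds it, unless the data is migrated or archived.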

According to the ONC, we've made massive progress at the provider level toward the adoption of highly integrated electronic medical records that meet the new federal standards. We've gone from a 13% adoption rate to over 56%, as of the latest published data. That's fantastic progress, but in the wake of that transition we've left behind a virtual graveyard of systems with shards of critical data still clinging to their disk drives; systems that have to be maintained (personnel, equipment, licenses, support) for a long time and that stand squarely in the path of achieving your clinical integration objectives and OpEx dreams.

How do we address this problem?

George suggested buying a bigger house but, as he points out, that rarely works. We need to address it head-on with a strategy that considers all of the risks and garners buy-in from in-house legal and compliance, clinical oversight, and IT. So, much like my personal problem with "stuff," healthcare organizations face the same dilemma: data you have to maintain for legal and compliance reasons, and data that your clinicians want you to maintain because it will one day be useful.

Stay tuned for more notes from me as we dig deeper and examine different alternatives to address this challenge and meet your organizational responsibilities: archival, common repository, and how tying these strategies into the right cloud might really address this problem once and for all. I look forward to the discussion.

Innovation for Improved Care Through Cross-Functional Teams

Information Technology has become an essential component of care delivery. The Healthcare industry has made significant investments in infrastructure and EMR systems to comply with regulatory guidelines, but the most impactful transformative opportunities remain. Physician Satisfaction and Patient Satisfaction through better accessibility and availability of critical systems and new mobile services are already emerging as a source of competitive advantage in Healthcare.

Deploying these new capabilities requires continual evaluation of new technologies and resources dedicated to the purpose. This is much easier said than done and requires that IT leaders take a strategic look at how IT organizations are structured and how innovation can be made inherent.

Present Challenges: Technology Silos
Most IT organizations in Healthcare are still built around functional disciplines: virtualization, storage, networking, security, desktop, application delivery, etc. While this alignment yields expertise in each functional discipline, it creates silos of communication and leadership. When new technologies emerge that blur the boundaries between the disciplines or require substantial changes across them, it can be extremely difficult to even explore those technologies, regardless of their value to improve Physician Satisfaction, Patient Care, or even reduce cost.

Organizations built around technology disciplines create isolated functions and management that inhibit cross-functional communication and innovation. This is a significant barrier to the review and adoption of new solutions in Healthcare.

Disruptive technologies have extraordinary impact, but their very nature creates operational and political challenges. A pertinent example is virtualization itself. The value proposition was outstanding: compelling capital and operating savings, plus measurable improvements in application availability and in change management to support growth. Despite these benefits, many organizations saw slow adoption due to challenges across technology disciplines (chiefly server, storage, and networking), which all had to learn new vocabularies and engage in a more collaborative fashion to design, build, and operate the platform. In time, all three have come to understand the relevant concepts, and virtualization has become the standard for x86 infrastructure. In many cases, this change was facilitated by changes in leadership and the creation of a virtualization team with expertise in all the relevant disciplines.

Almost identical challenges exist today with several new capabilities and service delivery models: Mobility, Virtual Networking, Microsegmentation, Software-defined Storage, and Hybrid Cloud strategies. Successful adoption requires substantial collaboration and new responsibilities for existing staff.

Solution: Cross-Functional Teams for Lifecycle Stages
The solution can be found in organizations whose core business is technology such as Software as a Service providers. For such companies, the business and technology platform are inseparable, and long-term success requires that the infrastructure and operating platform evolve to leverage the best technologies that enable accessibility, availability, and security in the most efficient manner while controlling cost. These technology companies build innovation and operational excellence into their very structure.

Organizations that are built to support application and infrastructure lifecycle management using cross-functional teams are better equipped to evaluate and implement new solutions that lead to greater return on capital, lower operating expense, and superior clinical experiences.

Instead of technical silos, evolving infrastructure groups leverage cross-functional teams focused on application lifecycle stages: Architecture, Engineering, and Operations. Each team has experts in the traditional technical disciplines, but the structure requires collaboration and cross-education to succeed: experts in individual areas educate the rest to evaluate new solutions and develop new infrastructure and application models. As new technologies are selected and new designs chosen, there is tight collaboration with the Engineering team to build the infrastructure and input from the Operations team to ensure intelligent monitoring and feedback are part of the design.

Architecture is tasked with exploring new technologies and architectures. Their primary purpose is to innovate: to evaluate solutions that might better serve the delivery of care and the efficient operation of clinicians, regardless of the technical discipline. They review technologies through evaluation and small pilots in close collaboration with clinicians and other stakeholders. They answer important questions: How can we better serve clinicians and address their mobility needs? How can we enable service delivery to patients on their own devices? How can we effectively deliver applications as a service to affiliates and partners? What new security capabilities are required to address a complex communications and regulatory environment? What is the right solution for newly acquired clinics and remote users? How do we handle multiple-OS clients and user-owned devices? What is the value of Business Continuity, and how can it best be made available to application owners? What does the next generation of our infrastructure look like?

Engineering builds next generation infrastructure and owns change management. The Engineering team works with Architecture on pilots to begin operationalization of new solutions and ensure necessary infrastructure changes are implemented smoothly. When composed of members who are experts in a broad range of technical disciplines, the team as a whole can develop a more comprehensive understanding of the operating environment, which improves design and reduces time to overcome complex challenges that continually encompass more of the traditional technical disciplines.

Operations owns break-fix, monitoring, support, and feedback to Engineering and Architecture to resolve issues in the platform and application stack. The support function is essential to the feedback loop: understanding clinical challenges as a function of training, infrastructure, or application issues so that solutions can be quickly developed and implemented. This team needs an integrated understanding of the entire environment and access to sophisticated, comprehensive monitoring solutions to ensure that problems with infrastructure and applications can be observed and mitigated before affecting service delivery. In the event of service interruptions, their multi-disciplined structure expedites restoration, with feedback to Architecture and Engineering for relevant design changes.

Build for Change: Improve Satisfaction and Care
Innovation is vital, especially as IT services become ever more critical to new healthcare service delivery. The skills are present in the technology silos in most organizations, and the key to unlocking those skills is within the control of IT leadership. Cross-functional teams tasked with designing, building, and running the next-generation infrastructure will build platforms for success and ensure the continued advancement of Care delivery and the new services that support it.

Real World Use Cases for VMware in Healthcare IT

Do you have vCloud for Healthcare?

No, it’s not something you can catch on a cross-country flight or from playing in the rain without a jacket. If you are in Healthcare IT, vCloud for Healthcare is something that you want and need.

Why do I need vCloud for Healthcare, again? Because VMware has created a bundle of technology solutions and services that map to the healthcare-specific outcomes you are being asked to deliver.

“Well, I didn’t know VMware had healthcare-focused IT solutions.” We do. Many vendors say they have a Healthcare practice because they sell into Healthcare.  At VMware, Healthcare is a true vertical.  We have a team of hundreds of people dedicated to collaborating with a broad range of healthcare ecosystem partners like hospitals, clinical application providers, and industry groups, to help deliver safer and more efficient healthcare solutions. VMware is also a Premier Foundation Member of CHIME and a Diamond Member of HIMSS.

Now, before it sounds like we are tooting our own horn… We understand your mission.  At VMware, our team knows that this is the only industry we are all going to be consumers of one day and we need to help get it right.  No one has all of the answers yet, regarding what Healthcare will look like in the future.  However, VMware has a long history in healthcare going back to helping the pharmaceutical companies achieve faster FDA validation. Maybe we don’t wear lab coats, but we still feel like we are in patient care.

Tell us if we got this right… We know you are being asked to deliver better Clinical Outcomes and improved Patient Experience, all while driving costs down and increasing Operational Efficiency.  Oh, by the way, your Electronic Medical Records costs are going up, while reimbursements are trending downward.  And as all this is happening, Mergers & Acquisitions are taking place all around you at a frenzied pace, Regulatory and Security requirements are exploding, and everyone wants Mobility and access from anywhere.  Hopefully, you are starting to see that we are in the boat with you, and rowing in the same direction.

You are being asked to go from filling the hospitals (a volume care model) to emptying them (a wellness versus fee-for-service model).  Yet, that is a huge cultural shift which requires more physicians, and no one wants to become a General Practitioner anymore. Phew.

“Sure, that’s all true, but I work in IT.  My challenges are different.”  We live in your world every day and know that you are being asked to take legacy environments and go from system performance and availability (right in your wheelhouse), to somehow helping deliver long-term sustainability, services sharing, cost/revenue modeling, and mature IT analytics.  This new end state is currently a foreign land to many of you.

In comes vCloud for Healthcare… Here is where we can help.  Think of vCloud for Healthcare as a Software-Defined Enterprise with healthcare outcomes in mind.  vCloud for Healthcare offers support for both Legacy and New systems, has built-in IT Analytics and Automation, delivers Mobility and Point of Care solutions, Hybrid and Public Cloud support, Fault Tolerance and Business Continuity, Self-Service capabilities, and even Financial Analytics.

Ignore HIPAA and PCI and just do the jail time. Well, no one is thinking that. vCloud for Healthcare, above all, offers Integrated Industry Security and Compliance, validation by the world’s leading clinical application providers that you know and love, and leverages all of your existing investments in our KLAS-rated vSphere platform.  Thank you for your business, by the way!  Even our vCloud Air offering will sign a Business Associate Agreement and share the risk with you.

So now that you trust us, I’m sure you are thinking, “Am I the first to do this?”… Here is how your peers are leveraging our software in Healthcare, right now, to help drive operational efficiency and better patient outcomes.  Contact your VMware team for specific references.

Secure Access, Clinical Productivity, BYOD & a Superior User Experience from Anywhere

Security and Compliance is Baked into the Platform

 


Ensuring Zero Downtime


Automation and Self-Service are Key to Reducing OpEx


Running Healthcare IT like a Business


Seamlessly extending your environment with security, agility, and control

 
