
Four Strategies for Modernizing Mainframe Applications to the Cloud

Once a company recognizes the potential benefits of modernizing mainframe applications to the cloud, and moves beyond the fear of modernization, there are several strategies to consider. And while these strategies may not be all that new, they bear repeating in the context of transforming mainframe applications into cloud applications. Let’s discuss four proven strategies:

1.    Lift-and-Shift
Several technologies exist for recompiling mainframe applications and executing them on commodity servers in the cloud. Vendors offer emulation platforms to run COBOL, VSAM, ISAM, and CICS 3270 green-screen applications, among others, as-is within a Linux environment. Once “shifted” to a commodity server platform, the application can be placed into virtualized workloads, such as Linux running on vSphere, and managed from within a cloud, such as vCloud. This approach may be the least risky and lowest-cost option. Its primary advantage is that it significantly reduces costs by moving from an expensive, proprietary mainframe environment to a commodity-based processing environment. However, it brings little value to the application itself: you are still operating within a mainframe context, maintaining brittle application code, and relying on mainframe skill sets and experience. This may be the best modernization strategy if your primary goal is to reduce capital expenditures and short-term TCO.

2.    Greenfield Approach
A complete mainframe application rewrite is probably the most expensive and risky modernization approach. However, automated code analysis, code conversion, testing, and cloud deployment tools can greatly reduce the associated risks and costs. Taking a “Greenfield” approach potentially allows you to create the most advanced cloud-based application available, but this must be weighed against the costs and risks of such an undertaking.

A candidate mainframe application for the Greenfield approach is one that continues to provide significant value to the business but is extremely outdated in terms of modern features and functions. Beyond being valuable but antiquated, a candidate should also have a relatively small codebase, which lowers the risk and cost of a rewrite effort. For instance, a client decided to rewrite their mainframe Time and Attendance system as a web- and cloud-based system. The previous mainframe application was usable only by specialists with access to client-side terminal emulation software and knowledge of cryptic function keys and codes. As a result, employees had to submit paper timecards and leave requests, which were then entered by mainframe application specialists. The new system was accessible to all employees using a web browser or smartphone, and was intuitive and easy to use, with drop-down selections, templates, and wizards to assist with time and leave entries. The new relational data store allowed for better reporting and analytics, permitting the client to detect fraud and abuse with greater speed and accuracy.

3.    Incremental Replacement
An incremental replacement approach modernizes a mainframe application one module, function, or procedure at a time. This approach has proven to be both cost-effective and less risky than the other modernization approaches. It is sometimes called the “Strangler Application” because, much like a strangler vine eventually overtakes its host tree, the modernized cloud application eventually overtakes the legacy mainframe application.
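To make the pattern concrete, here is a minimal sketch of the routing idea in Java. It is only an illustration under assumed names: the facade, the gateway classes, and the function names (“telework”, “payroll”) are hypothetical stand-ins, not any particular product’s API.

```java
import java.util.Set;

// Sketch of a "Strangler Application" routing facade: requests for functions
// that have already been modernized go to the new cloud module, while all
// other requests continue to flow to the legacy mainframe application.
interface ModuleInvoker {
    String invoke(String function, String payload);
}

class LegacyMainframeGateway implements ModuleInvoker {
    public String invoke(String function, String payload) {
        // In practice: a call through middleware or a 3270 bridge.
        return "handled by mainframe: " + function;
    }
}

class CloudModuleClient implements ModuleInvoker {
    public String invoke(String function, String payload) {
        // In practice: a call to the modernized module's API.
        return "handled by cloud module: " + function;
    }
}

public class StranglerFacade {
    // Functions already migrated; this set grows as the "vine" spreads.
    private static final Set<String> MODERNIZED = Set.of("telework", "leaveRequest");

    private final ModuleInvoker legacy = new LegacyMainframeGateway();
    private final ModuleInvoker cloud = new CloudModuleClient();

    public String handle(String function, String payload) {
        return MODERNIZED.contains(function)
                ? cloud.invoke(function, payload)
                : legacy.invoke(function, payload);
    }

    public static void main(String[] args) {
        StranglerFacade facade = new StranglerFacade();
        System.out.println(facade.handle("telework", "{}")); // new module
        System.out.println(facade.handle("payroll", "{}"));  // still legacy
    }
}
```

As each additional function is modernized and verified, it is added to the routing set; when the set covers the whole application, the mainframe side can be retired.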

Using an incremental replacement strategy, you first identify the legacy modules or functions that are largely contained within a single data domain and a single business process or stakeholder group. For example, in a large and complex HR system, you might first target a telework module for modernization: it is relatively contained to telework data, with minimal interfacing to personnel data. In addition, a telework module is a great candidate for mobile access: allowing users to request telework from home on a smartphone or tablet offers great convenience. The telework module might also require elastic scalability during an annual enrollment/re-enrollment period. End users gain immediate and tangible value from your modernization effort.

The process of modernizing a mainframe application module requires converting the legacy mainframe code (e.g., COBOL) to a modern cloud application language (e.g., Java, C#, Perl, Ruby, or Node.js) using manual and/or automated methods. Technologies such as middleware, user-exit routines, and API libraries allow the cloud application modules to be integrated with the remaining legacy mainframe application. Ideally, you develop an abstraction layer (APIs or an interface) between the legacy application and the modernized cloud-based application module. The abstraction layer requires more work upfront but pays off in the long run through risk mitigation: you can operate the legacy mainframe functionality side-by-side with the modernized cloud application module. Once the new module is fully tested and confirmed, the mainframe functionality can be decommissioned and the cloud-based functionality takes over.
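As an illustration of that abstraction layer, here is a hedged Java sketch that runs the legacy and modernized implementations side-by-side (a “parallel run”) and logs any mismatch. The TimeEntryService interface, both implementations, and the returned values are hypothetical; they stand in for whatever contract your converted module actually exposes.

```java
// Both the legacy and the modernized module implement one interface, so the
// caller never knows which is answering; that is the abstraction layer.
interface TimeEntryService {
    double hoursWorked(String employeeId, String month);
}

class LegacyTimeEntryService implements TimeEntryService {
    public double hoursWorked(String employeeId, String month) {
        // In practice: invoke the COBOL program via middleware or user exits.
        return 160.0;
    }
}

class CloudTimeEntryService implements TimeEntryService {
    public double hoursWorked(String employeeId, String month) {
        // In practice: call the modernized module's API or data store.
        return 160.0;
    }
}

public class ParallelRunTimeEntryService implements TimeEntryService {
    private final TimeEntryService legacy = new LegacyTimeEntryService();
    private final TimeEntryService cloud = new CloudTimeEntryService();

    public double hoursWorked(String employeeId, String month) {
        double legacyResult = legacy.hoursWorked(employeeId, month);
        double cloudResult = cloud.hoursWorked(employeeId, month);
        // Surface mismatches for investigation, but keep serving the trusted
        // legacy answer until the cloud module is fully confirmed.
        if (Double.compare(legacyResult, cloudResult) != 0) {
            System.err.printf("Mismatch for %s/%s: legacy=%.2f cloud=%.2f%n",
                    employeeId, month, legacyResult, cloudResult);
        }
        return legacyResult;
    }

    public static void main(String[] args) {
        TimeEntryService service = new ParallelRunTimeEntryService();
        System.out.println(service.hoursWorked("E-1001", "2013-05"));
    }
}
```

Once the mismatch log stays clean across a full business cycle, the wrapper can be flipped to return the cloud result and the legacy path decommissioned.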

4.    Application Tier Replacement
This approach focuses on replacing an entire tier of the mainframe application, such as the data tier or the user interface. Using integration technologies (JCICS, COBOL user-exit routines, middleware, etc.), you replace an entire legacy application layer with a modern cloud-based application layer. For example, we have clients that have replaced mainframe data stores, such as VSAM or ISAM files on DASD, with VMware’s vFabric GemFire, an elastic, in-memory, distributed data fabric. The results are impressive: faster access to data, with the ability to easily share that data in real time with other applications. GemFire also provides the ability to offload batch processing to cloud-based applications, potentially reducing mainframe costs, since high-end MIPS usage typically occurs during batch processing.
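To show what the replacement data tier looks like from application code, here is a minimal GemFire client sketch. The package names follow vFabric GemFire (the Apache Geode descendant uses org.apache.geode instead), and the locator address, region name, and record format are assumptions for the example.

```java
import com.gemstone.gemfire.cache.Region;
import com.gemstone.gemfire.cache.client.ClientCache;
import com.gemstone.gemfire.cache.client.ClientCacheFactory;
import com.gemstone.gemfire.cache.client.ClientRegionShortcut;

public class GemFireDataTier {
    public static void main(String[] args) {
        // Connect to the distributed data fabric through a locator.
        ClientCache cache = new ClientCacheFactory()
                .addPoolLocator("locator.example.com", 10334)
                .create();

        // A PROXY region holds no local state; every operation goes to the grid.
        Region<String, String> accounts = cache
                .<String, String>createClientRegionFactory(ClientRegionShortcut.PROXY)
                .create("CustomerAccounts");

        // Keyed put/get stands in for the VSAM read/write by record key.
        accounts.put("ACCT-0001", "{\"name\":\"Ada\",\"balance\":100.00}");
        System.out.println(accounts.get("ACCT-0001"));

        cache.close();
    }
}
```

The keyed access pattern maps naturally onto what keyed VSAM applications already do, which is one reason the data tier is often the first tier to move.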

The tier replacement approach can be used for both batch and transactional applications. Batch may be the better starting point, as it doesn’t require user interaction and is typically isolated to sequential execution across a standard, finite set of data. When I supported a legacy mainframe application, I vividly remember hoping on several occasions that our nightly batch processes would “win the race to sunrise” so that our users would have access to the application in the morning. Like most mainframe batch processing, our processes were embarrassingly parallel; moving them to a distributed data grid like GemFire would have improved performance by orders of magnitude, and I would have slept better at night!
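To see why, here is a small, self-contained Java sketch: because each batch record is independent, the work fans out across every available core with a parallel stream, and a distributed data grid applies the same idea across many nodes. The record set and per-record calculation are invented for the example.

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class NightlyBatch {
    // Stand-in for one independent batch step (e.g., an interest calculation).
    static double process(int recordId) {
        return recordId * 1.05; // placeholder computation
    }

    public static void main(String[] args) {
        List<Integer> records = IntStream.rangeClosed(1, 1_000_000)
                .boxed()
                .collect(Collectors.toList());

        long start = System.nanoTime();
        // Independent records mean no coordination: the runtime simply splits
        // the list across cores. A data grid does the same across servers,
        // processing each partition where the data lives.
        double total = records.parallelStream()
                .mapToDouble(NightlyBatch::process)
                .sum();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        System.out.printf("Processed %,d records in %d ms (total=%.2f)%n",
                records.size(), elapsedMs, total);
    }
}
```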

Additional Considerations when Modernizing Mainframe Applications
Regardless of the strategy you select, other concerns that go beyond the application itself must be addressed:

  • Consider all the integrations between the mainframe and other applications; those integrating applications will need to be updated and tested.
  • Data schemas and formats may need to be restructured, and data cleansed.
  • Business processes may need to be updated. For example, why batch up and spool reports to centralized printers when users can view electronic reports online and on demand in the cloud?
  • Operations, monitoring, and management tools may need to change in order to take full advantage of cloud offerings, e.g., metered usage and workload migration.

Pick the Right Strategy, Pick the Right Partners
Depending upon your business needs and constraints, proven strategies exist that allow you to modernize your mainframe applications into cloud applications. VMware and its vast partner ecosystem offer the tools, technologies, people, and processes to help you successfully implement these strategies.

For my next (and last) blog entry on this topic, I’ll offer some design considerations for modernizing mainframe apps to the cloud, explaining why simply dropping an application into the cloud doesn’t make it a cloud application.

About the Author: Mel Stockwell is a Deputy Chief Cloud Strategist focused on VMware’s Public Sector customers, helping organizations address the opportunities, costs, and challenges of application development and modernization efforts through the adoption of Cloud Application Platforms. Mel brings over 23 years of experience developing, selling, and implementing enterprise software in the public sector. He has worked for the Department of the Interior, FDIC, the US Patent and Trademark Office, IONA Technologies, Sterling Software, and EDS.

7 thoughts on “Four Strategies for Modernizing Mainframe Applications to the Cloud”

  1. Stock

    Good article!
    Personally, I don’t think we can remove the MF in the future. It still teaches us how to build enterprise applications, and not only applications (infrastructure as well).

    Nevertheless I think we should use MF for the right use cases (and customers!!).

    I’m contributing to an open source solution that provides a batch execution environment in a cloud environment. It’s called “JEM, the BEE” and you can see it starting from this site (www.pepstock.org).
    Spring Batch is supported, customized to use entities and concepts close to the MF, such as GDG.
    We’ve evaluated GemFire, but we decided to use another in-memory data grid.

    I think the key to success for open systems is learning from the MF… On the virtualization topic, VMware knows well what I mean.


    1. Jon

      All I’ve seen is complete failure when trying to rewrite a legacy application with the most inefficient language ever invented (Java). Any volume. You will fail. z/OS will still be the backbone of the world’s most critical systems when everyone who reads this is dead. How much are you willing to bet? They said it was dead 40 years ago. Good luck.

      1. Stock

        Jon, my experience in the REAL enterprise says that moving legacy concepts to open-system environments is always a good thing.
        I’ve worked on the mainframe for more than 20 years and I know well the strengths of that platform.
        But I don’t agree with your view.
        In my post, I wrote that we couldn’t remove the MF… I don’t think we should substitute the MF, but maybe we should use it for its real strengths, like TP monitors or the database, where you can leverage the I/O subsystem, the real difference from open systems.
        About Java, that’s just your opinion. It doesn’t make sense to start a long discussion where we both have our own reasons.
        Good luck to you too.

  2. Mani

    Excellent post! Modernizing legacy mainframe applications in the cloud will be a real feast for major large-scale businesses.

  3. Doss

    Hello Mel,
    It’s a pleasure to read thought-provoking articles of yours again. A well-researched and well-presented paper.
    “Transformation of Effectiveness in Business Strategy as the execution conditions and competition changes dynamically will be the continuing theme in IT Strategy”.
    I am leaning more towards a hybrid strategy with “Green Field” as the target strategy, having been in leadership roles in a great success case of a similar approach: the modernization of a mission-critical legacy Patent Application processing system from its legacy COBOL (on Unisys A16) roots to an open-system architecture (no Cloud at that time). Given the maturity of technology and the tools available in present times, in this proposed hybrid strategy I would choose modernization of Application Integration services (loosely coupled Web Services), incremental replacement of business functionality (application sub-systems/modules), and Application tier replacement, blended in the right mix as a tactical execution strategy to achieve modernization goals while mitigating the operational risks and ensuring a smooth transition.
    In calculating the Cost-Benefits and TCO, one must explore and account for the potential savings from simple process improvements that could be undertaken along the way and the resulting productivity gains the re-engineering would bring in. In the case of USPTO prior to legacy modernization, hundreds of preprinted forms were used, and most of the documents filed by applicants and generated by PTO itself were maintained in hard-copy papers and large file wrappers. The mainframe maintained only the administrative data. PTO spent large sums and resources to maintain the stock of forms, storage, and handling; these are now maintained in electronic formats. The entire investment in re-engineering has been paid off in the past 10 years on these few factors alone. With Client-Server architecture, BI analytics, and Web Services for integration with other Information Systems, the application is better suited for the next Business Process modernization to reduce patent pendency (the time taken to grant a patent from the day it’s filed) from 20-22 months to the 15 months PTO has been promising its customers for decades now! Even though not a single line of COBOL code remains in patent processing, the business goals have not been achieved. The Transformation needs to continue in USPTO to achieve its stated business goals.

    I am looking forward to your next article on the design strategy. With best regards and Good Luck!

