

5 Steps to Mainframe Modernization with a Big Fast Data Fabric

For growth initiatives, many companies are looking to innovate by ramping up analytical, mobile, social, big data, and cloud initiatives. For example, GE is one growth-oriented company and just announced heavy investment in the Industrial Internet with GoPivotal. One area of concern for many well-established businesses is what to do with their mainframe-powered applications. Mainframes are expensive to run, but the applications that run on them are typically critical, and the business cannot afford to risk downtime or any degradation in service. As a result, the idea of modernizing a mainframe application has often faced major roadblocks.

There are ways to preserve the mainframe while improving application performance, reliability, and even usability. As one of the world's largest banks has seen, big, fast data grids provide an incremental approach to mainframe modernization: they reduce risk, lower operational costs, increase data processing performance, and open up innovative analytics capabilities for the business, all based on the same cloud computing technologies that power internet powerhouses and financial trading markets.

The Fast Data component

One customer used vFabric GemFire to save $75 million on a mainframe modernization project. For the past ten years, GemFire has served as a highly performant, horizontally scalable data transaction layer, or big data grid, for mission-critical applications. Both GemFire and its sister product, SQLFire, are known to scale linearly. Key use cases include credit-card transaction systems, stock trading platforms, foreign exchange systems, web-based travel reservation systems, and mainframe batch data offloading. As an in-memory data grid, its main advantage is performing in-memory, sub-millisecond transactions while still maintaining the highest standards for fault tolerance, high availability, and linear scalability on a distributed platform.
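The basic contract of an in-memory data grid can be sketched in a few lines. This is a minimal, self-contained simulation: a ConcurrentHashMap stands in for a distributed region, and the class and method names are illustrative, not GemFire's actual client API (in a real grid the region would be partitioned and replicated across server processes).

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal stand-in for a data-grid "region": a key-value map with the same
// put/get contract. In a real grid the entries would be partitioned and
// replicated across servers; here they live in-process for illustration.
public class RegionSketch {
    private final Map<String, Double> accountBalances = new ConcurrentHashMap<>();

    public void put(String accountId, double balance) {
        accountBalances.put(accountId, balance); // in-memory write, no disk round trip
    }

    public Double get(String accountId) {
        return accountBalances.get(accountId);   // in-memory read, sub-millisecond
    }

    public static void main(String[] args) {
        RegionSketch region = new RegionSketch();
        region.put("acct-001", 1500.00);
        System.out.println(region.get("acct-001")); // prints 1500.0
    }
}
```

The point of the pattern is that applications keep a simple map-like interface while the grid handles distribution, replication, and failover underneath.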

The Big Data component

For More Information:

vFabric GemFire
vFabric SQLFire
Greenplum

On the big data side, Greenplum has a multitude of customer case studies with companies like O'Reilly Media, Skype, and NYSE Euronext. These solutions are well known for analytics on multi-terabyte or petabyte data sets, where traditional relational databases begin to break down, stop scaling, or handle unstructured data poorly. Greenplum technology provides a complete big data solution for both structured and unstructured data, based on the Greenplum Database and Pivotal HD, a commercially supported distribution of Hadoop that includes HDFS, MapReduce, Hive, Pig, HBase, Zookeeper, Sqoop, and Flume. The recently announced Pivotal Advanced Database Services, powered by HAWQ, allow SQL queries to run on the fastest Hadoop-based query interface on the market today, a 100X+ faster solution.

Fast Data + Big Data: Better together

Big data and fast data solutions make a lot of sense together as we’ve seen on many customer solution blueprints delivered over the past several months. This is because most business owners and administrators aren’t able to fully utilize the data being captured in their transactional systems on a daily basis. From a business value perspective, the fast data layer can bring scalability and reliability to the business while reducing the cost per transaction. Most transactional systems also benefit from predictive analytics on transacted data, and the fast data layer enables this type of real-time transaction analysis that can also incorporate big data result-sets. The big data layer provides insight on mountains of data to help with decision making and support traditional performance metrics or enable more advanced types of visualization and data science.
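The "better together" idea can be sketched concretely: each transaction is written to the fast-data layer and, at the same time, folded into a running aggregate that a real-time analytics model could read. This is an illustrative, self-contained example; the class and field names are assumptions, not a product API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.DoubleAdder;

// Sketch: every incoming transaction lands in the fast-data layer (the
// transactions map) and simultaneously updates a per-customer running
// aggregate that real-time or predictive analytics can consume.
public class FastPlusBig {
    private final Map<String, Double> transactions = new ConcurrentHashMap<>();
    private final Map<String, DoubleAdder> spendByCustomer = new ConcurrentHashMap<>();

    public void record(String txId, String customerId, double amount) {
        transactions.put(txId, amount);                  // transactional write
        spendByCustomer
            .computeIfAbsent(customerId, k -> new DoubleAdder())
            .add(amount);                                // real-time aggregate update
    }

    public double totalSpend(String customerId) {
        DoubleAdder sum = spendByCustomer.get(customerId);
        return sum == null ? 0.0 : sum.sum();
    }
}
```

In a production deployment the aggregate side would typically feed the big data layer asynchronously rather than live in the same process, but the dual write-path is the essential shape.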

From Mainframe to Big Fast Data Architecture

Moving from mainframe to big, fast data is an evolution. A phased, step-by-step approach is the recommended way to modernize applications: it minimizes risk and makes the investment easier to justify. After working with many customers who face this problem, here is one approach we recommend.

1. Selecting the Pilot: Pick a Starting Point

As with most major initiatives, an initial use case of small scope should be used as a pilot to validate the architecture choices and prove a return for the overall project. The ideal project candidate should a) have few or no integration points with other systems on the legacy platform, b) be small but critical to existing business processes, c) consume a considerable amount of operational expense, and/or d) represent a business risk in its current state. By screening candidates this way, we should be able to deliver something of value to the business, reduce OpEx, and make the improvement quickly while avoiding bad decisions.

2. Designing the Modern Data Architecture for Co-Existence

The goal of this step is to determine what legacy data stays, migrates, or integrates. First, there is an analysis of the pilot's data model. Then, we begin to design a data architecture that makes sense for a highly scalable, distributed data grid while still supporting the existing business model and processes. The analysis should identify which entities are transactional, a mix of transactional and analytical (e.g. part of a real-time analytics model), or purely analytical. During this process, we make decisions regarding data model partitioning, replication, colocation, disaster recovery, transaction consistency, and more. We also decide which data to leave on the legacy platform, accessing it on the fly as needed using the GemFire integration layer capabilities.
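The partitioning and colocation decision can be illustrated with a simple routing rule: when related entities (say, a customer and its orders) are routed by the same key, they land on the same partition and can be processed or joined locally. The class below is a hand-rolled sketch under that assumption; real data grids handle routing internally, and the partition count here is arbitrary.

```java
// Sketch of the partitioning/colocation idea: any entity routed by the same
// colocation key (here, customerId) resolves to the same partition, so
// related records can be joined without crossing the network.
public class PartitionRouter {
    private final int partitions;

    public PartitionRouter(int partitions) {
        this.partitions = partitions;
    }

    // floorMod keeps the result non-negative even for negative hash codes.
    public int partitionFor(String customerId) {
        return Math.floorMod(customerId.hashCode(), partitions);
    }

    public static void main(String[] args) {
        PartitionRouter router = new PartitionRouter(8);
        int customerPartition = router.partitionFor("cust-42"); // customer record
        int orderPartition    = router.partitionFor("cust-42"); // order keyed by customerId
        System.out.println(customerPartition == orderPartition); // prints true
    }
}
```

Choosing the colocation key is the heart of this design step: entities that are frequently read or updated together should share one.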

3. Integrating Mainframe and Big Data Grid

While the data architecture is being defined, we start building the initial big fast data infrastructure. Then the pilot use case migrates to the modernized architecture. By using the GemFire/SQLFire asynchronous integration layer, we can maintain data consistency between the new application and the legacy mainframe application. Transactions done on the modernized system are delivered to both the legacy system and the analytics platform. Integration with legacy can be achieved using a mainframe connector, CICS Web Services, a messaging platform, or any other integration protocol.
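The asynchronous integration layer can be sketched as a write-behind queue: the transaction commits to the grid immediately, and a background worker later forwards each event to both the legacy mainframe and the analytics platform. This is a minimal in-process simulation, not the actual GemFire async event machinery; the delivery targets are stand-ins for a CICS web service call or a messaging bridge.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of write-behind integration: commits return immediately, and a
// drain step fans each queued event out to legacy and analytics sinks.
public class WriteBehindSketch {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    final List<String> legacy = new ArrayList<>();     // stand-in for the mainframe
    final List<String> analytics = new ArrayList<>();  // stand-in for the big data store

    public void commit(String txEvent) {
        queue.add(txEvent);        // grid commit returns; delivery happens later
    }

    // In production this would run on a long-lived worker thread.
    public void drainOnce() {
        String event;
        while ((event = queue.poll()) != null) {
            legacy.add(event);     // e.g. CICS Web Services or an MQ bridge
            analytics.add(event);  // e.g. load into the analytics platform
        }
    }
}
```

The key property is that mainframe latency no longer sits on the transaction's critical path; the queue absorbs it.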

4. First Deployment Risk Mitigation Plans

When the pilot is complete and has proven to perform better than the legacy system with much lower maintenance costs, we are ready to partially turn off our first piece of the legacy system. The legacy system, especially if living on a mainframe, should stay in place for a period of time to support ongoing business. During this time, new transactions should start happening on the new system, and data can be validated against the original system to make sure it behaves exactly as expected. This minimizes risk, assures a seamless architecture evolution, and avoids headaches from unexpected problems. In this phase the new deployment acts as an advanced operational cache: the mainframe still receives the data it needs while both the analytical and the real-time or predictive data stores are updated.
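The parallel-run validation described above amounts to diffing the outputs of the two systems key by key: the same transactions are executed on both for a period, and any mismatch flags a behavioral difference before legacy is retired. The record shapes and names below are illustrative assumptions.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of shadow validation: compare the records produced by the legacy
// system and the modernized system and collect every key whose values differ.
public class ShadowValidator {
    public static Map<String, String> diff(Map<String, String> legacy,
                                           Map<String, String> modern) {
        Map<String, String> mismatches = new HashMap<>();
        for (Map.Entry<String, String> e : legacy.entrySet()) {
            String newValue = modern.get(e.getKey());
            if (!e.getValue().equals(newValue)) {
                mismatches.put(e.getKey(), e.getValue() + " != " + newValue);
            }
        }
        return mismatches;
    }
}
```

An empty diff over a representative traffic window is the signal that the legacy piece can be switched off.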

5. Evolution

Step by step, other applications or portions of the mainframe can be carefully migrated to the new platform in a similar manner, with minimal risk. As this happens, we gradually reduce mainframe usage, costs, and time to market for new deployments. We gain a level of scalability proven by data grids that run the most rigorous and high-performance data environments on the planet, those that power financial transactions. We also enable new methods of analysis to unleash business insight and value.

Of course, there is an initial capital expense; however, the investment is justified by reduced operational expenses. Companies can also save on CapEx by leveraging existing, partially used infrastructure, since the software runs on commodity hardware.

About the Author: Frederico Melo (a.k.a. Fred Melo) has a degree in Computer Science and has been working in software engineering for the last 14 years. His areas of expertise include grid computing, highly scalable architectures, big data, fast data, and legacy modernization. He is currently based in Sao Paulo, Brazil, working as a Field Engineer for Pivotal.
