There are many reasons why customers want to migrate from the mainframe to a modern platform. Reducing cost is obviously one of the main ones, and it breaks down into several aspects:
- Reducing load (MIPS)
- Reducing data-center floor space and, sometimes, power usage
- Increasing the consolidation ratio
- Being able to run on commodity, far less expensive hardware
- Using a more productive development environment
- Employing less specialized developers, since mainframe developers grow rarer (and therefore more expensive) every day
Other reasons relate to time-to-market (how can I change my mainframe application overnight to comply with new market regulations, or to support the new product that launches next week?) or to eliminating vendor lock-in.
Regardless, many customers are still very cautious about offloading from the mainframe, and this is understandable. These legacy applications still run the core business for many of them, and although they may cost a lot to maintain, they are usually very reliable. So the main strategy has been to write new applications (for example, to serve new devices or products) on a modern environment and have those applications access the mainframe to read and write the core-business data – which is still kept on the mainframe.
Although this strategy can sometimes speed up the development of new systems, the mainframe is still needed for the vast majority of data access, and MIPS usage is usually *not* reduced. In fact, it can increase as new users (and business transactions) arrive through new channels and devices, all of which ultimately access mainframe data.
This approach also creates long-term problems. Data is segregated into two different models – the legacy mainframe model and the new modernized model – which co-exist but drift further apart every day. Complex (and costly) hooks must be written into the applications to convert between the "new data model" and the "legacy mainframe model". Synchronization of data is also a challenge and frequently causes issues, leading customers to lose confidence in the new platforms and become even more reluctant to offload from the trusted mainframe.
Offloading from the mainframe using Gemfire
Based on this, a new strategy has been used by many customers worldwide to successfully migrate from the mainframe to a much more cost-effective – yet still extremely reliable – modern platform, with close-to-real-time performance, following an incremental, step-by-step approach. This allows those companies not only to modernize their development environment but, more importantly, to greatly reduce their MIPS usage, and even to fully migrate off the mainframe once they are sufficiently comfortable doing so.
Using the Gemfire Data Fabric platform – based on an elastic, high-performance data grid model – customers can build a distributed, horizontally scalable data access layer on top of their legacy platforms (e.g. the mainframe). Data can be loaded from the mainframe (or any other legacy platform) and written back to it as needed, while transactions run at microsecond latency in the distributed memory of the Gemfire data grid cluster members – far faster and more scalable than traditional transactions based on disk persistence. Replication between server peers during transactions is transparent and scalable, and guarantees as much transaction consistency and durability as needed. Although data is written to disk asynchronously (so transactions do not depend on poor disk I/O throughput), it is replicated through the memory of the participating peers, so the chance of data loss is limited to a catastrophic failure (the complete loss of a data center) – that is, as small as with a traditional disk-persisted database or mainframe approach. Even then, Gemfire can replicate reliably over a WAN to an alternate data center (either backup or active-active), guaranteeing geographical redundancy.
This way, the mainframe can still be used as the source of legacy data and as an archival data store, but it does not necessarily participate in any transactions (although it can in particular cases). This immediately speeds transactions up to memory and local-network rates and enables horizontal scaling on demand, while reducing the load on the mainframe. When needed, Gemfire guarantees that data is still written back to the mainframe asynchronously (usually in batches, on a sub-second basis), so other legacy applications that still rely on the legacy data store are not affected.
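The asynchronous write-back just described is the classic write-behind pattern. The Python sketch below is a conceptual illustration only, not Gemfire code (in Gemfire itself this role is played by its asynchronous event queue mechanism): writes are acknowledged at memory speed, and a background thread flushes them to the slow backing store in sub-second batches.

```python
import threading
import time
from queue import Queue

class WriteBehindCache:
    """In-memory cache that acknowledges writes immediately and flushes
    them to a slow backing store (e.g. a mainframe adapter) in batches."""

    def __init__(self, backing_store_writer, batch_size=100, flush_interval=0.5):
        self._data = {}
        self._queue = Queue()
        self._writer = backing_store_writer      # hypothetical mainframe adapter
        self._batch_size = batch_size
        self._flush_interval = flush_interval    # sub-second, as in the text
        threading.Thread(target=self._flush_loop, daemon=True).start()

    def put(self, key, value):
        self._data[key] = value                  # memory-speed transaction
        self._queue.put((key, value))            # persisted later, in a batch

    def get(self, key):
        return self._data.get(key)

    def _flush_loop(self):
        while True:
            time.sleep(self._flush_interval)
            batch = []
            while not self._queue.empty() and len(batch) < self._batch_size:
                batch.append(self._queue.get())
            if batch:
                self._writer(batch)              # one bulk write to the backing store
```

The key design point is that the caller of `put` never waits on the backing store's I/O; only the background flusher does.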
Consistency between the legacy data store and the Data Fabric is kept using events that Gemfire triggers on each data access. For example, data can also be written to the mainframe, to keep it in sync, each time it is inserted or updated in Gemfire. Data can likewise be loaded from the mainframe on a schedule, or each time a value is not found in the Data Fabric. In the other direction, a change to data kept in the legacy data store can be sent to a queue, or trigger a function, to let Gemfire know a value has changed.
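The miss-triggered load and synchronous write-back just described are the read-through and write-through patterns, which Gemfire exposes through its CacheLoader and CacheWriter callback interfaces. A minimal Python sketch of the pattern (the class and the legacy callbacks are hypothetical stand-ins for a mainframe adapter):

```python
class ReadThroughCache:
    """On a cache miss, load the value from the legacy store and keep it;
    on every write, also push the change to the legacy store."""

    def __init__(self, legacy_load, legacy_write):
        self._data = {}
        self._load = legacy_load        # e.g. a lookup against the mainframe
        self._write = legacy_write      # e.g. an update sent to the mainframe

    def get(self, key):
        if key not in self._data:       # miss: fall back to the legacy store
            self._data[key] = self._load(key)
        return self._data[key]

    def put(self, key, value):
        self._data[key] = value
        self._write(key, value)         # keep the legacy store in sync

    def invalidate(self, key):
        # Called when a change notification arrives from the legacy side,
        # so the next get() reloads the fresh value.
        self._data.pop(key, None)
```

Once a value has been loaded, repeated reads are served from memory; the legacy store is only touched again if the entry is invalidated by a change event.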
The same events can be used to notify client applications of simple changes to values stored in the Data Fabric, or of changes matching complex criteria (that is, a continuously running server-side query).
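A toy sketch of such a continuously running query in Python: clients register a criterion, and the region calls them back whenever a newly written value matches it. (Gemfire's actual continuous queries are expressed in its OQL query language; the predicate-based API here is a deliberate simplification.)

```python
class ContinuousQueryRegion:
    """Notifies registered listeners whenever a newly written value
    matches their criteria -- a toy version of a server-side
    continuously running query."""

    def __init__(self):
        self._data = {}
        self._subscriptions = []        # (predicate, callback) pairs

    def register_cq(self, predicate, callback):
        self._subscriptions.append((predicate, callback))

    def put(self, key, value):
        self._data[key] = value
        # Evaluate every registered criterion against the new value,
        # server-side, on each write.
        for predicate, callback in self._subscriptions:
            if predicate(value):
                callback(key, value)
```

For example, a trading application could register for every trade above a threshold and be pushed the matching entries the instant they are written, instead of polling.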
The combination of these data events – based on either simple or complex criteria, and able to trigger further events – can be seen as an embedded, data-friendly Complex Event Processing (CEP) platform, and forms the basis for an extremely valuable real-time, on-demand business data platform.
Probably one of the most exciting characteristics of this approach, however, is that the Data Fabric runs on commodity hardware and scales horizontally and linearly. Customers can start with very small environments and add more servers when needed or desired, immediately increasing not only memory capacity but also processing power, since Gemfire works as a grid computing platform, distributing processing between peers (Read: Running jobs on mainframe < link to other article>). Most cases report transaction throughput improvements of hundreds to thousands of times, and jobs that traditionally ran for hours now complete in a few minutes or even seconds.
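The batch speed-up comes from running the job in parallel on every member, each against the data it holds locally, then combining the partial results – the pattern Gemfire exposes through its function execution service. A minimal Python sketch of this pattern, with threads standing in for grid members (the function names are hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

def run_on_grid(partitions, job, combine):
    """Run `job` on each data partition in parallel -- as if each grid
    member processed the data it holds locally -- then combine the
    partial results into the final answer."""
    with ThreadPoolExecutor(max_workers=len(partitions)) as pool:
        partials = list(pool.map(job, partitions))   # one job per member
    return combine(partials)                         # e.g. merge partial sums
```

A nightly balance-aggregation batch, for instance, becomes `run_on_grid(partitions, sum, sum)`: each partition is summed in parallel, and the partial sums are combined. Adding members adds partitions, which is why throughput grows with the cluster.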
Based on this, Gemfire is suitable for basically two use cases when offloading from the mainframe:
- Low-latency, high-volume data transactions
- Long-running, data-intensive jobs – such as batches running overnight
Gemfire has been used with great success as the distributed data platform for the core business of large enterprises all over the world for the last decade, in demanding industries such as financial trading and telco prepaid real-time charging. It plays an important role in VMware's Cloud Application Platform offering, solving a number of challenges for data in a modern world, such as the classic horizontal-scalability limits of relational databases, disk I/O bottlenecks, big data / data explosion, and scalable access to legacy systems. Return on investment for such projects has come in as little as a few months, based on large platform cost savings and the business advantage achieved.
Most customers start by using Gemfire on top of their legacy platforms (e.g. traditional RDBMSs, mainframes, file-based persistence) to immediately gain a dramatic performance increase in their transactions. Over time, they gradually modernize their applications to access data directly from Gemfire, and some eventually realize they no longer need their legacy platforms at all.