The world’s largest banks have historically relied on mainframes to manage their transactions and the cash and profit behind them. In mainframe terms, hundreds of thousands of MIPS are consumed keeping these transactions running, and the cost per MIP can make mainframes extremely expensive to operate. Sears, for example, saw an overall cost of $3,000–$7,000 per MIP per year and didn’t see that as a cost-effective way to compete with Amazon. While the price of MIPS has continued to improve, mainframes can also face pure capacity issues.
In today’s world of financial regulations, risk, and compliance, the entire history of all transactions must be captured, stored, and available to report on or search both immediately and over time. This way, banks can meet audit requirements and support scenarios like a customer service call where an agent searches the transaction history leading up to a customer’s current account balance. The volume of information created across checking, savings, credit card, insurance, and other financial products is tremendous—it’s large enough to bring a mainframe to its knees.
Mainframe Jam Can Be Sticky
No, this is not some type of jelly you find in a mason jar at a farmer’s market in Armonk, NY.
Just like other data stores, a mainframe can be brought to a halt by a spike in resource requests. When a mainframe jams and goes offline for even a few minutes, it can cost a bank millions of dollars per minute. A mainframe jam eventually causes the executive team to drop what they are doing until they know the CIO and the IT organization are 100% focused on removing the jam and preventing the next one.
What causes a mainframe jam? In the case of one of Latin America’s largest banks, it is simply the volume of data. Their mainframe runs transactions and creates logs for compliance, risk, and audit purposes. These logs are sent to a message queue and a database that also run on the mainframe hardware. If the database is overloaded, messages queue up and the system slows down until it starts denying transactions.
Making Big, Fast Data Scale for Financial Services Mainframes
What the bank needed was a way to ingest a large number of transaction log messages without ever denying transactions. The bank, valued at over $60 billion, sees its clients’ transactions generate around 200–300 million messages per day, with peaks of up to 15,000 messages per second. At about 4KB per message, that totals roughly 1.2 terabytes of financial transaction logs per day.
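The sizing figures above hang together. As a quick sanity check (using 4KB as 4,000 bytes for round numbers), the daily volume and the average sustained rate work out like this:

```java
// Back-of-the-envelope sizing for the bank's transaction log stream,
// using the figures quoted in the article.
public class LogVolume {
    public static void main(String[] args) {
        long messagesPerDay = 300_000_000L; // upper end of the 200-300M/day range
        long bytesPerMessage = 4_000L;      // ~4KB per log message
        long secondsPerDay = 86_400L;

        double terabytesPerDay = messagesPerDay * bytesPerMessage / 1e12;
        double avgMessagesPerSecond = (double) messagesPerDay / secondsPerDay;

        // ~1.2 TB/day; ~3,472 msg/sec on average, well under the 15,000/sec peak
        System.out.printf("~%.1f TB/day, ~%.0f msg/sec average%n",
                terabytesPerDay, avgMessagesPerSecond);
    }
}
```

Note the gap between the ~3,500 messages/second average and the 15,000/second peak: the ingest layer has to be provisioned for the spikes, not the average, which is exactly when a mainframe-resident database jams.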
This is big data for sure, but it was not yet fast data.
Architecture Approaches with Pivotal’s vFabric GemFire and Greenplum
The VMware vFabric team partnered with the bank to show them how this data jam could be avoided in the future by building out a proof of concept.
Below you’ll find an architecture diagram of the proposed solution. We used vFabric GemFire as the fast-ingest layer to consume MQSeries messages as soon as they hit the server. This prevents messages from queueing up and allows them to be asynchronously persisted to a Greenplum appliance for analytics. For the customer, the deployment worked like a black box: GemFire is embedded inside the Greenplum hardware appliance, so setup was minimal and easy. vFabric tc Server, Spring Integration, and RabbitMQ could also be used to host various web services.
Basically, this solution allowed the high-volume stream of log events and messages to be stored on a GemFire persistence layer outside the mainframe—one that could scale elastically as needed by simply adding more nodes. Since GemFire holds the data in memory, it is also available for search or reporting as soon as it leaves the MQSeries queue.
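The core idea can be sketched in a few lines of plain Java. To be clear, this is not the GemFire API—the class and method names here (`WriteBehindIngest`, `drainBatch`, and so on) are hypothetical—but it shows the write-behind pattern the architecture relies on: a put lands in memory immediately, where it is searchable, while persistence to the analytics store happens later, in batches, off the hot path.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative write-behind ingest -- a sketch of the pattern, not GemFire itself.
public class WriteBehindIngest {
    private final Map<String, String> region = new ConcurrentHashMap<>();
    private final BlockingQueue<String> writeBehind = new LinkedBlockingQueue<>();
    private final List<List<String>> persistedBatches = new ArrayList<>();

    // Ingest path: never blocks on the warehouse.
    public void put(String key, String logEvent) {
        region.put(key, logEvent); // immediately available in memory for search
        writeBehind.add(key);      // queued for asynchronous persistence
    }

    // Drain path: normally run by a background thread; each batch would be
    // bulk-loaded into Greenplum. Returns the number of events persisted.
    public int drainBatch(int maxBatchSize) {
        List<String> keys = new ArrayList<>();
        writeBehind.drainTo(keys, maxBatchSize);
        List<String> batch = new ArrayList<>();
        for (String key : keys) {
            batch.add(region.get(key));
        }
        if (!batch.isEmpty()) {
            persistedBatches.add(batch); // stand-in for the analytics store
        }
        return batch.size();
    }

    public String get(String key) { return region.get(key); }

    public int pendingWrites() { return writeBehind.size(); }
}
```

In GemFire itself this role is played by its asynchronous write-behind machinery (and, in the proof of concept, by the appliance’s built-in pipeline into Greenplum). The sketch only illustrates why the mainframe’s MQSeries queue stops backing up: the ingest side acknowledges at memory speed, regardless of how quickly the warehouse drains.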
The bank saw several additional advantages to running a big, fast data grid outside the mainframe:
- Running the messages outside of the mainframe would save MIPS and money for a set of data that was not considered core to the business operations.
- The mainframe would be able to handle a much higher transaction throughput, avoid jams, and have a greater ability to scale on its current hardware.
- Business intelligence analyses could be run on the transaction logs. Once placed on a big data grid outside the mainframe, the data could be analyzed by various departments for customer profiling, usage characteristics, fraud, security, customer service, and marketing.
For more information on vFabric GemFire and Greenplum, see: