Several technologies exist for recompiling mainframe applications and running them on commodity servers in the cloud. Vendors offer emulation platforms that run COBOL, VSAM, ISAM, CICS 3270 green-screen applications, and more, as-is within a Linux environment. Once “shifted” to a commodity server platform, the application can be placed into virtualized workloads, such as Linux running on vSphere, and managed from within a cloud, such as vCloud. This approach may be the least risky and lowest-cost option. Its primary advantage is that it significantly reduces costs by moving from an expensive, proprietary mainframe environment to a commodity-based processing environment. However, it brings little value to the modernized application itself: you are still operating within a mainframe context, maintaining brittle application code, and relying on mainframe skill sets and experience. This may be the best modernization strategy if your primary goal is to reduce capital expenditures and short-term TCO.
Disabled SSL/TLS compression. OpenSSL compression is now disabled by default to protect against the CRIME attack. The mod_ssl “SSLCompression on” configuration option has been added to allow administrators to re-enable compression. See the Vulnerability Summary for CVE-2012-4929.
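For administrators who accept the risk, re-enabling compression is a one-line change in the httpd SSL configuration. A minimal sketch, assuming Apache httpd with a mod_ssl build whose OpenSSL supports compression:

```apache
# Compression is disabled by default to mitigate CRIME (CVE-2012-4929).
# Re-enable it only if you have weighed the risk.
SSLCompression on
```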
The VMware Partner Exchange (PEX) is coming up next month, February 25-28, in Las Vegas. It is the premier event for partners to meet with VMware product specialists, who educate and enable them to sell VMware products and services. Many partners attend just for the virtualization software itself, but over the past few years especially, many have expanded their go-to-market strategies to include building cloud-ready apps with the vFabric product family.
This year, partners attending the event may have in mind the Pivotal Initiative announced last December, under which a significant portion of the vFabric products will move out from under the VMware umbrella and into a new company. Partners attending PEX, or deciding whether to attend, may question whether they should still pay attention to the vFabric portion of the event. Our answer is a resounding YES. Here’s why:
In a guest post today, David Klee, a solutions architect from House of Brick Technologies, shares with us some of the top data disasters in recent IT, and one way he sees to avoid them:
What good is a security camera in the dark?
It’s not any good at all.
Without light (infra-red or otherwise), a security camera does nothing to help prevent or record theft, and the same goes for “Shadow IT.” When we don’t have data in the light and under surveillance, our ability to watch over it is drastically impaired.
Chief Security Officers and CIOs know that somewhere in their organization, a well-intentioned developer or business person is moving valuable data into the shadows by putting it in the cloud. This scares the “stuff” out of security-minded executives, because 2012 was another wild year of data (in)security around the world. How secure is your data? Do you know who has access to your sensitive data, or where each and every copy of your data resides? Do you have a list of all the places corporate data lives in the cloud? If you don’t know, you are in the shadows.
So, why haven’t more IT organizations embarked upon modernization efforts?
Well, modernizing applications, especially mainframe applications, comes with a perceived set of formidable challenges. As part of our “Mission Possible 2013” series, let’s take a closer look at the six main reasons companies shy away from even approaching a mainframe modernization effort. (Note: The next blog will explain why these challenges are not so formidable, and I’ll offer proven strategies for overcoming each one.)
1. Interruptions to Business
Mainframes are highly reliable, available, and serviceable, so they tend to run your business- and mission-critical apps. In addition, mainframe apps are very mature because they’ve been in production for years, if not decades. IT organizations fear pulling the plug on a mainframe app without extensively testing the new app (perhaps for months or years), because doing so may cause catastrophic issues for the business. To decrease the possibility of service interruptions, IT teams can do two things: use modern software testing methods, or run the legacy system in parallel with the modernized app for some time. But why risk testing an entire mission-critical application wholesale? In the next blog, I’ll describe incremental approaches for modernizing mainframe apps.
First National Bank (FNB) is the second-largest financial institution in South Africa. Formed in 1998, the bank has grown significantly over the years, through both organic growth and acquisitions. Despite the rapid growth, the bank has prided itself on using technology and innovation to compete in today’s demanding markets, investing largely in mobile and banking apps that expand its reach to customers. Its efforts at innovation have paid off: FNB was named the World’s Most Innovative Bank for 2012.
Earlier this month, we had the opportunity to talk to Mark Jeffery and Ramon Nogueira, two of the application architects behind FNB’s call center, about how they are automating many of the bank’s customer processes. A new telephony platform was introduced and has greatly improved customer service levels, reducing the time a customer must wait on the phone to resolve an enquiry or perform a transaction. One of the secrets to speeding these transactions up was to separate the telephony platform from the CRM system and place a messaging broker between them. After careful consideration of several potential tools, including ZeroMQ and HornetQ, FNB decided to use VMware’s vFabric RabbitMQ.
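The decoupling pattern FNB describes can be sketched in a few lines of Python. This is not FNB’s code: the function names and message shape are invented, and Python’s in-process `queue.Queue` stands in for RabbitMQ so the sketch is self-contained. With RabbitMQ itself, the telephony side would publish to an exchange via a client library such as pika, and the CRM side would consume from a bound queue.

```python
import json
import queue

# Stand-in for the message broker: neither side calls the other directly.
broker = queue.Queue()

def telephony_event(call_id, customer_id, action):
    """Telephony platform publishes events instead of invoking the CRM."""
    broker.put(json.dumps({"call_id": call_id,
                           "customer_id": customer_id,
                           "action": action}))

def crm_consume():
    """CRM drains messages at its own pace, independent of telephony load."""
    handled = []
    while not broker.empty():
        event = json.loads(broker.get())
        handled.append((event["customer_id"], event["action"]))
    return handled

telephony_event("c-100", "cust-42", "balance_enquiry")
telephony_event("c-101", "cust-77", "transfer")
print(crm_consume())  # [('cust-42', 'balance_enquiry'), ('cust-77', 'transfer')]
```

The point of the broker in the middle is exactly what the sketch shows: the telephony platform never blocks on the CRM, and either side can be scaled or restarted independently.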
According to one of our partners, vFabric SQLFire is a product he wishes more customers would use.
“SQLFire is a game-changer. I think many companies underestimate the value of scaling the data layer horizontally. Every project I propose has a business case, and I see a tremendous amount of value being unlocked with this product—not just for the CIO or CTO’s agenda, but for the CFO and CEO. Then, you add the fact that the whole application stack is virtualized and has solid integrations. It’s a simple story: the product allows you to add a lot of value in a really cost-effective way.”
What makes SQLFire such a game-changer?
In this article, we’ll talk more about three game-changing capabilities: server groups, partitioning, and redundancy.
If you haven’t been following our stories on SQLFire, see the end of this article for a list of posts and key capabilities that help explain how transformative SQLFire can be to your data management strategies.
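To give a taste of what those three capabilities look like in practice, here is a sketch of SQLFire DDL that uses all of them in one statement. The table and group names are invented for illustration, and the statement assumes a running SQLFire cluster with members started in the named server group:

```sql
-- Hypothetical schema: partition rows across cluster members by
-- customer_id, keep one redundant copy of each partition for failover,
-- and host the table only on members of the named server group.
CREATE TABLE orders (
  order_id    INT NOT NULL PRIMARY KEY,
  customer_id INT NOT NULL,
  amount      DECIMAL(10,2)
)
PARTITION BY COLUMN (customer_id)
REDUNDANCY 1
SERVER GROUPS (orderProcessingGroup);
```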
In our last post, we 1) covered how geographic data can release value in mobile and machine-based applications, 2) explained how technology is used to overcome barriers to these types of big data scenarios, and 3) detailed the architecture for a data fabric or grid (like vFabric GemFire) that works with geographic data and specialized or alternative indexes. There were also code examples to explain the object model, the spatial index, and data changes.
Now, we will continue the examples, show you how to make the index highly available, and use a function to access the data via the index.
The Scenario for a Highly Available Index
In some cases, a piece of data may be added to a node, or become primary on a node, without a clean method call. This happens in both failover and rebalancing. In the case of failover, a bucket holding a redundant copy on one node may suddenly become the primary copy if the node that held the primary fails.
In the case of rebalancing, an entire bucket can be moved to a newly added node without the benefit of capturing the “put” call on each piece of data.
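To make the problem concrete, here is a small Python illustration (this is not the GemFire API; all names are invented) of why an index maintained only from intercepted “put” calls goes stale. When a whole bucket arrives via rebalancing, or a redundant copy is promoted on failover, no puts fire, so the node must rebuild index entries for that bucket by scanning its data:

```python
class NodeIndex:
    """Toy model of one node's local index over the buckets it hosts."""

    def __init__(self):
        self.buckets = {}     # bucket_id -> {key: value}
        self.primary = set()  # bucket ids this node is primary for
        self.index = {}       # value -> set of keys, for primary buckets only

    def put(self, bucket_id, key, value):
        """Normal write path: data and index are updated together."""
        self.buckets.setdefault(bucket_id, {})[key] = value
        if bucket_id in self.primary:
            self.index.setdefault(value, set()).add(key)

    def receive_bucket(self, bucket_id, data):
        """Rebalance or redundancy copy: a bucket arrives with no put calls."""
        self.buckets[bucket_id] = dict(data)

    def become_primary(self, bucket_id):
        """On promotion, rebuild index entries by scanning the bucket."""
        self.primary.add(bucket_id)
        for key, value in self.buckets.get(bucket_id, {}).items():
            self.index.setdefault(value, set()).add(key)

node = NodeIndex()
node.receive_bucket(7, {"k1": "lisbon", "k2": "porto"})  # redundant copy lands
node.become_primary(7)  # primary failed elsewhere; rebuild the local index
print(node.index["lisbon"])  # {'k1'}
```

Without the scan in `become_primary`, queries against the promoted node’s index would silently miss every entry in bucket 7, which is exactly the hazard the post describes.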
Apache Derby is used for its RDBMS components, JDBC driver, query engine, and network server.
The partitioning technology of GemFire is used to implement horizontal partitioning features of vFabric SQLFire.
vFabric SQLFire enhances specific Apache Derby components, such as the query engine, the SQL interface, data persistence, and data eviction. It also adds components of its own, including SQL commands, stored procedures, system tables, functions, persistence disk stores, listeners, and locators, to operate a highly distributed and fault-tolerant data management cluster.
Today, we are pleased to have a guest blogger from a VMware customer share their story of how RabbitMQ transformed their business by “solving some really interesting problems”. The following comes courtesy of Pablo Molnar of MercadoLibre:
If you haven’t heard of MercadoLibre (NASDAQ: MELI), we are the largest e-commerce ecosystem in Latin America. Our website offers a wide range of services to sellers and buyers throughout the region, including marketplace, payments, advertising, and e-building solutions. Our products are present in over 14 countries, and the company is ranked as the 8th-largest online retailer in the world. We were also on Fortune’s list of the fastest-growing companies in 2012, and we use RabbitMQ to solve some interesting problems.
About Our Technology Stack and How RabbitMQ Helps
In terms of technology infrastructure, MercadoLibre is fully committed to the open source development model. Most of our apps are written in Grails, Groovy, and NodeJS, but we don’t stick to any one language or framework; we entrust tool selection to the software engineers on each team. Almost all applications are hosted by our in-house cloud provisioning system, implemented on OpenStack, with more than 7,000 virtual instances at the moment. We have also successfully launched applications using emerging storage solutions like Redis and MongoDB. With an average of 20 million requests per minute and 4 GB of bandwidth per second, our traffic management layer is crucial, and most of the routing work is done by Nginx proxy servers. Our labs department runs a huge Apache Hadoop cluster for complex analytical queries, and we are experimenting with real-time data processing using Apache Kafka and Storm.
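The kind of routing layer described above can be sketched with a minimal Nginx configuration. This is not MercadoLibre’s actual configuration: the upstream name, addresses, and path are invented, and it assumes the stock `ngx_http_proxy_module`:

```nginx
# Hypothetical routing-layer sketch: Nginx proxying API traffic
# to a pool of app instances, round-robin by default.
upstream items_api {
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
}

server {
    listen 80;

    location /items/ {
        proxy_pass http://items_api;
    }
}
```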