
Tag Archives: grid

5 Steps to Mainframe Modernization with a Big Fast Data Fabric

To drive growth, many companies are looking to innovate by ramping up analytical, mobile, social, big data, and cloud initiatives. For example, GE, one growth-oriented company, just announced heavy investment in the Industrial Internet with GoPivotal. One area of concern for many well-established businesses is what to do with their mainframe-powered applications. Mainframes are expensive to run, but the applications that run on them are typically very important, and the business cannot afford to risk downtime or any degradation in service. So, until now, the idea of modernizing a mainframe application has often faced major roadblocks.

There are ways to preserve the mainframe and improve application performance, reliability, and even usability. As one of the world’s largest banks has seen, big, fast data grids can provide an incremental approach to mainframe modernization that reduces risk, lowers operational costs, increases data processing performance, and provides innovative analytics capabilities for the business—all based on the same types of cloud computing technologies that power internet powerhouses and financial trading markets. Continue reading
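To make the incremental, grid-fronted approach concrete, here is a minimal cache-aside sketch in Java. Treat it as an illustration under assumptions, not the bank’s actual design: the ConcurrentHashMap stands in for a grid region (in GemFire this would be a Region, often configured with a CacheLoader for read-through), and Account and MainframeDao are hypothetical placeholders.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Cache-aside sketch for offloading mainframe reads to a data grid.
    // The map stands in for a grid region; Account and MainframeDao are
    // hypothetical placeholders for the record type and the existing
    // mainframe access layer.
    public class AccountCache {

        private final Map<String, Account> gridRegion = new ConcurrentHashMap<String, Account>();
        private final MainframeDao mainframeDao;

        public AccountCache(MainframeDao mainframeDao) {
            this.mainframeDao = mainframeDao;
        }

        public Account getAccount(String accountId) {
            // Serve reads from the grid; only a cache miss touches the
            // mainframe, and the result is cached for subsequent reads.
            Account account = gridRegion.get(accountId);
            if (account == null) {
                account = mainframeDao.fetchAccount(accountId);
                gridRegion.put(accountId, account);
            }
            return account;
        }

        public interface MainframeDao {
            Account fetchAccount(String accountId);
        }

        public static class Account {
            public final String id;
            public final double balance;

            public Account(String id, double balance) {
                this.id = id;
                this.balance = balance;
            }
        }
    }

Because the grid absorbs the read traffic, load on the mainframe drops while the existing application stays in place, which is what makes the approach incremental rather than a risky rewrite.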

7 Myths on Big Data—Avoiding Bad Hadoop and Cloud Analytics Decisions

Hadoop is an open source legend built by software heroes.

Yet, legends can sometimes be surrounded by myths—myths that can lead IT executives to look at the technology through rose-colored glasses.

Data and data usage are growing at an alarming rate.  Just look at the numbers from analysts—IDC predicts a 53.4% growth rate for storage this year, AT&T claims 20,000% growth in its wireless data traffic over the past 5 years, and if you look at your own communication channels, it’s guaranteed that the volume of internet content, emails, app notifications, social messages, and automated reports you receive every day has dramatically increased.  This is why companies ranging from McKinsey to Facebook to Walmart are doing something about big data.

Just like we saw in the dot-com boom of the 90s and the web 2.0 boom of the 2000s, the big data trend will also lead companies to make some really bad assumptions and decisions.

Hadoop is certainly one major area of investment for companies looking to address big data needs. Companies like Facebook that have famously dealt well with large data volumes have publicly touted their successes with Hadoop, so it’s natural that companies approaching big data first look to the successes of others.  A really smart MIT computer science grad once told me, “when all you have is a hammer, everything looks like a nail.” This functional fixedness is the cognitive bias to avoid amid the hype surrounding Hadoop. Hadoop is a multi-dimensional solution that can be deployed and used in different ways. Let’s look at some of the most common preconceived notions about Hadoop and big data that companies should know before committing to a Hadoop project: Continue reading

Understanding Speed and Scale Strategies for Big Data Grids and In-Memory Colocation

The new breed of database is opening up significant career opportunities for data modelers, admins, architects, and data scientists. In parallel, it’s transforming how businesses use data. It’s also making the traditional RDBMS look like a T-Rex.

Our web-centric world of social media and the internet of things is acting as a sea change that breaks traditional data design and management approaches. Data is coming in at increasing speeds, and 80% of it cannot be easily organized into the neat little rows and columns associated with the traditional RDBMS.

Additionally, executives are realizing the power of bigger and faster data—responding to customer demands in real time. They want analysis, insights, and business answers in real time, and they want that analysis done on data that is integrated across systems. And they don’t want to wait a day to load it into a data warehouse or data mart. As a result, developers are changing how they build applications.  They are using different tools, different design patterns, and even different forms of SQL to parse data. Continue reading
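As one hedged illustration of those “different forms of SQL,” here is what a query looks like in OQL, the SQL-like object query language used by in-memory grids such as GemFire. The locator address and the /trades region with its price field are assumptions made for the sketch, not details from the post.

    import com.gemstone.gemfire.cache.client.ClientCache;
    import com.gemstone.gemfire.cache.client.ClientCacheFactory;
    import com.gemstone.gemfire.cache.query.Query;
    import com.gemstone.gemfire.cache.query.QueryService;
    import com.gemstone.gemfire.cache.query.SelectResults;

    // Sketch: querying an in-memory data grid with OQL. Assumes a
    // GemFire locator on localhost:10334 and a region named /trades
    // whose entries carry a "price" field.
    public class OqlExample {
        public static void main(String[] args) throws Exception {
            ClientCache cache = new ClientCacheFactory()
                    .addPoolLocator("localhost", 10334)
                    .create();

            // OQL reads like SQL but navigates object fields instead of
            // table columns, so no relational schema mapping is required.
            QueryService queryService = cache.getQueryService();
            Query query = queryService.newQuery(
                    "SELECT DISTINCT * FROM /trades t WHERE t.price > 100.0");

            SelectResults<?> results = (SelectResults<?>) query.execute();
            System.out.println("Matching trades: " + results.size());

            cache.close();
        }
    }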

Disaster Recovery Jackpot: Active/Active WAN-based Replication in GemFire vs Oracle and MySQL

Ensuring your systems run smoothly when your data center has a hiccup, or when a real disaster strikes, is critical for many companies’ survival when hardships befall them.  As we enter the age of the zettabyte, seamless disaster recovery has become even more critical and more difficult. There is more data than we have ever handled before, and most of it is very, very big.

Most disaster recovery (DR) sites are in standby mode—assets sitting idle, waiting for their turn. The sites are either holding data copied through a storage area network (SAN) or using other data replication mechanisms to propagate information from a live site to a standby site.  When disaster strikes, clients are redirected to the standby site where they’re greeted with a polite “please wait” while the site spins up.

At best, the DR site is a hot standby that is ready to go on short notice.  DNS redirects clients to the DR site and they’re good to go.

What about all the machines at the DR site?  With active/passive replication you can probably do queries on the slave site, but what if you want to make full use of all of that expensive gear and go active/active?  The challenge is in the data replication technology. Most current data replication architectures are one-way. If it’s not one-way, it can come with restrictions—for example, you need to avoid opening files with exclusive access. Continue reading
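To show what two-way replication looks like in practice, here is a rough sketch of one side of an active/active pair using GemFire’s WAN gateway API. Treat it as a sketch under assumptions: the site IDs, host names, and sender and region names are illustrative, and the second site would run a mirror-image configuration with the IDs swapped.

    import com.gemstone.gemfire.cache.Cache;
    import com.gemstone.gemfire.cache.CacheFactory;
    import com.gemstone.gemfire.cache.Region;
    import com.gemstone.gemfire.cache.RegionFactory;
    import com.gemstone.gemfire.cache.RegionShortcut;
    import com.gemstone.gemfire.cache.wan.GatewayReceiver;
    import com.gemstone.gemfire.cache.wan.GatewaySender;
    import com.gemstone.gemfire.cache.wan.GatewaySenderFactory;

    // Sketch: site 1 of an active/active WAN pair. The site both sends
    // local updates to site 2 and receives updates from it, so both
    // sites can accept writes. All names and IDs are illustrative.
    public class ActiveActiveSite1 {
        public static void main(String[] args) {
            Cache cache = new CacheFactory()
                    .set("distributed-system-id", "1")           // this site's WAN id
                    .set("remote-locators", "site2-host[10334]") // locators at site 2
                    .create();

            // Gateway sender: pushes local updates to distributed system 2.
            GatewaySenderFactory senderFactory = cache.createGatewaySenderFactory();
            senderFactory.setParallel(true);
            GatewaySender toSite2 = senderFactory.create("to-site-2", 2);

            // Gateway receiver: accepts updates arriving from site 2.
            GatewayReceiver fromSite2 = cache.createGatewayReceiverFactory().create();

            // Attach the sender to the region being replicated.
            RegionFactory<String, Object> regionFactory =
                    cache.createRegionFactory(RegionShortcut.PARTITION);
            regionFactory.addGatewaySenderId("to-site-2");
            Region<String, Object> trades = regionFactory.create("trades");
        }
    }

Because each site applies the other’s updates as they arrive, neither set of machines sits idle, and concurrent writes to the same key are handled by GemFire’s WAN conflict detection (or a custom conflict resolver).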