
Monthly Archives: April 2013

5 Steps to Mainframe Modernization with a Big Fast Data Fabric

To drive growth, many companies are looking to innovate by ramping up analytical, mobile, social, big data, and cloud initiatives. For example, GE, one growth-oriented company, just announced a heavy investment in the Industrial Internet with GoPivotal. One area of concern for many well-established businesses is what to do with their mainframe-powered applications. Mainframes are expensive to run, but the applications that run on them are typically business-critical, and the business cannot afford downtime or any degradation in service. As a result, the idea of modernizing a mainframe application has, until now, often faced major roadblocks.

There are ways to preserve the mainframe while improving application performance, reliability, and even usability. As one of the world’s largest banks has found, big, fast data grids provide an incremental approach to mainframe modernization that reduces risk, lowers operational costs, increases data processing performance, and delivers innovative analytics capabilities to the business—all based on the same types of cloud computing technologies that power internet powerhouses and financial trading markets. Continue reading

Webinar Recap: Pivotal Opens For Business, GE Gets 10% Stake and How Pivotal Plans to Deliver Next-Generation PaaS

Pivotal is now open for business!

Pivotal, first announced in December, is a new venture started by VMware and EMC that is focused on Big Data and Cloud Application Platforms. Formally launched as a stand-alone entity today, Pivotal is led by former VMware CEO Paul Maritz, who has been working as Chief Strategy Officer at EMC since last August.

In a webinar today, Maritz not only confirmed the new initiative is now a stand-alone business with 1,250 employees from VMware and EMC, but he also surprised listeners with the announcement that General Electric is making a strategic investment of $105 million in Pivotal. GE Vice President and Corporate Officer Bill Ruh joined the webinar and said GE will hold a 10% stake in the new company. GE CEO Jeff Immelt also joined the call and explained that the investment brings the value of the newly launched Pivotal to $1 billion.

GE also announced this morning that its Software Center is standardizing on several of Pivotal’s technologies, making it the first public customer to endorse the new company. Continue reading

15% Discount for Spring Java Training in May

Training is a great way to speed up development, learn how to improve the performance and usability of your applications, and generally build confidence in your skills. This month, SpringSource is offering Java developers a 15% discount code for all VMware training courses, including the Core Spring, Spring Web, Enterprise Integration, and Hibernate classes.

To secure your 15% discount, use the promo code springcustomerpromo during registration (the promo is not available to partners). All qualifying classes for May 2013 are listed below:

Step 1: Core Spring (Americas)

7 Myths on Big Data—Avoiding Bad Hadoop and Cloud Analytics Decisions

Hadoop is an open source legend built by software heroes.

Yet legends can be surrounded by myths, and these myths can lead IT executives to make decisions through rose-colored glasses.

Data and data usage are growing at an alarming rate. Just look at the numbers from analysts—IDC predicts a 53.4% growth rate for storage this year, AT&T claims 20,000% growth in its wireless data traffic over the past 5 years, and if you look at your own communication channels, it’s guaranteed that the internet content, emails, app notifications, social messages, and automated reports you get every day have dramatically increased. This is why companies ranging from McKinsey to Facebook to Walmart are doing something about big data.

Just like we saw in the dot-com boom of the 90s and the web 2.0 boom of the 2000s, the big data trend will also lead companies to make some really bad assumptions and decisions.

Hadoop is certainly one major area of investment for companies looking to address big data needs. Companies like Facebook that have famously dealt well with large data volumes have publicly touted their successes with Hadoop, so it’s natural that companies approaching big data first look to the successes of others. A really smart MIT computer science grad once told me, “when all you have is a hammer, everything looks like a nail.” This functional fixedness is the cognitive bias to avoid amid the hype surrounding Hadoop. Hadoop is a multi-dimensional solution that can be deployed and used in different ways. Let’s look at some of the most common preconceived notions about Hadoop and big data that companies should examine before committing to a Hadoop project: Continue reading

How fast is a Rabbit? Basic RabbitMQ Performance Benchmarks

One of the greatest things about RabbitMQ is the community that surrounds it. With open source at its roots, people come together to share their code, their knowledge, and their stories of how they’ve deployed it in their projects. At a recent meetup near Nice, France, database engineer Adina Mihailescu gave a presentation on choosing messaging systems. Supported by Muriel Salvan’s benchmark comparing ActiveMQ, RabbitMQ, HornetQ, Apollo, QPID, and ZeroMQ, it offered some interesting performance comparisons that we’d like to share with you.

In a single-laptop benchmark, Salvan ran four different scenarios to gain insight into the performance of the default setups of these messaging solutions. Each test had one process dedicated to enqueuing and another dedicated to dequeuing. Message volumes of 200, 20,000, and 200,000 messages were tested at sizes of 32, 1,024, and 32,768 bytes, using both persistent and transient queues and messages. Continue reading
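To give a feel for how such a test is wired up, here is a minimal Python sketch using the pika client (assuming a RabbitMQ broker on localhost and pika 1.x; the queue name and the sequential publish-then-drain flow are simplifications of the benchmark’s two dedicated processes):

import time
import pika

MESSAGES = 20000      # one of the benchmark's message volumes
BODY = b"x" * 1024    # one of the benchmark's message sizes (1,024 bytes)

# Publish MESSAGES transient messages to a transient (non-durable) queue.
conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = conn.channel()
channel.queue_declare(queue="bench", durable=False)

start = time.time()
for _ in range(MESSAGES):
    channel.basic_publish(exchange="", routing_key="bench", body=BODY)
elapsed = time.time() - start
print("enqueued %d msgs in %.2fs (%.0f msg/s)" % (MESSAGES, elapsed, MESSAGES / elapsed))

# Drain the queue and measure the dequeue rate.
start = time.time()
received = 0
while received < MESSAGES:
    method, properties, body = channel.basic_get(queue="bench", auto_ack=True)
    if method is not None:
        received += 1
elapsed = time.time() - start
print("dequeued %d msgs in %.2fs (%.0f msg/s)" % (received, elapsed, received / elapsed))
conn.close()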

10 Ways to Make Hadoop Green in the CFO’s Eyes

Hadoop is used by some pretty amazing companies to make use of big, fast data—particularly unstructured data. Huge brands on the web like AOL, eBay, Facebook, Google, Last.fm, LinkedIn, MercadoLibre, Ning, Quantcast, Spotify, StumbleUpon, and Twitter, as well as brick-and-mortar giants like GE, Walmart, Morgan Stanley, Sears, and Ford, use Hadoop.

Why? In a nutshell, companies like McKinsey believe the use of big data and technologies like Hadoop will allow companies to better compete and grow in the future.

Hadoop is used to support a variety of valuable business capabilities—analysis, search, machine learning, data aggregation, content generation, reporting, integration, and more. All types of industries use Hadoop—media and advertising, A/V processing, credit and fraud, security, geographic exploration, online travel, financial analysis, mobile phones, sensor networks, e-commerce, retail, energy discovery, video games, social media, and more. Continue reading

Upcoming Webinar: Paul Maritz on Pivotal and The New Platform for the New Era

The cloud, mobile applications and big, fast data are fundamentally changing how applications are built and modernized today. To speed this transformation at the enterprise level, Pivotal, the new venture by VMware and EMC, will host a live streaming event on April 24th at 10:00 am Pacific/1:00 pm Eastern with a special announcement and an unveiling of its plans to build “A New Platform for a New Era”.

The Pivotal platform will unite data, application, and cloud fabrics, helping enterprises to develop faster, understand more, and succeed at an even greater scale. It is a platform that makes the consumer-grade enterprise a reality.

Pivotal brings together a prodigious set of technologies and talent from a number of EMC and VMware entities, which include Greenplum, Cloud Foundry, Spring, GemFire and other products from the VMware vFabric Suite, Cetas, and Pivotal Labs.


Paul Maritz, the Pivotal Leadership Team, and special guests will unveil this platform and make a special announcement during the live streaming event on Wednesday, April 24th at 10:00 am Pacific/1:00 pm Eastern.

Sign up for the event at gopivotal.com and follow @gopivotal on Twitter for updates.

Banks Are Breaking Away From Mainframes to Big, Fast Data Grids

The world’s largest banks have historically relied on mainframes to manage all their transactions and the related cash and profit. In mainframe terms, hundreds of thousands of MIPS are used to keep the mainframe running these transactions, and the cost per MIPS can make mainframes extremely expensive to operate. For example, Sears put its overall cost per MIPS at $3,000 to $7,000 per year and didn’t see that as a cost-effective way to compete with Amazon. While the price of MIPS has continued to improve, mainframes can also face pure capacity issues.
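A quick back-of-the-envelope calculation shows why those numbers alarm CFOs (the 200,000-MIPS workload below is a hypothetical figure for illustration, not one from the post):

# Annual cost of a hypothetical 200,000-MIPS mainframe workload
# at the $3,000-$7,000 per-MIPS yearly cost range cited above.
mips = 200_000
low, high = 3_000, 7_000
print("annual cost: $%dM to $%dM" % (mips * low / 1e6, mips * high / 1e6))
# -> annual cost: $600M to $1400M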

In today’s world of financial regulations, risk, and compliance, the entire history of all transactions must be captured, stored, and made available to report on or search, both immediately and over time. This way, banks can meet audit requirements and support scenarios like a customer service call in which an agent searches the transaction history leading up to a customer’s current account balance. The volume of information created across checking, savings, credit card, insurance, and other financial products is tremendous—it’s large enough to bring a mainframe to its knees. Continue reading

How Instagram Feeds Work: Celery and RabbitMQ

Instagram is one of the poster children for social media success. Founded in 2010, the photo-sharing site now supports upwards of 90 million active users. As with every social media site, part of the fun is that photos and comments appear instantly, so your friends can engage while the moment is hot. At PyCon 2013 last month, Instagram engineer Rick Branson shared how Instagram had to transform the way these photos and comments show up in feeds as it scaled from a few thousand tasks a day to hundreds of millions.

Rick started off his talk by demonstrating how traditional database approaches break, calling them the “naïve approach”. In this approach, to display a user’s feed, the application would directly fetch all the photos posted by the people the user follows from a single, monolithic data store, sort them by creation time, and then display only the latest 10:

-- Naïve feed query: scan every photo whose author the user follows,
-- sort the entire result set by recency, and keep only the newest 10.
SELECT * FROM photos
WHERE author_id IN (
    SELECT target_id FROM following
    WHERE source_id = %(user_id)d
)
ORDER BY creation_time DESC
LIMIT 10;

Instead, Instagram chose to follow a modern distributed data strategy that allows it to scale nearly linearly. Continue reading
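The talk’s solution pairs Celery with RabbitMQ to fan work out asynchronously. Here is a minimal sketch of the fan-out-on-write idea (the broker URL, the get_follower_ids helper, and the in-memory FEEDS store are hypothetical stand-ins, not Instagram’s actual code):

from celery import Celery

app = Celery("feeds", broker="amqp://guest@localhost//")

FEEDS = {}  # user_id -> list of photo ids, newest first (stand-in store)

def get_follower_ids(author_id):
    # Stand-in for a lookup against the real follower graph.
    return list(FEEDS.keys())

@app.task
def fan_out_photo(author_id, photo_id):
    # Write the new photo id into each follower's precomputed feed,
    # so reading a feed becomes a cheap per-user lookup instead of a
    # global sort across a monolithic photos table.
    for follower_id in get_follower_ids(author_id):
        FEEDS.setdefault(follower_id, []).insert(0, photo_id)

# Posting a photo enqueues the fan-out through RabbitMQ:
#   fan_out_photo.delay(author_id=42, photo_id=12345)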

Understanding Speed and Scale Strategies for Big Data Grids and In-Memory Colocation

The new breed of databases is opening up significant career opportunities for data modelers, admins, architects, and data scientists. In parallel, it’s transforming how businesses use data. It’s also making the traditional RDBMS look like a T-Rex.

Our web-centric world of social media and the internet of things is acting as a sea change, breaking traditional data design and management approaches. Data is coming in at ever-increasing speeds, and 80% of it cannot be easily organized into the neat little rows and columns associated with the traditional RDBMS.

Additionally, executives are realizing the power of bigger and faster data for responding to customer demands in real time. They want analysis, insights, and business answers in real time. They want the analysis done on data that is integrated across systems. And they don’t want to wait a day for it to be loaded into a data warehouse or data mart. As a result, developers are changing how they build applications. They are using different tools, different design patterns, and even different forms of SQL to parse data. Continue reading
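To make the colocation idea concrete, here is an illustrative Python sketch of routing related entries to the same partition of an in-memory grid so per-customer computations stay node-local (the three-node grid and hash routing are simplifications for illustration, not a real data grid API):

NODES = 3
grid = [dict() for _ in range(NODES)]  # one in-memory store per "node"

def node_for(customer_id):
    # Every entry routed by the same customer_id hashes to the same node.
    return hash(customer_id) % NODES

def put(customer_id, key, value):
    grid[node_for(customer_id)][(customer_id, key)] = value

def customer_snapshot(customer_id):
    # Account and transaction entries are colocated, so this scan touches
    # a single node's memory with no cross-node network shuffle.
    store = grid[node_for(customer_id)]
    return {k: v for k, v in store.items() if k[0] == customer_id}

put("cust-42", "account", {"balance": 1200})
put("cust-42", "txn-1", {"amount": -35})
print(customer_snapshot("cust-42"))

This is, in spirit, what partitioned data grids like GemFire provide when related regions are colocated.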