To drive growth, many companies are looking to innovate by ramping up analytical, mobile, social, big data, and cloud initiatives. GE, for example, is one growth-oriented company and just announced heavy investment in the Industrial Internet with GoPivotal. One area of concern for many well-established businesses is what to do with their mainframe-powered applications. Mainframes are expensive to run, but the applications that run on them are typically business-critical, and the business cannot afford downtime or any degradation in service. Until now, the idea of modernizing a mainframe application has often faced major roadblocks.
There are ways to preserve the mainframe while improving application performance, reliability, and even usability. As one of the world’s largest banks has found, big, fast data grids provide an incremental approach to mainframe modernization that reduces risk, lowers operational costs, increases data processing performance, and gives the business innovative analytics capabilities, all built on the same kinds of cloud computing technologies that power internet powerhouses and financial trading markets. Continue reading →
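As a rough sketch of the pattern (not the bank’s actual implementation), the plain-Java example below shows a read-through cache in front of a mainframe: repeat reads are served from memory, and only misses touch the system of record, which is one way a grid can shave MIPS without modifying the core application. The MainframeClient interface here is hypothetical.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical stand-in for the real mainframe access layer
// (in practice this might be a CICS transaction or an MQ request).
interface MainframeClient {
    String fetchAccountRecord(String accountId);
}

// Minimal read-through cache: repeat reads are answered from memory,
// so only cache misses consume mainframe MIPS.
class ReadThroughGrid {
    private final ConcurrentMap<String, String> cache = new ConcurrentHashMap<>();
    private final MainframeClient mainframe;

    ReadThroughGrid(MainframeClient mainframe) {
        this.mainframe = mainframe;
    }

    String getAccountRecord(String accountId) {
        // Calls the mainframe only when the record is not already cached.
        return cache.computeIfAbsent(accountId, mainframe::fetchAccountRecord);
    }
}
```

A production grid would add entry expiry, write-behind so the mainframe remains the system of record, and distribution of the cache across many nodes.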
Just as we saw in the dot-com boom of the ’90s and the Web 2.0 boom of the 2000s, the big data trend will lead companies to make some really bad assumptions and decisions.
Hadoop is certainly one major area of investment for companies looking to solve big data needs. Companies like Facebook that have famously dealt well with large data volumes have publicly touted their successes with Hadoop, so it’s natural that companies approaching big data first look to the successes of others. A really smart MIT computer science grad once told me, “When all you have is a hammer, everything looks like a nail.” This functional fixedness is the cognitive bias to avoid amid the hype surrounding Hadoop. Hadoop is a multi-dimensional solution that can be deployed and used in different ways. Let’s look at some of the most common preconceived notions about Hadoop and big data that companies should know before committing to a Hadoop project: Continue reading →
The cloud, mobile applications and big, fast data are fundamentally changing how applications are built and modernized today. To speed this transformation at the enterprise level, Pivotal, the new venture by VMware and EMC, will host a live streaming event on April 24th at 10:00 am Pacific/1:00 pm Eastern with a special announcement and an unveiling of its plans to build “A New Platform for a New Era”.
The Pivotal platform will unite data, application, and cloud fabrics, helping enterprises develop faster, understand more, and succeed at even greater scale. It is a platform that makes the consumer-grade enterprise a reality.
Paul Maritz, the Pivotal Leadership Team, and special guests will unveil this platform, and make a special announcement during a live streaming event on Wednesday, April 24th at 10:00 am Pacific/1:00 pm Eastern.
The world’s largest banks have historically relied on mainframes to manage all their transactions and the related cash and profit. In mainframe terms, hundreds of thousands of MIPS are used to keep these transactions running, and the cost per MIPS can make mainframes extremely expensive to operate. For example, Sears saw its overall cost at $3,000 to $7,000 per MIPS per year and didn’t see that as a cost-effective way to compete with Amazon. While the price of MIPS has continued to improve, mainframes can also face pure capacity issues.
In today’s world of financial regulations, risk, and compliance, the entire history of all transactions must be captured, stored, and available to report on or search, both immediately and over time. This allows banks to meet audit requirements and support scenarios like a customer service call where an agent searches the transaction history leading up to a customer’s current account balance. The volume of information created across checking, savings, credit card, insurance, and other financial products is tremendous: large enough to bring a mainframe to its knees. Continue reading →
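To make the customer-service scenario concrete, here is a minimal sketch (plain Java, with a deliberately simplified Transaction type) of keeping an account’s history in a sorted in-memory structure, so an agent can pull the transactions leading up to the current balance without issuing a new mainframe query.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.NavigableMap;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentSkipListMap;

// Simplified transaction record; a real system would carry many more fields.
class Transaction {
    final long timestamp;   // epoch millis
    final long amountCents; // signed: debits negative, credits positive
    Transaction(long timestamp, long amountCents) {
        this.timestamp = timestamp;
        this.amountCents = amountCents;
    }
}

// Per-account history keyed by time, so range scans ("everything since the
// last statement") are served from memory instead of the system of record.
class TransactionHistoryStore {
    private final ConcurrentHashMap<String, NavigableMap<Long, Transaction>> byAccount =
            new ConcurrentHashMap<>();

    void record(String accountId, Transaction tx) {
        byAccount.computeIfAbsent(accountId, id -> new ConcurrentSkipListMap<>())
                 .put(tx.timestamp, tx);
    }

    // All transactions for an account in [from, to], oldest first.
    List<Transaction> history(String accountId, long from, long to) {
        NavigableMap<Long, Transaction> txs = byAccount.get(accountId);
        if (txs == null) return new ArrayList<>();
        return new ArrayList<>(txs.subMap(from, true, to, true).values());
    }
}
```

For brevity the key is the timestamp alone; a real store would key on a unique transaction ID and partition each account’s history across the grid.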
Ensuring your systems run smoothly when your data center has a hiccup, or when a real disaster strikes, is critical to many companies’ survival. As we enter the age of the zettabyte, seamless disaster recovery has become even more critical and more difficult: there is more data than we have ever handled before, and much of it is very, very big.
Most disaster recovery (DR) sites sit in standby mode: assets idle, waiting for their turn. The sites either hold data copied through a storage area network (SAN) or use other data replication mechanisms to propagate information from a live site to a standby site. When disaster strikes, clients are redirected to the standby site, where they’re greeted with a polite “please wait” while the site spins up.
At best, the DR site is a hot standby ready on short notice: DNS redirects clients to the DR site, and they’re back in business.
What about all the machines at the DR site? With active/passive replication you can probably run queries against the slave site, but what if you want to make full use of all that expensive gear and go active/active? The challenge lies in the data replication technology. Most current data replication architectures are one-way, and those that aren’t often come with restrictions; for example, you need to avoid opening files with exclusive access. Continue reading →
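The crux of going active/active is deciding what happens when both sites update the same key. One common, if simplistic, rule is last-write-wins; the plain-Java sketch below (hypothetical names, not any particular product’s API) shows that rule applied to updates arriving from the remote site.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A value stamped with its write time, so two sites can compare versions.
// Assumes reasonably synchronized clocks across sites.
class VersionedValue {
    final String value;
    final long writeTimestamp;
    VersionedValue(String value, long writeTimestamp) {
        this.value = value;
        this.writeTimestamp = writeTimestamp;
    }
}

// Minimal active/active merge: each site applies a remote update only when it
// is newer than the value it already holds (last-write-wins).
class ActiveActiveStore {
    private final Map<String, VersionedValue> data = new ConcurrentHashMap<>();

    void localWrite(String key, String value) {
        data.put(key, new VersionedValue(value, System.currentTimeMillis()));
        // ...enqueue the update for replication to the other site...
    }

    void applyRemoteUpdate(String key, VersionedValue remote) {
        // merge() keeps whichever write carries the later timestamp.
        data.merge(key, remote, (local, incoming) ->
                incoming.writeTimestamp > local.writeTimestamp ? incoming : local);
    }
}
```

Last-write-wins silently discards one of two concurrent updates and leans on clock synchronization, which is why real active/active products tend to offer site-aware conflict resolvers or version vectors instead.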
Have you ever heard of a zettabyte? If you work in IT, you’ll be hearing more and more about zettabytes, exabytes, and petabytes, while the terms we think of as big, such as terabytes and gigabytes, fade from our vocabulary. Right now, we are growing our data stores by 50% year over year, and it’s only accelerating.
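For a sense of scale, compounding at 50% per year multiplies volume by 1.5^n after n years: data roughly doubles every 1.7 years, grows about 7.6x in five years, and nearly 58x in ten, and that assumes the rate merely holds steady rather than accelerating.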
While data volumes are skyrocketing, the types of data are also becoming more difficult for traditional databases to handle. Over 80% of it will be unstructured, file-based data that does not fit well with the block-based storage typical of relational databases (RDBMS). So, even if hardware innovations could keep up with the sheer volume, the kinds of data we are now storing break traditional RDBMSs at today’s speeds.
The bottom line: the volume and variety of data being stored are unrealistic for a single, monolithic, structured RDBMS data store. These systems need to be broken apart and re-architected to survive the Information Explosion we are experiencing today.
Day 2 of the O’Reilly Strata Conference is starting here in Santa Clara, California, and the focus is very much on data. In 2005, Tim O’Reilly predicted: “Data is the Next Intel Inside.” At VMware, big, fast data has never been so critical for our customers, and innovations are transforming the cloud applications landscape at an unprecedented rate. This conference comes at the perfect time to reset what everyone knows about big, fast data.
The conference kicked off yesterday with several brief, 20-minute keynotes, all succinct and to the point. Greenplum’s Scott Yara reflected on how the big data market has grown tremendously over the past few years and mentioned several key data scientist practitioners. Scott also mentioned the increased investment in open source Hadoop. Of course, Strata comes on the heels of Monday’s Pivotal HD announcement, in which Greenplum launched its distribution of Hadoop, which can improve performance 50X to 500X compared to existing SQL-like services on top of Hadoop.
Another great keynote presentation came from Yael Garten, a senior data scientist at LinkedIn who leads the mobile data analytics team. She began by polling the audience and noting that many attendees had already used three different devices that morning, and it wasn’t even 9:30 am yet. We’re constantly connected, she noted, and we need to use data to personalize the experience for users no matter what device they’re on. She shared an interesting graph of device and laptop use over the course of the day, from “coffee to couch”, and those usage patterns differ in the US compared to places like India. Continue reading →
Across the board, applications are faster, handle larger data sets, and provide more compelling user experiences than ever before.
Competition is fierce.
As a result, competitive organizations demand that IT leaders speed the rate of new application innovation and development. IT must rise to the challenge or face competitive threats, missed business opportunities, and lost momentum with its user base. In short, IT leaders and providers that do not accelerate will face a backlash from executives.
To meet these challenges, IT is renovating application architectures to thrive in the cloud. This is an organization-wide change that involves redeploying people, redesigning processes, and exploiting new technology. For many, the learning curve is steep. Continue reading →
If you aren’t familiar with Strata, it is a great conference for those building apps in the cloud. It focuses on the future of big data and how to use big data successfully. Speakers include representatives from Google, VMware, Amazon, Microsoft, and many other software companies in the big data space. Topics include: Continue reading →
Yet, the pace of information technology often forces IT executives to do that.
In today’s world, mainframe-to-cloud decisions need solid thinking, or we risk a technology tornado. This article outlines some key lessons learned on the front line of IT decision-making.
As previously discussed, it’s possible to “modernize” legacy mainframe applications by moving them to the cloud. You can get there with little to no modification by using a “lift-and-shift” strategy. Several of my clients have taken this approach to quickly satisfy a “cloud mandate”. The results have been less than desirable:
Without pooled resources, the applications do not scale well.
Timely user provisioning and access from any device are still challenges because the apps do not provide on-demand, ubiquitous access.
In addition, utility-based pricing and costing are performed manually and bear little relation to actual usage (see the chargeback sketch below).
Most importantly, the applications continue to have monolithic, stove-piped architectures, which are difficult and expensive to maintain and enhance.
These “cloud” applications are more like funnel clouds or tornadoes waiting to wreak havoc on IT organizations. Assuming you want to avoid funnel clouds and IT tornadoes, consider applying the following five application architecture and design principles indicative of a true cloud application: Continue reading →
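To make the pricing point from the list above concrete, here is a rough sketch (plain Java, hypothetical names and rates) of what utility-based chargeback means: cost derived from metered consumption rather than a manual, flat allocation.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical usage-based chargeback: the monthly bill reflects what a
// tenant actually consumed, not a manually estimated allocation.
class UsageMeter {
    private static final double RATE_PER_CPU_HOUR = 0.12;  // assumed rate
    private static final double RATE_PER_GB_MONTH = 0.05;  // assumed rate

    private final Map<String, Double> cpuHoursByTenant = new HashMap<>();
    private final Map<String, Double> gbMonthsByTenant = new HashMap<>();

    void recordCpuHours(String tenant, double hours) {
        cpuHoursByTenant.merge(tenant, hours, Double::sum);
    }

    void recordStorage(String tenant, double gbMonths) {
        gbMonthsByTenant.merge(tenant, gbMonths, Double::sum);
    }

    // Bill computed from metered usage, per tenant.
    double monthlyBill(String tenant) {
        return cpuHoursByTenant.getOrDefault(tenant, 0.0) * RATE_PER_CPU_HOUR
             + gbMonthsByTenant.getOrDefault(tenant, 0.0) * RATE_PER_GB_MONTH;
    }
}
```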