
5 Steps to Mainframe Modernization with a Big Fast Data Fabric

Many companies are looking to fuel growth by ramping up analytical, mobile, social, big data, and cloud initiatives. GE, for example, is one growth-oriented company and just announced a heavy investment in the Industrial Internet with GoPivotal. One area of concern for many well-established businesses is what to do with their mainframe-powered applications. Mainframes are expensive to run, but the applications that run on them are typically very important, and the business cannot afford downtime or any degradation in service. So, until now, the idea of modernizing a mainframe application has often faced major roadblocks.

There are ways to preserve the mainframe while improving application performance, reliability, and even usability. As one of the world’s largest banks has seen, big, fast data grids can provide an incremental approach to mainframe modernization that reduces risk, lowers operational costs, increases data processing performance, and provides innovative analytics capabilities for the business, all based on the same types of cloud computing technologies that power internet powerhouses and financial trading markets. Continue reading

Banks Are Breaking Away From Mainframes to Big, Fast Data Grids

The world’s largest banks have historically relied on mainframes to manage all their transactions and the related cash and profit. In mainframe terms, hundreds of thousands of MIPS are used to keep the mainframe running these transactions, and the cost per MIP can make mainframes extremely expensive to operate. For example, Sears was seeing the overall cost per MIP at $3000-$7000 per year and didn’t see that as a cost-effective way to compete with Amazon. While the price of MIPS has continued to improve, mainframes can also face pure capacity issues.

In today’s world of financial regulations, risk, and compliance, the entire history of all transactions must be captured, stored, and available to report on or search, both immediately and over time. This way, banks can meet audit requirements and handle scenarios like a customer service call in which an agent searches the transaction history leading up to a customer’s current account balance. The volume of information created across checking, savings, credit card, insurance, and other financial products is tremendous, large enough to bring a mainframe to its knees. Continue reading

How-To: Build a Geographic Database with PostGIS and vPostgres

Mobile location-based services are on the rise. After several false starts back in the mid 2000s, every mobile user now depends on their phone to tell them where they are and where their friends are, and to engage with social media like Facebook and Foursquare. A report by Juniper Research suggests this market is expected to exceed $12 billion next year, where it hardly existed at all a few years ago.

This is in part because mobile apps are now ubiquitous. To stay relevant, businesses need to interact socially and maintain a web store that remains accessible to their wandering customers.

Building a geographically aware application from scratch sounds daunting, with a lot of initial data setup, but it doesn’t have to be. Products like vFabric Postgres (vPostgres) can be used along with the PostGIS extension to perform geographic-style queries. Then, public data and an open source visualizer can be used to turn the query results into something meaningful for your application or end user.
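To make the idea concrete, here is a minimal sketch of such a query issued over JDBC. The table name (places), its columns (name, geom), and the connection details are hypothetical, and the sketch assumes PostGIS has already been enabled in the database with CREATE EXTENSION postgis:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class NearbyPlaces {
    public static void main(String[] args) throws Exception {
        // Hypothetical vPostgres connection details; adjust host, database, and credentials.
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://vpostgres.example.com:5432/geodb", "postgres", "secret");

        // Find named places within 5 km of a point (longitude, latitude).
        // Casting to geography makes ST_DWithin and ST_Distance work in meters.
        String sql = "SELECT name, ST_Distance(geom::geography, "
                   + "  ST_SetSRID(ST_MakePoint(?, ?), 4326)::geography) AS meters "
                   + "FROM places "
                   + "WHERE ST_DWithin(geom::geography, "
                   + "  ST_SetSRID(ST_MakePoint(?, ?), 4326)::geography, 5000) "
                   + "ORDER BY meters";

        double lon = -122.14, lat = 37.44; // example coordinates
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setDouble(1, lon); ps.setDouble(2, lat);
            ps.setDouble(3, lon); ps.setDouble(4, lat);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.printf("%s is %.0f m away%n",
                            rs.getString("name"), rs.getDouble("meters"));
                }
            }
        }
        conn.close();
    }
}

The same query works from psql or any other client; JDBC is shown here only because most vFabric applications are Java-based.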

Continue reading

Disaster Recovery Jackpot: Active/Active WAN-based Replication in GemFire vs Oracle and MySQL

Ensuring your systems run smoothly even when your data center has a hiccup, or when a real disaster strikes, is critical for many companies to survive when hardships befall them. As we enter the age of the zettabyte, seamless disaster recovery has become even more critical and more difficult. There is more data than we have ever handled before, and most of it is very, very big.

Most disaster recovery (DR) sites are in standby mode—assets sitting idle, waiting for their turn. The sites are either holding data copied through a storage area network (SAN) or using other data replication mechanisms to propagate information from a live site to a standby site.  When disaster strikes, clients are redirected to the standby site where they’re greeted with a polite “please wait” while the site spins up.

At best, the DR site is a hot standby that is ready to go on short notice.  DNS redirects clients to the DR site and they’re good to go.

What about all the machines at the DR site? With active/passive replication you can probably run queries on the slave site, but what if you want to make full use of all that expensive gear and go active/active? The challenge is in the data replication technology. Most current data replication architectures are one-way. If a technology is not one-way, it often comes with restrictions; for example, you need to avoid opening files with exclusive access.
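As a rough sketch of what active/active, WAN-based replication looks like with the GemFire 7 gateway Java API, here is how one site might be wired up. The region name, distributed system IDs, and locator addresses are hypothetical, and the other site would run a mirror-image configuration pointing back at this one:

import com.gemstone.gemfire.cache.Cache;
import com.gemstone.gemfire.cache.CacheFactory;
import com.gemstone.gemfire.cache.Region;
import com.gemstone.gemfire.cache.RegionShortcut;

public class ActiveActiveSiteA {
    public static void main(String[] args) throws Exception {
        // Site A is distributed-system-id 1; site B (id 2) is reachable through its locator.
        Cache cache = new CacheFactory()
                .set("distributed-system-id", "1")
                .set("locators", "siteA-locator[10334]")
                .set("remote-locators", "siteB-locator[10334]")
                .create();

        // Accept WAN events arriving from the remote site.
        cache.createGatewayReceiverFactory().create();

        // Send local updates to the remote site (distributed-system-id 2).
        cache.createGatewaySenderFactory()
                .setParallel(true)
                .create("toSiteB", 2);

        // Attach the sender to a region so its updates flow over the WAN.
        Region<String, String> accounts = cache.<String, String>createRegionFactory(RegionShortcut.PARTITION)
                .addGatewaySenderId("toSiteB")
                .create("accounts");

        // Writes at either site are applied locally and propagated to the other,
        // so both data centers can serve live traffic at the same time.
        accounts.put("acct-1001", "balance=250.00");
    }
}

Continue reading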

How to Perform Security Updates on vFabric Postgres

The PostgreSQL community announced last week that an important security update will be released on April 4, 2013. This release will include a fix for a high-exposure security vulnerability, and all users are strongly urged to apply the update as soon as it is available. Knowing how disruptive urgent security updates can be to IT and developers, the PostgreSQL community issued advance warning in the hope that it would ease the impact on day-to-day operations while helping as many companies as possible adopt the update quickly.

As such, we would like to take this opportunity to remind everyone how important these security updates are to your business, and to explain how to apply them most efficiently for vFabric Postgres.

The Cost of Missing Security Updates

Maintenance and security updates are essential to extending application longevity and to keeping the confidence of customers who use services based on the application.

When big data disasters hit, the impact quickly moves beyond the financial and affects reputation and trust. Databases are a particular area of concern. A recent article titled “Making Database Security Your No. 1 2013 Resolution” cited a Verizon study showing that only 10 percent of total security spend goes into database protection, while 92 percent of stolen data comes out of databases.

According to the seventh annual U.S. Cost of a Data Breach report from the Ponemon Institute, the average data breach cost $5.5M in 2011, or $194 per record. While $5.5M may not sound like a lot to some companies, losing one million records at $194 per record adds up to roughly $194 million. Continue reading

Tips and Tricks for Internal Use of Your vFabric Postgres VM

Ever since VMware released its new version of the popular PostgreSQL database, we have been working hard to publish a series of informative articles on what’s new in the vFabric Postgres (vPostgres) 9.2 release. There are a number of cool things, like major performance and scale improvements, elastic memory for vPostgres, contributions to open source, master-slave clusters, and new GUI capabilities. In this post, we are going to dig into some tips and tricks for internal use of a vFabric Postgres virtual machine, talk about scripts, and explain the management of the network interface.

First, to see what we are talking about, it is helpful to log in to your vFabric Postgres VM. vPostgres supports SSH by default (see last week’s post on securing your vPostgres deployment for more security tips), so it is easy to connect to the server and set things up inside for those who really want to personalize things at a very low level (like pg_hba.conf for connection restrictions). After the first initialization, you can connect either as user postgres or as root with the same password you set at first boot.

As with every other system, connecting as user root is not recommended for security reasons. By connecting as user postgres instead, you will find the following. Continue reading

Securing your vFabric PostgreSQL VM

Especially in today’s world, security is top of mind for app developers, DBAs, and CIOs alike. One of the benefits that VMware strives to include in every product is a set of reasonable security defaults. This generally means that users can expect a reasonably secure middleware application out of the box when they deploy a VMware app.

vFabric Postgres (vPostgres) is no different. There are not that many security settings in vFabric Postgres. However, there are a few options you can use to further harden your deployment, and of course the virtual machine you are deploying it on, particularly if it is exposed to an external environment.

SSH Connection Restrictions

vFabric Postgres has two default users, postgres and root, and both can connect to the virtual machine over SSH. If you want to restrict access to the virtual machine to certain users or a group of users, here is some advice to follow:

1. To restrict SSH connections to members of the group vfabric (the user postgres is a member of this group by default), add this line in /etc/ssh/sshd_config:

AllowGroups vfabric

Continue reading

Exploring the New Database Server GUI Features in vFabric Postgres 9.2

The vFabric Postgres 9.2 release seriously upped the user interface (UI) experience. In our post last week, we talked about the built-in, VM-based GUI that helps manage the system, network, and updates. This week, we’d like to take you through the changes your DBAs and developers will see when using the updated database server: listing the databases on the server, viewing database global data, and drilling into processes and locks. All of this comes out of the box with the vFabric Postgres appliance.

Connecting to the Database Management Interface

Once your vFabric Postgres server is up and running, you can connect to its database management interface using a URL in a web browser. The connection is made over https on port 8443 at the IP address or domain name of the server:
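For example, with a hypothetical server address (the exact host, and any trailing path, will depend on your deployment), the URL looks like:

https://vpostgres.example.com:8443/

Continue reading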

10 Lessons from Spring Applied to Java Virtualization with vFabric

The Spring Framework became the de facto standard for developing enterprise Java applications, and its radical simplicity was fundamental to its success. Why the “radical” simplicity? Because at the time, it was hard to imagine how creating such applications could be made simple.

By tackling issues such as portability, understanding the importance of cross-cutting concerns, and making it trivial to develop automated tests, Spring allowed developers to focus on what matters: what makes their application unique.

As I was pulling together my presentation for SpringOne2GX 2012, I reflected on the parallels between Spring’s success and the direction we were going with EM4J. Why did Spring succeed? Why did simplification win? Where are we replicating these patterns within VMware, vFabric, and Java?

In short, complexity is expensive, and simplification has many economic benefits. By giving people better, simpler, and easier to use tools to help build, run, and manage applications, we create economic advantages.

In a nutshell, there are some core reasons why Spring succeeded, “Spring values” if you will: reducing complexity, increasing productivity, provisioning flexibility, tooling and monitoring, extensibility, automation, flexible integration, and ease of testing. Continue reading

Putting the ‘Single’ Back in Single Sign-On (SSO)

Modern companies and IT organizations have many applications, both internal and customer facing. With so many applications, your users are faced with the challenge of not only managing multiple sets of credentials, but also logging in to each and every individual application separately. This creates a bad experience for your users.

To improve the user experience, IT created a concept called Single Sign-On (SSO). The idea was that users could sign on once, and the SSO software would automatically authenticate them for all of their applications. This not only helped the user experience, but also helped IT by cutting down on the number of ‘forgot password’ tickets opened, and it made de-authenticating users easy when they left the organization. The idea is great, but in practice it frequently stopped short at authentication. Continue reading