Apache Derby supplies the RDBMS components: the JDBC driver, the query engine, and the network server.
GemFire's partitioning technology implements the horizontal partitioning features of vFabric SQLFire.
vFabric SQLFire enhances the Apache Derby components it reuses, such as the query engine, the SQL interface, data persistence, and data eviction, and adds components of its own, including SQL commands, stored procedures, system tables, functions, persistent disk stores, listeners, and locators, so that it can operate a highly distributed, fault-tolerant data management cluster.
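To make that concrete, here is a minimal sketch of how those extensions show up to an application: a plain JDBC connection plus DDL that partitions a table, keeps a redundant copy, and persists it to disk. The driver class name, URL format, default port, and clause spellings reflect our reading of the SQLFire documentation; the host, table, and column names are purely illustrative, not a definitive recipe.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SqlFireDdlSketch {
    public static void main(String[] args) throws Exception {
        // vFabric SQLFire thin-client JDBC driver; class name and default client
        // port (1527) are assumptions based on the SQLFire documentation.
        Class.forName("com.vmware.sqlfire.jdbc.ClientDriver");

        try (Connection conn = DriverManager.getConnection("jdbc:sqlfire://localhost:1527/");
             Statement stmt = conn.createStatement()) {

            // Partition rows across the cluster by customer_id, keep one redundant
            // copy of each bucket, and persist the data to the members' disk stores.
            // (Eviction/overflow would be configured with a similar EVICTION BY clause.)
            stmt.execute(
                "CREATE TABLE orders (" +
                "  order_id    INT NOT NULL PRIMARY KEY," +
                "  customer_id INT NOT NULL," +
                "  amount      DECIMAL(10,2)" +
                ") PARTITION BY COLUMN (customer_id)" +
                "  REDUNDANCY 1" +
                "  PERSISTENT");

            // Ordinary SQL keeps working against the distributed table.
            stmt.execute("INSERT INTO orders VALUES (1, 42, 19.99)");
        }
    }
}
```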
Application and operations teams sometimes reach a point where they must upgrade the database. Whether it’s driven by data growth, insufficient throughput, too much downtime, the need to share data globally, new ETLs, or something else, it’s never a small project. Since these projects are expensive, any recommendation requires a solid justification. This article a) characterizes three signs that a traditional database is hitting a wall, b) explains how vFabric SQLFire provides an advantage over a traditional database in each case, and c) should help you make the case for moving toward an in-memory, distributed, SQL-based data grid.
For those of us tasked with upgrading (or architecting) the data layer, we all go through similar steps. We build a project plan, make projections and sizing estimates, perform architecture and code reviews, create configuration checklists, provide hardware budgets and plans, talk to vendors about options, and more. Then, we work to plan the deployment with the least downtime, procure hardware and software, test different data load times, evaluate project risks, develop backup plans, prepare communications to users about downtime, etc. You know the drill. These projects can take months and consume a fair amount of internal resources or consulting dollars. If you are starting or working on one of these projects with a traditional database architecture in mind, are you weighing these three signs as you evaluate your options?
Memory is faster than disk, and people recognize this when they need to support high-performance on-line applications. Recently, many traditional database providers have latched onto this and started “washing” their offerings with in-memory variations. At the same time, new companies are jumping into the In-Memory Data Grid (IMDG) space with unproven offerings. However, enterprise data is not something many are willing to experiment with.
VMware virtually pioneered the IMDG, even before it was a category. Its GemFire team has been at this for a while now with a proven, production-grade offering, vFabric GemFire. The latest release, vFabric GemFire 7.0, brings two key enhancements for developers and IT pros alike (a brief client-side sketch follows the list):
Improving developer productivity
Increasing operational efficiencies
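On the developer side, the core programming model remains the GemFire client cache and region APIs. The sketch below is a minimal example of that model, assuming a locator running on the default port and a server-side region named "sessions"; it is meant only to show the shape of the API, not a 7.0-specific feature.

```java
import com.gemstone.gemfire.cache.Region;
import com.gemstone.gemfire.cache.client.ClientCache;
import com.gemstone.gemfire.cache.client.ClientCacheFactory;
import com.gemstone.gemfire.cache.client.ClientRegionShortcut;

public class GemFireClientSketch {
    public static void main(String[] args) {
        // Connect to the grid through a locator; host and port are placeholders.
        ClientCache cache = new ClientCacheFactory()
                .addPoolLocator("localhost", 10334)
                .create();

        // A PROXY region keeps no local state; every get/put goes to the servers.
        Region<String, String> sessions = cache
                .<String, String>createClientRegionFactory(ClientRegionShortcut.PROXY)
                .create("sessions");

        sessions.put("user-42", "logged-in");          // written to the distributed region
        System.out.println(sessions.get("user-42"));   // read back from the grid

        cache.close();
    }
}
```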
These improvements come in addition to the proven data consistency and reliability that many have come to expect from vFabric GemFire in their scale-out data architectures. Once more, VMware has shown both the technical know-how and the experience in enterprise-grade, in-memory data needed to support cloud scale.
Despite what people tell you, managing on-line applications at cloud scale is hard. One of the main challenges is that, as an application gets more and more popular, the underlying database often becomes the bottleneck.
When demand spikes, organizations are comfortable scaling their Web and App Server layers. However, as they increase the number of application instances to accommodate the growing demand, their data layer is unable to keep up.
We all know that a solution’s overall performance is only as good as its slowest component. Increasingly, the slowest component in today’s on-line applications is the database.
A Customer Example
Recently, a large retail customer spoke with us about their experience handling demand spikes during the holidays. Their virtualized infrastructure was more than capable of scaling horizontally to address the growing demand. However, their underlying traditional database could not handle the large load increases. The database started to experience deadlocks, connection timeouts, and various other problems.