

3 Signs Your Relational Database Must Go

Application and operations teams sometimes reach a point where they must upgrade the database. Whether it’s due to data growth, lack of throughput, too much downtime, the need to share data globally, adding ETLs, or otherwise, it’s never a small project. Since these projects are expensive, any recommendation requires a solid justification.  This article a) characterizes 3 signs where traditional databases hit a wall, b) explains how vFabric SQLFire provides an advantage over traditional databases in each case, and c) should help you make a case for moving towards an in-memory, distributed data grid based on SQL.

For those of us tasked with upgrading (or architecting) the data layer, we all go through similar steps. We build a project plan, make projections and sizing estimates, perform architecture and code reviews, create configuration checklists, provide hardware budgets and plans, talk to vendors about options, and more. Then, we work to plan the deployment with the least downtime, procure hardware and software, test different data load times, evaluate project risks, develop back-up plans, prepare communications to users about downtime, etc. You know the drill. These projects can take months and consume a fair amount of internal resources or consulting dollars. If you are starting or working on one of these projects with a traditional database architecture in mind, are you weighing these 3 signs as you evaluate your options?

Sign #1: Traditional RDBMS Scalability

One of the biggest challenges facing DBAs of distributed databases is maintaining the speed of query execution as the data grows and is partitioned across the network. Often, people create lengthy start-up scripts to make queries run faster. These start-up scripts “pre-cook” the data and load it into memory buffers. This approach makes it difficult to run ad-hoc queries or switch between reporting and quick transactional workloads. As well, many profiles of data creation and consumption make it difficult to tune bufferpools and manage memory so data stays ‘hot’ and doesn’t slip into ‘cold’ disk storage, which affects overall performance. In addition, as the data is spread across a cluster, new problems arise. For example, larger joins end up throwing data back and forth between members, leading to network bottlenecks.

SQLFire Scalability

The SQLFire distributed cluster allows for horizontal, near-linear scalability as your clients and data grow in size. Adding new members will allow increased query throughput for tables partitioned across the cluster, as SQLFire distributes data load. A 10-member cluster can grow to 1000 or more—we have seen our customers do it.

Every table is in-memory and can be partitioned for scale and/or replicated for high availability. There is no need for start-up scripts and buffer tuning. Adding a new member to the cluster is as simple as starting a SQLFire instance on a server and telling it where to find the locator member (or using multicasting, just telling it which multicast port to use!). The new member becomes a part of the cluster and instantly can handle queries from clients due to SQLFire’s automatic repartitioning of data. There is no need to configure the entire cluster at startup time, no need to bring down and restart the entire cluster just to add a new member, and no need to manually load data into the new member or alter the database schema. Each member is a single hop away from all other members, ensuring that no member becomes a chokepoint for the rest of the cluster.

Tables created in SQLFire are not partitioned across specific members; rather, they are assigned to ‘server groups’. When a new member joins the cluster, it identifies itself as belonging to one or more server groups. This enables fine-grained sharing of data between members and allows on-the-fly addition of more storage space to the cluster. Tables can also be replicated across members for added redundancy. Provisioning time therefore becomes negligible, and a newly provisioned SQLFire member can immediately serve the data within the cluster. De-provisioning is as simple as stopping the member while the rest of the cluster continues to run. Client-side queries are not disrupted as the SQLFire topography changes. This means downtime is largely a thing of the past.
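To make the server-group model concrete, here is a minimal sketch of creating one partitioned and one replicated table over JDBC. The connection URL, table and column names, and the trade_data server group are purely illustrative; the PARTITION BY COLUMN, REDUNDANCY, REPLICATE, and SERVER GROUPS clauses reflect SQLFire's DDL extensions, so check the SQLFire reference for the exact syntax in your release.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateTables {
    public static void main(String[] args) throws Exception {
        // Thin-client JDBC URL; requires the SQLFire client JAR on the classpath.
        try (Connection conn = DriverManager.getConnection("jdbc:sqlfire://locatorhost:1527/");
             Statement stmt = conn.createStatement()) {

            // Partitioned table: rows are hashed on SYMBOL across the members
            // that belong to the 'trade_data' server group, with one redundant
            // copy of each partition for high availability.
            stmt.execute(
                "CREATE TABLE trades (" +
                "  trade_id INT PRIMARY KEY, symbol VARCHAR(10), qty INT)" +
                " PARTITION BY COLUMN (symbol)" +
                " REDUNDANCY 1" +
                " SERVER GROUPS (trade_data)");

            // Small reference table: fully replicated to every member of the
            // same server group so joins with it stay local.
            stmt.execute(
                "CREATE TABLE instruments (" +
                "  symbol VARCHAR(10) PRIMARY KEY, descr VARCHAR(100))" +
                " REPLICATE SERVER GROUPS (trade_data)");
        }
    }
}
```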

Finally, we don’t run into the same bottlenecks as we see with traditional clusters or replication. Asynchronous WAN replication allows for global scalability, because SQLFire clusters asynchronously replicate and mirror data across the network. Our customers achieve near real-time visibility of data no matter what portion of the cluster they connect to!

Sign #2: Traditional RDBMS Speed

Distributed databases encounter problems when the network becomes unreliable and data nodes are unreachable. Most databases have to wait until all data nodes have responded to a query, or else complex and difficult-to-maintain failover and hot standby replicas of the data must be added to enable a coherent view of the data from a customer’s perspective. As the cluster starts to encounter multiple data node failures, the entire system can be brought to a standstill as queries are stalled in mid-transaction.

SQLFire Speed

In-memory data storage means data is instantly ‘hot’, with no need to run ‘priming queries’ to buffer the data. Tables can be configured to be completely in-memory or a mix of in-memory and persistent data. Overflow to disk is configurable by LRU or heap-percentage usage, allowing DBAs to keep the most active data in memory and the least active data on disk. Data can be written asynchronously to disk, eliminating performance delays for writes.
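As a rough sketch (table and column names are illustrative, and the EVICTION and PERSISTENT clauses should be verified against the SQLFire DDL reference), a table that overflows its least-recently-used rows to disk and persists asynchronously might be declared like this:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class MemoryAndDiskConfig {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:sqlfire://locatorhost:1527/");
             Statement stmt = conn.createStatement()) {

            // Hot rows stay in memory; once the member's heap crosses the
            // configured threshold, least-recently-used rows overflow to disk.
            // PERSISTENT ASYNCHRONOUS writes to the disk store off the
            // critical path of each insert/update.
            stmt.execute(
                "CREATE TABLE order_history (" +
                "  order_id INT PRIMARY KEY, customer_id INT, total DECIMAL(10,2))" +
                " PARTITION BY PRIMARY KEY" +
                " EVICTION BY LRUHEAPPERCENT EVICTACTION OVERFLOW" +
                " PERSISTENT ASYNCHRONOUS");
        }
    }
}
```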

But SQLFire is not just a memory cache: joins are executed in parallel across the cluster, in memory on each member, maximizing query performance and giving an order-of-magnitude benefit compared to disk-based RDBMS clusters. As the number of members hosting data for a partitioned table increases, query performance can scale linearly.

SQLFire routes queries to members that hold data instead of sending data to members that execute queries, lowering network latency. Members that hold data and do not participate in the query are pruned from the query execution. This yields increased query throughput without the expensive hardware upgrades of traditional RDBMS systems.

Sign #3: Traditional RDBMS Stability and Reliability

Keeping a traditional database up and running takes time and money. Every database implementation weighs the cost of downtime against the cost of regular back-up and restore versus software-based clustering, hardware-based failover, etc. Disaster recovery or “no single point of failure” models require additional steps in planning, implementation, testing, and operations. Is a cold standby enough? Warm? Hot? Mirroring, log shipping, syncing? The fact is that one or two nodes is not a model for stability when compared to 10 or 20 or 100 nodes that are all hot, live, synced, and running in parallel.

SQLFire Stability and Reliability

The SQLFire shared-nothing cluster is fault-tolerant and will automatically route queries to active members, so there is no single point of failure. High availability is achieved via automatic synchronous data replication across the cluster. Members that go down can be brought back up without interfering with running queries, minimizing data loss.

WAN support comes with failover and active-active replication scenarios supported. Disaster recovery is handled within the cluster: clients connected to members that drop off the cluster automatically get their queries rerouted to another part of the global cluster. Data can be persisted to disk, for recovery and restart purposes, synchronously or asynchronously. User-configured redundancy levels are maintained by SQLFire to allow queries to run even if multiple nodes fail. Of course, manual configuration, tuning, and optimization are also possible. SQLFire was designed as a distributed data grid from the beginning (it is built on GemFire), and all of the stability-related functions are built in.

Implementations and the Cost of Rip and Replace

OK, if you see the advantages and have read this far, the next question that comes up is—how much work do we have to do to replace our database versus the upgrade we were considering? Or, is it possible to add SQLFire to our architecture without replacing the traditional database?

>> Download Trial Here

Even if the solution (like SQLFire) is based on non-proprietary, economical hardware nodes, there are other considerations. For replacement decisions, many people probably make a gut-level decision about an in-memory data store when they hear the words NoSQL. We have all heard of NoSQL, but how many of us are willing to translate all of our terabytes of data and applications into a new paradigm that is unfamiliar? This requires months of rework, testing of the new queries, careful consideration of the new database’s rules about transactions or datatypes, and retraining of DBAs and end users alike. SQLFire is NOT NoSQL. It is NewSQL, and there are two categories of capabilities to consider for keeping “replacement costs” down on your path to a data grid:

Adding SQLFire to Existing Data Stores

SQLFire can act as a front-end memory cache to a back-end traditional RDBMS. User-configurable and pluggable write-behind or read-through event handlers can cache results from queries and deliver them to clients quickly, and you can propagate updates to the backend RDBMS synchronously or asynchronously. You can also connect SQLFire to multiple backend data stores for increased heterogeneous query performance. The backend does not need to be an RDBMS: any pluggable data storage paradigm, from flat files to XML to object stores, can be used with SQLFire’s flexible pluggable interface classes.
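The following sketch shows the write-behind pattern in outline only. The callback signature and backend connection are hypothetical; SQLFire's actual plug-in interfaces (row loaders, writers, and listeners) are defined in its documentation. The idea is simply to react to a cluster event and propagate the change to the system of record.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

// Conceptual sketch of a write-behind handler. SQLFire's real plug-in
// interfaces differ in signature; this only illustrates the pattern the
// article describes: propagating cluster writes to a backend RDBMS.
public class OrderWriteBehind {

    // Hypothetical callback: invoked after a row is inserted into the
    // in-memory ORDERS table.
    public void onInsert(int orderId, int customerId, double total) throws Exception {
        // Propagate the write to the backend system of record
        // (the backend database and URL are illustrative).
        try (Connection backend = DriverManager.getConnection(
                 "jdbc:postgresql://legacy-db:5432/orders");
             PreparedStatement ps = backend.prepareStatement(
                 "INSERT INTO orders (order_id, customer_id, total) VALUES (?, ?, ?)")) {
            ps.setInt(1, orderId);
            ps.setInt(2, customerId);
            ps.setDouble(3, total);
            ps.executeUpdate();
        }
    }
}
```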

User-configurable and extendable per-table eviction settings allow the most-queried data to be kept in memory. As well, you can trigger code off of SQLFire events (e.g., insertion of a row, cache miss when reading a row), and these can be extended with user Java classes. Java classes can also be stored inside the database as objects, with specialized serialization and comparison logic, allowing for object-relational functionality. There is also the ability to plug in Hadoop jobs through the event trigger mechanism to launch map-reduce jobs from the cluster.

Clients can connect to SQLFire in either thin client mode (the client does not host data and requests it from the cluster, requiring no more than two hops to reach the member hosting the data needed), or peer client mode (the client joins as a data host, and is always at most one hop from the member hosting the data needed).
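Here is a hedged sketch of the two connection modes, assuming SQLFire's JDBC URL formats; host names, ports, and connection attributes are illustrative and should be checked against the SQLFire connection documentation.

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class ConnectionModes {
    public static void main(String[] args) throws Exception {
        // Thin client: this JVM holds no data; queries reach the right member
        // in at most two hops (client -> server -> data host).
        Connection thin = DriverManager.getConnection(
            "jdbc:sqlfire://locatorhost:1527/");

        // Peer client: this JVM joins the distributed system as a data host,
        // so it is always at most one hop from the member holding the data.
        Connection peer = DriverManager.getConnection(
            "jdbc:sqlfire:;locators=locatorhost[10334];host-data=true");

        thin.close();
        peer.close();
    }
}
```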

Replacing an RDBMS with SQLFire

SQLFire can also act as an in-memory data store. You don’t need to write new application code, learn a low-level API, or adopt a new query language to start sending queries to SQLFire. It is fully ANSI SQL-92 compliant and comes packaged with JDBC and ADO.NET support. Existing queries and scripts can be run on SQLFire with minimal schema modification (primarily, to describe the partitioning strategy to be used for a table). Data is stored internally as a key-value store but externalized as relational tables. Clients do not need to know the internal representation or physical location of the data to query the cluster, whether it be on a local network or across a WAN topology.
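For example, a migration might look like the sketch below, where the only SQLFire-specific change is the partitioning clause appended to the existing DDL (table and column names are illustrative):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ExistingQueries {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:sqlfire://locatorhost:1527/");
             Statement stmt = conn.createStatement()) {

            // Original RDBMS DDL, with only a partitioning clause appended.
            stmt.execute(
                "CREATE TABLE customers (" +
                "  customer_id INT PRIMARY KEY, name VARCHAR(50), region VARCHAR(20))" +
                " PARTITION BY COLUMN (region)");  // the one SQLFire-specific addition

            // An existing ANSI SQL-92 query runs unchanged; the client never
            // sees which members hold which partitions.
            try (ResultSet rs = stmt.executeQuery(
                     "SELECT region, COUNT(*) FROM customers GROUP BY region")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + ": " + rs.getInt(2));
                }
            }
        }
    }
}
```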

Transactional or non-transactional queries can be run concurrently with full ACID support for query consistency and optimistic reduced-locking conflict detection between concurrent transactions. DDLs are atomically executed across SQLFire, persistent across member shutdown and startup, and replayed to new nodes as they join, ensuring cluster-wide consistency of schema.
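Because the interface is plain JDBC, transactional code looks just as it would against a traditional RDBMS. The sketch below (the accounts table and values are illustrative) groups two updates into one transaction and handles a conflict reported by a concurrent writer:

```java
import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class TransferFunds {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:sqlfire://locatorhost:1527/")) {
            conn.setAutoCommit(false);   // start an explicit transaction
            try (PreparedStatement debit = conn.prepareStatement(
                     "UPDATE accounts SET balance = balance - ? WHERE account_id = ?");
                 PreparedStatement credit = conn.prepareStatement(
                     "UPDATE accounts SET balance = balance + ? WHERE account_id = ?")) {

                debit.setBigDecimal(1, new BigDecimal("100.00"));
                debit.setInt(2, 1);
                debit.executeUpdate();

                credit.setBigDecimal(1, new BigDecimal("100.00"));
                credit.setInt(2, 2);
                credit.executeUpdate();

                conn.commit();           // both updates become visible atomically
            } catch (SQLException conflict) {
                // A write-write conflict with a concurrent transaction surfaces
                // here; roll back and retry as appropriate.
                conn.rollback();
            }
        }
    }
}
```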

Please feel free to ask questions or make requests below for future articles. Thank you!

For more about SQLFire, visit the SQLFire resources page, read the documentation, or download a trial.

About the Author: Randy Spalten is a Senior Programmer with vFabric SQLFire and has 15 years experience in database internals, index management, and query optimization. He’s passionate about making Big Data queries faster, easier, and more scalable.
