Some of the key trends in big data infrastructure over the past couple of years are:
• Decoupling of Compute and Storage Clusters
  • Separate compute virtual machines from storage VMs
  • Data is stored and scaled independently of compute
• Dynamic scaling of the compute nodes used for analysis, from dozens to hundreds
• Spark and other newer big data platforms can work with regular filesystems (see the sketch after this list)
• Newer platforms store and process data in memory
• Newer platforms can leverage distributed filesystems that use either local or shared storage
• Need for High Availability & Fault Tolerance for master components
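To make the compute/storage decoupling and in-memory points concrete, here is a minimal PySpark sketch. The shared-storage path, the event_type column, and the dynamic-allocation settings are hypothetical illustrations, not a prescribed configuration:

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("decoupled-storage-demo")
    # Grow or shrink the executor (compute) pool independently of
    # where the data lives; shuffle tracking lets dynamic allocation
    # work without an external shuffle service (Spark 3.0+).
    .config("spark.dynamicAllocation.enabled", "true")
    .config("spark.dynamicAllocation.shuffleTracking.enabled", "true")
    .getOrCreate()
)

# Spark reads equally well from a local path, an NFS mount, or a
# distributed filesystem; this shared-storage path is a placeholder.
df = spark.read.parquet("/mnt/shared/events.parquet")

# cache() keeps the working set in executor memory, so repeated
# queries avoid going back to storage.
df.cache()
print(df.groupBy("event_type").count().collect())

spark.stop()
```

Because the executors here only cache the data rather than own it, the compute tier could be resized from dozens to hundreds of nodes without moving a single byte on the storage side.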
Storage – the final frontier. These are the voyages of any business-critical Oracle database. Its endless mission: to meet the business SLA, to sustain increasing workload demands and seek out new challenges, to boldly go where no database has gone before.
Storage is one of the most important aspects of any I/O-intensive workload. Oracle workloads typically fit this bill, and we all know how misconfigured storage or incorrect tuning often leads to database performance issues, irrespective of the architecture the database is hosted on.
In my pre-sales Oracle Specialist role, where I talk to customers, partners, and the VMware field, I always bring up the fact that we can procure the biggest and baddest piece of infrastructure on the face of the earth, and all it takes is one incorrect setting or misconfiguration for everything to go to “Hell in a Handbasket”.
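Since a single bad setting usually surfaces first as abnormal I/O latency, a quick sanity check is worth scripting. Below is a minimal sketch, assuming the cx_Oracle driver and SELECT access to v$system_event; the credentials and DSN are placeholders:

```python
import cx_Oracle

# Placeholder credentials/DSN; assumes SELECT access to v$system_event.
conn = cx_Oracle.connect("perf_user", "secret", "dbhost/ORCLPDB1")
cur = conn.cursor()

# Average wait per I/O event: sustained multi-millisecond averages on
# 'db file sequential read' usually point at the storage layer rather
# than the database itself.
cur.execute("""
    SELECT event,
           total_waits,
           ROUND(time_waited_micro / NULLIF(total_waits, 0) / 1000, 2)
      FROM v$system_event
     WHERE event LIKE 'db file%'
     ORDER BY time_waited_micro DESC
""")
for event, waits, avg_ms in cur.fetchall():
    print(f"{event:40s} waits={waits:>12} avg={avg_ms} ms")

cur.close()
conn.close()
```

Comparing these averages against the storage tier's expected service times is often the quickest way to tell whether a problem lives in the database or in the infrastructure beneath it.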