Plain and simple: Apache Hadoop has become the technology disrupter that is sending every enterprise into overdrive to get up to speed and figure out how to exploit its data. Adoption is accelerating at 60% a year, yet 26% of the most sophisticated Hadoop users say that the time it takes to put Hadoop into production is gating its success.
Judging from the agenda for this year’s Hadoop Summit in San Jose on June 26 & 27, the industry is primed to fix this issue: it is one of the first Hadoop/Big Data conferences to support a full infrastructure track. VMware is serious about this too, but we need your help—we need to meet you there!
Strategy Feedback Sessions
VMware's big data experts, along with colleagues such as EMC’s Chuck Hollis, will be at the conference running a series of strategy feedback sessions concentrating on how extending virtualization will meet tomorrow’s requirements for big data analytics environments. We’d very much like to have you participate—and who knows, you may help shape the very future of Hadoop in big data web applications.
These 90-minute sessions will be run as small groups throughout the conference and will give you a chance to meet some of our top minds on how Hadoop will transform itself to seize the cloud. We’ll share some of what we see happening in the shift to make Hadoop more on-demand in the cloud, along with some of our enabling technologies such as Serengeti and Hadoop Virtual Extensions (HVE). For your part of these sessions, we will concentrate on questions like:
- How far along is your investment in Hadoop? Just starting out, a couple of clusters in production, or further than that?
- What toolsets are you using? What does your environment workflow look like?
- Where is the data coming from? Legacy systems or new data?
- What kinds of users are getting value from the solution? What sort of experience do they prefer?
- Is information governance an issue? Do you need to restrict certain users’ access to certain types of data? If it’s not an issue today, do you see it becoming one?
- Is it possible for multiple clusters to share data sets?
- On the infrastructure side, what hurts the most in setting up and maintaining a Hadoop cluster? What would you like to see fixed?
- Is there interest in being able to easily vary compute, capacity, and bandwidth? Or does the uniform building-block approach work best for you?
- Is there any interest in backing these environments up? Or is it simpler to recreate from existing data feeds?
- Is there interest in business continuity and disaster recovery?
- Do you think any of these answers will change over time?
If you are interested in participating, please contact Chuck Hollis at chuck.hollis_AT_emc.com or Joe Russell at joerussell_AT_vmware.com. Let them know a little about yourself and your company and why you are interested. They will contact you directly to let you know availability for the sessions.
Check out Chuck’s blog post, “Going To Hadoop Summit? We’d Love to Chat…”, for more on these sessions.