Mark Thomas, one of Pivotal’s senior engineers, a leading expert, and a long-time Tomcat contributor, made the announcement on Tuesday via the Apache Tomcat mailing lists. Tomcat 8.0 supports the Java EE 7 specifications, including Java Servlet 3.1, JavaServer Pages 2.3, Java Unified Expression Language 3.0, and the new Java WebSocket 1.0 specification.
The specifications and Tomcat releases were developed in parallel, and Thomas had previously explained how the two are related: “As the work on the specifications proceeds, and, as the changes firm up, then those changes will be implemented in the Tomcat 8 branch.”
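To give a feel for the new Java WebSocket 1.0 (JSR 356) API that Tomcat 8.0 implements, here is a minimal annotated echo endpoint; this is an illustrative sketch only (the `/echo` path and class name are our own), and it requires deployment to a JSR 356 container such as Tomcat 8:

```java
import javax.websocket.OnMessage;
import javax.websocket.server.ServerEndpoint;

// Minimal echo endpoint using the annotation-driven JSR 356 API.
// A compliant container such as Tomcat 8.0 scans for @ServerEndpoint
// classes at deployment time; no web.xml entry is needed.
@ServerEndpoint("/echo")
public class EchoEndpoint {

    // Invoked once per incoming text frame; the return value is sent
    // back to the client as the reply message.
    @OnMessage
    public String onMessage(String message) {
        return "echo: " + message;
    }
}
```

The same endpoint could instead be built programmatically by extending `javax.websocket.Endpoint`, but the annotation style above is the more common idiom.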
Plain and simple: Apache Hadoop has become the disruptive technology that is sending every enterprise into overdrive to get up to speed and figure out how to exploit its data. Adoption is accelerating at 60% a year, yet 26% of the most sophisticated Hadoop users say that the time it takes to put Hadoop into production is gating its success.
Judging from the agenda of this year’s Hadoop Summit, held in San Jose on June 26–27, the industry is primed to fix this issue. It is one of the first Hadoop/big data conferences to offer a full infrastructure track. VMware is serious about this too, but we need your help: we need to meet you there!
Strategy Feedback Sessions
VMware’s big data experts, along with colleagues such as EMC’s Chuck Hollis, will be at the conference running a series of strategy feedback sessions concentrating on how extending virtualization will meet tomorrow’s requirements for big data analytics environments. We’d very much like to have you participate, and who knows, you may help shape the very future of Hadoop in big data web applications.
These 90-minute sessions will be run as small groups throughout the conference and will give you a chance to meet some of our top minds and discuss how Hadoop will transform itself to seize the cloud. We’ll share some of what we see happening in the shift to make Hadoop more on-demand in the cloud, along with some of our enabling technologies, such as Serengeti and Hadoop Virtual Extensions (HVE). For your part of these sessions, we will concentrate on questions like: Continue reading →
The End of Life (EOL) process has begun for vFabric Enterprise Ready Server (ERS). The end of availability date is July 1, 2013, after which ERS will no longer be available for purchase. After this date, existing customers will be able to use their active deployments and will continue to benefit from support until their active subscription and support agreement (SnS) runs out. The end of general support (EOGS) date is July 1, 2014.
All ERS customers are encouraged to convert to or deploy vFabric Web Server, the product that replaces the ERS bundle. Web Server integrates the latest open source runtimes, security patches, and bug fixes, and adds new product features that better serve enterprise customers deploying applications across virtualized and cloud environments.
ERS customers with perpetual licenses and active subscription and support (SnS) agreements are entitled to a one-time $0 conversion to vFabric Web Server and open source Apache Tomcat support. ERS customers with term licenses are eligible to convert to equivalent vFabric Web Server term licenses at the same price as the ERS HTTP term license, and can optionally add Apache Tomcat support or vFabric tc Server licenses.
Today, we are excited to officially welcome Cloudera to the VMware family. VMware and Cloudera have entered into a partnership agreement meant to help users of Cloudera’s Hadoop distribution, CDH4, run it in the cloud. As part of this announcement, VMware has tested and certified Cloudera’s Enterprise Big Data software to run on vSphere 5.1, and Cloudera is now part of the VMware Ready and Technology Alliance Partner (TAP) programs.
Whenever we’ve dealt with something in a certain way for a while, that way of thinking becomes a habit. Hadoop deals with a lot of data; currently, the record is 100 petabytes in a Facebook cluster that analyzes log data. Because Hadoop was built by web giants like Yahoo! and Facebook, following Google’s lead, to handle such large data volumes at high performance, it was originally designed to run on bare-metal servers. Since virtualization wasn’t an option from the get-go, the notion that you can’t safely run that much data on a movable virtual machine has largely gone unchallenged.
However, as time has gone on and technology has enabled persistent storage in the cloud, organizations have started to rethink this paradigm. In fact, several companies are using Hadoop and big data today to gain competitive advantage, and while they are running it on virtualized infrastructure, they are not moving the data. Virtualization brings other advantages as well.
VMware’s big data product line marketing manager, Joe Russell, spoke with Roberto Zicari this week in an interview on ODBMS.org that helps articulate not only how Hadoop can run on virtual infrastructure using Project Serengeti, but also why companies should consider it to save time and make Hadoop more usable. Continue reading →
Announced this morning on the new Pivotal blog, where RabbitMQ now resides, this version includes enhancements to garbage collection, consumption, requeuing, memory use, and dead lettering.
For those on Mac OS X, there is a newly packaged, standalone release of RabbitMQ that doesn’t require a separate Erlang install.
Some key, new capabilities include eager synchronisation of mirror queue slaves, automatic cluster partition healing, and improved statistics (including charts) in the management plugin. There are also many enhancements and bug fixes to the server, Java client, Erlang client, and a number of other plugins, including federation, old-federation, shovel, Web-STOMP, STOMP, and MQTT plugins, as well as the consistent hash exchange.
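As a sketch of how two of these capabilities are enabled (the policy name and queue-name pattern below are illustrative): eager synchronisation of mirrored queue slaves is opted into per policy, while automatic partition healing is a broker configuration setting. Both require a running RabbitMQ broker.

```shell
# Mirror all queues whose names begin with "ha." across all cluster
# nodes, and eagerly synchronise new slaves rather than waiting for
# existing messages to drain ("ha-sync-mode": "automatic").
rabbitmqctl set_policy ha-all "^ha\." '{"ha-mode":"all","ha-sync-mode":"automatic"}'

# Automatic cluster partition healing is opted into in rabbitmq.config:
#   [{rabbit, [{cluster_partition_handling, autoheal}]}].
```

With `autoheal`, the broker picks a winning partition after a split and restarts the nodes on the losing side, trading some availability for automatic recovery.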
For growth initiatives, many companies are looking to innovate by ramping up analytical, mobile, social, big data, and cloud initiatives. For example, GE is one such growth-oriented company, having just announced a heavy investment in the Industrial Internet with GoPivotal. One area of concern for many well-established businesses is what to do with their mainframe-powered applications. Mainframes are expensive to run, but the applications that run on them are typically very important, and the business cannot afford to risk downtime or any degradation in service. So, until now, the idea of modernizing a mainframe application has often faced major roadblocks.
There are ways to preserve the mainframe while improving application performance, reliability, and even usability. As one of the world’s largest banks has seen, big, fast data grids can provide an incremental approach to mainframe modernization that reduces risk, lowers operational costs, increases data processing performance, and provides innovative analytics capabilities for the business, all based on the same types of cloud computing technologies that power internet powerhouses and financial trading markets. Continue reading →
Pivotal, first announced in December, is a new venture started by VMware and EMC that is focused on Big Data and Cloud Application Platforms. Formally launched as a stand-alone entity today, Pivotal is led by former VMware CEO Paul Maritz, who has been working as Chief Strategy Officer at EMC since last August.
In a webinar today, Maritz not only confirmed that the new initiative is now a stand-alone business with 1,250 employees from VMware and EMC, but also surprised listeners with the announcement that General Electric is making a strategic investment of $105 million in Pivotal. GE’s Vice President and Corporate Officer Bill Ruh joined the webinar and said GE will hold a 10% stake in the new company. GE CEO Jeff Immelt also joined the call to explain the investment, which brings the value of the newly launched Pivotal to $1 billion.
Training is a great way to speed up development, learn how to improve the performance and usability of your applications, and generally build confidence in your skills. This month, SpringSource is offering Java developers a 15% discount on all VMware training, including Core Spring, Spring Web, Enterprise Integration, and Hibernate classes.
To secure your 15% discount, be sure to use the promo code springcustomerpromo during registration (the promo is not available for partners). All qualifying classes for May 2013 are listed below:
Just as we saw in the dot-com boom of the ’90s and the Web 2.0 boom of the 2000s, the big data trend will lead some companies to make really bad assumptions and decisions.
Hadoop is certainly one major area of investment for companies looking to solve big data needs. Companies like Facebook that have famously dealt well with large data volumes have publicly touted their successes with Hadoop, so it’s natural that companies approaching big data first look to the successes of others. A really smart MIT computer science grad once told me, “when all you have is a hammer, everything looks like a nail.” This functional fixedness is the cognitive bias to avoid amid the hype surrounding Hadoop. Hadoop is a multi-dimensional solution that can be deployed and used in different ways. Let’s look at some of the most common preconceived notions about Hadoop and big data that companies should know before committing to a Hadoop project: Continue reading →