
EMC VMAX3, XtremIO and VMware Solutions: Helping Solve IT’s Biggest Challenges

By Loretta Brown, vice president, OEM Alliances, VMware

The mobile cloud era (cloud, mobile, social, big data) is driving structural change across many industries. Organizations that are able to take advantage of this by delivering workloads to any device, across any cloud, on demand and securely, can gain a competitive advantage. In order to accomplish this, IT must build a hybrid cloud solution based on a software-defined datacenter.

The tectonic shifts in IT pose great challenges for organizations, and at VMware we are proud to continue collaborating closely with partners like EMC to help solve some of the industry’s biggest IT challenges.

With today’s announcements from EMC on its Next Gen VMAX3 and XtremIO solutions, VMware continues its strong partnership with EMC as customers continue to accelerate and realize the benefits of the software-defined enterprise.

VMAX3, XtremIO and VMware solutions

Today, EMC revealed a new VMAX solution, VMAX3, a hyper-consolidated platform that delivers predictable service levels for up to tens of thousands of virtualized workloads, both file and block, and provides high availability with seamless cloud access.
VMAX3 includes enhanced management support with vSphere and acceleration with VMware VAAI (vSphere APIs for Array Integration).

More information on the new VMAX3 is available on EMC’s website.

EMC also announced new XtremIO solutions, which help organizations realize powerful performance and agility in their software-defined data centers. VMware Horizon 6 and XtremIO meet at the intersection of VDI and storage, empowering a dynamic end-user experience at scale, with radically simple central management across desktops, BYO devices and applications, and low TCO for IT.

For specialized virtualized workloads in block environments that require low latency and inline data reduction, such as VDI, mission-critical applications and databases, XtremIO and VMware Horizon are a powerful combination. Thousands of desktops can be deployed in minutes for all kinds of users, and enterprise applications soar with incredibly fast, low-latency performance – all without compromising security or risking data loss.

Together, the combination of EMC’s new solutions with VMware infrastructure helps customers keep on top of today’s workloads and get ahead with tomorrow’s applications.

We congratulate EMC on its significant portfolio launch today, and look forward to continuing to partner as we help solve IT’s biggest challenges.

VMworld 2014 is quickly approaching, and you can learn more about how EMC and VMware are working together at many of the sessions available at the event. The VMworld session scheduler is now live; there you can search for sessions on joint solutions with EMC. We look forward to seeing you in San Francisco, Aug. 24-28.

Reports of vCloud Director’s death exaggerated

Borrowing from Mark Twain’s retort to newspaper stories of his death, reports of the death of vCloud Director are exaggerated. vCloud Director (vCD) is alive and well and now 100% focused on the needs of the service provider market, where it powers more than 250 public clouds in the vCloud Powered and vCloud Datacenter programs in addition to VMware’s own vCloud Hybrid Service.

We’ll continue to integrate vCD functions into vCenter and vCloud Automation Center, as previously announced. With this strategy, we can focus vCloud Director development on the service provider market and public cloud, and vCloud Automation Center on the needs of enterprises and private cloud. The product management and engineering teams for vCloud Director are part of VMware’s Cloud Services business unit, and we’re working hard on the next release.

So, here are the key facts:

1) Development of vCloud Director continues at VMware, now 100% focused on the cloud service provider market.

2) vCloud Director will continue to be available in the VMware Service Provider Program (VSPP) and also continues to be a foundational component of vCloud Hybrid Service, VMware’s IaaS offering.

3) The next release of vCloud Director will be version 5.6, due in the first half of 2014 and available through VSPP to cloud service providers.

4) VMware continues to develop and enhance the vCloud API, to provide API access to new capabilities, and to make the API faster and easier to use.
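As a concrete illustration of the vCloud API mentioned above, here is a minimal Python sketch that builds (without sending) the session request that begins a vCloud API conversation. The host name and credentials are hypothetical placeholders; the `/api/sessions` path, the `user@org` credential format and the version-selecting Accept header follow the published vCloud API.

```python
import base64
import urllib.request

# Hypothetical endpoint and credentials, for illustration only.
VCLOUD_HOST = "https://vcloud.example.com"
API_VERSION = "5.5"  # the API version string is selected via the Accept header

def build_login_request(user, org, password):
    """Build (but do not send) the session request that starts a vCloud
    API conversation. A real call would POST this and read the
    x-vcloud-authorization token from the response headers for use on
    all subsequent requests."""
    credentials = base64.b64encode(f"{user}@{org}:{password}".encode()).decode()
    return urllib.request.Request(
        f"{VCLOUD_HOST}/api/sessions",
        method="POST",
        headers={
            "Accept": f"application/*+xml;version={API_VERSION}",
            "Authorization": f"Basic {credentials}",
        },
    )

req = build_login_request("admin", "MyOrg", "secret")
print(req.full_url)  # https://vcloud.example.com/api/sessions
```

Once authenticated, the same header pattern carries through every call, which is part of what makes the API straightforward to script against.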

The product team is finalizing the content for the vCloud Director 5.6 release, building on the current vCloud Director 5.5 functionality with new capabilities requested by our service provider customers, as well as new functionality developed for vCloud Hybrid Service. We met with many service providers at VMworld in San Francisco last week to gather feature requests and roadmap feedback, and the product management team will also be in Barcelona for VMworld EMEA, and in Australia for vForum Sydney. Let your VSPP account manager know if you’d like to meet and help shape the roadmap.

Roadmap themes include serviceability (ease of deployment, upgrades and updates), disaster recovery integration and other revenue-generating services, networking (further exploiting virtual networking and NSX), storage and security.

Thank you to our customers and partners for helping us build a better vCloud Director for public clouds, and I hope this post provided useful clarity.


Managing the Shift to a New Model for IT

In a previous post, I painted a picture of a new world in which CIOs act as service brokers, figuring out the best way to deploy services, whether internally or externally. In this new world, the CIO essentially oversees a “rental market,” in which services aren’t bought with expensive capital outlays, but rented with operational budget. Such a scenario raises the question of what impact this “rental market” has on the way IT delivers technology.

The first thing the CIO needs to do is get “rogue IT” – the scenario in which line-of-business personnel go to SaaS vendors or Amazon Web Services (AWS), whip out a credit card, and set up computing services without IT’s knowledge – under control. “Rogue” describes this behavior perfectly, because end-running the IT organization has integration, security, and compliance implications that at the end of the day translate into business risk. CIOs need to make their business partners understand that IT has to be the gateway that ensures privacy, quality and compliance. But CIOs themselves need to understand that in the cloud era IT’s monopoly on offering IT services is over.

Even though IT might be able to offer the same service, it must recognize that in some cases – say, for cost or time-to-market reasons – it might be better to procure a service from a partner than to build and manage it internally. Your job is to manage the contracts, taking responsibility for the technical and enterprise issues that remain yours. In a world where third-party services will play an increasingly important role, this management function is the critical shift away from rogue IT.

In this scenario, IT becomes a strategic sourcer of technology – essentially a facilitator or broker that helps the business get applications up and running quickly while still maintaining its crucial role in security and compliance. This requires IT to build muscle in disciplines such as contract management and governance, risk and compliance. It may also require that IT figure out how to retroactively apply controls to applications that the LOBs have already brought in. And there are probably more of them than you think. When VMware’s IT department went through this process, it initially thought it had approximately 20 SaaS applications – it discovered about 70.

A New Model for IT

There are no traditions for this new rental market. It’s like the wild, wild west. It will involve new roles and new structures, new relationships between LOBs and IT. I’ve seen some customers spin out their IT group and rent services back to the company – kind of like outsourcing, except there is no independent outsourcing vendor. It’s the spin-off’s responsibility to deliver services in the most cost-effective way possible, whether that’s through internal IT, outsourced IT or SaaS options.

LOBs benefit from these new arrangements. Before, they were essentially taxed for IT, paying up to 20 percent of their budget for shared services such as IT and human resources, without any clear visibility as to what they were getting for their money. In this rental market, where IT is constantly in competition with cloud providers, IT has to provide better transparency as to the true cost of its services to enable comparison with externally sourced services. In addition, as IT’s customers increasingly come to expect a consumption-based pricing model, metering is needed to demonstrate actual usage by the LOBs. The result of this greater transparency is greater efficiency all around.

That level of transparency may sound highly unorthodox to some CIOs, but consider the case of running IT like a business with your own P&L, as opposed to a best-guess allocation of costs to your customers. Some of your services could be revenue generating, for example, by providing hosting to other companies, even those that might be considered competitors. A few banks I have visited in EMEA and Australia have been offering such services to smaller banks for years; the smaller banks save money and the bigger one gains a revenue stream. In fact, I’m seeing this strategy adopted across industries and regions. It’s a powerful driver of greater IT efficiency and agility. The reason: when IT sells its services externally, competitive pressures ensure it must deliver new services faster and provide value more cost-effectively.

Implementing a New 80/20 Rule

Transparency is what you need to adapt successfully to the new model for delivery of IT services. Cloud – IaaS, PaaS, SaaS – is fundamentally changing service delivery. It’s now an order-to-deliver service model rather than a lengthy and costly build-to-order one. Instead of waiting for the LOB to request a service and then building it out, IT organizations are creating pre-defined catalogs of services for each type of user or common request. When the LOB orders something from the catalog, it’s instantly provisioned on demand.

So where does transparency fit in? Without transparency into the costs and actual usage of your services you cannot make fact-based decisions on whether to source your services internally or from the external rental market. To compete cost-effectively with cloud providers you need to implement the 80/20 rule – standardize your service offerings so that your catalog addresses 80% of your workload needs.

More and more, I see companies instituting a new position of cloud operator or administrator, which acts like an IT product “service” manager to the line of business. Regardless of the title, this important role is tasked with making decisions about what types of services are needed, which ones to offer in the catalog and where they’re sourced or hosted. The decision may be to source the service internally, or, if it’s just a pilot project, the admin may create a short-term AWS account. It’ll be that person’s job to determine the best strategy. With such a system in place, it’s easy for IT to direct special attention to the 20 percent of business demand that still needs customized services.

As you standardize and automate services that represent 80 percent of your requests, they begin to take up less time and effort. You free up more money to invest in customization where it can help the business. Don’t do it just to save money – think about it as a reinvestment effort. Eventually, you can spend 20 percent of your time on tactical issues and 80 percent of your time on strategic issues. You may not ever get to those actual numbers, but that’s not a bad goal.

The upshot: IT gets more efficient, and the business gets better service.

Ramin Sayar is senior vice president and general manager of VMware. He blogs regularly about the ongoing challenges customers face in a changing IT world.

Being a CIO Isn’t Fair

Why isn’t being a CIO fair? Because you have to pay attention to both IT and the business, and the business only has to pay attention to the business. It’s like you have twice the work.

It’s even worse than that, though. Your colleagues on the business side don’t really care what you do. They just want to make sure what you do enables them to do what they need to do –  without interfering with their ability to do it. They don’t care about infrastructure – they just want it to be reliable. They don’t care about security – they just don’t want their data hacked. They don’t care about technology – they just want to be innovative. But CIOs have to worry about all of that – the technology and how it affects the business.

I’m thinking about the inequity CIOs face because I recently spent a few weeks meeting with customers across the U.S., Europe, the Middle East and Africa. I was lucky enough to see a nice cross-section of today’s IT challenges, talking with executives at various levels within IT, working at companies of various sizes and in multiple industries. Many of these executives have simply gotten over the idea of IT being fair. It’s like saying doctors have to deal with being on call at night – that’s just the way it is in this line of work.

These CIOs have moved to a new level of understanding, of acceptance, to a new place where they don’t talk about how difficult IT is, they just focus on how they can best serve the business. It’s all about transformation, about delivering agility. Some of it is about reducing cost at the same time, but most of it is creating a stage upon which the business can perform, where nobody sees or cares what IT is doing behind the curtain.

Here’s what three transformational CIOs are doing to make the business more agile:

One CIO I met serves a global builder of ships, of all sizes and configurations. In order to serve customers better, the company needed to design quickly, get those plans approved, and begin construction. That required follow-the-sun operations with a combination of in-house and outsourced design, which meant that shared design tools had to be accessible from anywhere in the world. The CIO oversaw the creation of a highly virtualized network with automated access to design applications (and appropriate security based on roles). The result: reduced design-to-build time for customers around the world while maintaining security and privacy for their sensitive data.

Another CIO leads a financial services firm, part of an industry that’s besieged by distributed denial-of-service (DDoS) attacks from hackers around the world. In order to provide the highest level of protection, this CIO deployed a state-of-the-art virtualized architecture – while also rethinking how virtualization and security should work together in specific zones to create better data protection. The architecture incorporates new application design that takes into account both cloud computing and security, in such a way that data is protected. The result: more uptime and protection, with reduced risk of attack. The implementation has been so successful that the CIO is sharing it with other CIOs in the region.

Savvy CIOs are collaborating with their business counterparts on how technology can enhance revenue. At one manufacturer I visited, the CIO is working with the business to expand revenues through new value-added services. The IT requirements included improved connectivity to the cloud and mobile access from anywhere. He supported the effort by ordering significant data center consolidation in order to improve operational efficiencies, driving down costs through virtualization and creating a standardized software-defined data center. The result: more innovative services, competitive differentiation, higher revenue, and deeper customer engagement.

These are all examples of CIOs moving from defense to offense and transforming their IT roles to better align with the business and drive change. What’s the common thread here? The infrastructure – the stage on which the business performs. These CIOs understand the needs of their business. They understand how to link technologies such as cloud and virtualization to make change happen. It’s still not fair that CIOs have to make those transformational connections, and do it without the satisfaction of knowing the business understands and appreciates what it takes to make transformation happen. But these CIOs have been able to improve agility, as well as increase revenues, reduce risk, or both. What they lose in fairness, they gain in results.

Ramin Sayar is senior vice president and general manager of VMware. He blogs regularly about the ongoing challenges customers face in a changing IT world.

Previous posts in this series:

Five Key Steps Toward Innovation

Shifting from Infrastructure to Innovation

The Inflection Point Looms

EMC ViPR: A New Storage Platform for the Software-Defined Data Center

Today at EMC World, EMC announced ViPR, a new open storage platform that enables the abstraction of the storage layer into a single pool of virtual storage within a software-defined data center. ViPR will easily integrate with VMware-based environments, and will enable organizations to centrally access and manage EMC and heterogeneous physical storage infrastructure.

By extending the benefits of the software-defined datacenter to storage, customers will be able to use their existing VMware infrastructure with ViPR to drive greater value, automation and simplicity out of their existing storage solutions.

At VMware, our mission is to extend the benefits of virtualization to all areas of the data center — beyond compute to security, networking, management and storage. With the help of our strategic partners, such as EMC, we can help customers realize greater efficiency, flexibility and agility in their IT infrastructure through a software-defined data center architecture.

Please see the EMC news release for further details on this announcement, and also a blog post on the news from Amitabh Srivastava, President, EMC Advanced Software Division.

Five Key Steps Toward Innovation

In my last blog post, I talked about shifting from infrastructure to innovation. Innovation has always been a key goal of IT, but the pathway to achieving it has never been easy. The cloud has made it easier, but you need a solid foundation on which to build innovation.

Here are five key steps toward building that pathway, best handled in sequence.

Focus on What’s Important. This goes back to the age-old idea of alignment; that is, how can IT best serve the business? Let’s assume you and your business colleagues have worked out the portfolio of services you need to deliver to help the business meet its objectives (of course, that’s a whole separate discussion in itself). The next question is, how should you deliver them? Is it with internal resources or through a third-party service provider? Most CIOs believe that their IT department can handle anything the business can throw at them. But even if it can, should it? Leave ego out of the equation. You should reserve the skills of your IT team for the most mission-critical needs, and outsource or co-source what’s less important.

Rely on Standardization. Standardization is king. Flexibility and choice are nice, but following the 80/20 rule will reduce costs while still delivering sufficient capability for the needs of the great majority of your business partners. Standardize and enable self-service for 80% of the common requests/requirements. Outsource them to the cloud if it makes sense (and not just financially – compliance and security are vital as well). Then leverage your team resources in shared services or infrastructure teams to do the heavy automation and lifting for the custom 20% of projects.

Calculate Your Baseline. To make informed sourcing decisions you have to develop a sound formula for calculating your service costs. Educated guesses and gut feel no longer cut it. You can achieve this through IT financial management tools that automate the capture of your costs (no more spreadsheets!) and allocate them to specific services. Next, compare your baseline to the competition – benchmarking shows how you stack up against your peers and cloud service providers (and how you’re improving over time).  These capabilities are all about confidently making the right sourcing and investment decisions for IT and the business.
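The kind of unit-cost baseline described above can be sketched in a few lines of Python. Every cost figure, the VM count, and the external benchmark price below are invented for illustration; the point is the shape of the calculation, not the numbers.

```python
# Hypothetical fully loaded internal costs for one IT service, USD/month.
MONTHLY_COSTS = {
    "hardware_depreciation": 18000,
    "software_licenses":     9000,
    "facilities_and_power":  4000,
    "admin_labor":           21000,
}

VMS_IN_SERVICE = 400  # virtual machines this service delivers

def unit_cost(costs, units):
    """Allocate total monthly cost across the units of service delivered."""
    return sum(costs.values()) / units

internal = unit_cost(MONTHLY_COSTS, VMS_IN_SERVICE)  # $/VM/month
external_benchmark = 145.0  # hypothetical comparable cloud provider price

print(f"Internal cost per VM: ${internal:.2f}/month")
if internal <= external_benchmark:
    print("Internal sourcing is cost-competitive for this service.")
else:
    print("External sourcing merits a closer look.")
```

In practice an IT financial management tool would capture the cost inputs automatically, but the comparison logic, internal unit cost versus an external benchmark, is the same.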

It’s All About the Data. No matter where your information lives – on-premise or in the cloud – there’s got to be an easy way to send it back and forth. If you don’t make it easy, you’ll be creating your own bottlenecks. And make sure you develop a cast-iron governance strategy. Just because you don’t control the data in-house doesn’t mean you’re not responsible for it. The flexibility of the cloud bestows great power, and with great power comes great responsibility.

Strive for Visibility and Transparency. I talk to many CIOs who have a definitive mandate: reduce your budget either by real dollars or percentage costs. To do this you need transparency. Think about how you can create a “bill of IT” that clearly states not just what your services cost but who is consuming them. Leverage metering and reporting capabilities to empower a fact-based discussion with your business stakeholders, with showback or even chargeback. This will help you and your business counterparts make better decisions and drive down costs. Use transparency to prove your efficiency – remember, you must be able to show the payoff.

Here’s my recommendation: establish a small, greenfield private cloud deployment for a key line of business and expand from there. Track everything, from costs to ultimate benefits. Show how your investment paid off – that is, how your foundation for innovation enables you to invest limited funds wisely and generate the projected payoff.

Demonstrate that you’ve mastered your costs, targeted business problems, and delivered business value. You’ll have not only created the pathway to innovation, but ratcheted up your reputation within the company.

VMware Roars Into OpenStack Summit

As we head out to Portland for the latest installment of the OpenStack Summit, we have an exciting agenda of speaking sessions and demos, and will be showcasing our latest virtualization wares on the show floor. For a schedule of all the VMware sessions, check out the show planner we’ve created. Here’s a snapshot of what you can expect (and experience) at the show.

Keynote Session – “Virtual Networking, A Vagabond’s Log”

On Wednesday, April 17 at 1:50 p.m., VMware’s Martin Casado takes you along on the network virtualization journey. While it’s still an evolving area, the industry now has a few years of virtual networking under its belt. In this talk, Martin will draw from his experience of hundreds of customers visited, hundreds of thousands of miles flown, and dozens of deployments to describe use cases, what works, what doesn’t, and where things seem to be going.

Panel: Network Virtualization and OpenStack Networking users

Want to hear from real world Quantum users at eBay and HP among others? This session is a panel discussion with OpenStack users that have hands-on experience deploying Quantum in production environments, backed by network virtualization technology.

VMware/Nicira NVP Deep Dive

On Monday, April 15 at 11:00 a.m., VMware will provide a “deep dive” into the Nicira Network Virtualization Platform (NVP). This session will provide a detailed overview of NVP, its components, how NVP operates, and how NVP integrates with OpenStack Quantum.

Case Study on Virtualizing Advanced Network & Security Services

On Wednesday, April 17 at 11:50 a.m. in room A106, VMware will present a technical session on the state of the art in advanced networking and security services implemented in software. The session will dive into the operational and technical elements of integrating services such as load balancers, firewalls and VPNs in your cloud via OpenStack Quantum’s REST APIs. The session will explore the benefits of using virtual appliances to deliver these services on top of standard x86 servers, further decoupling network service feature delivery from hardware installs, procurement, and forklift upgrades.
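To make the REST integration flavor concrete, here is a minimal Python sketch that builds (without sending) a request to Quantum’s load-balancing extension, asking for a new round-robin HTTP pool. The endpoint, token and subnet ID are hypothetical placeholders; the `/v2.0/lb/pools` path and payload shape follow the Quantum LBaaS extension of that era.

```python
import json
import urllib.request

# Hypothetical Quantum endpoint; 9696 is the conventional Quantum API port.
QUANTUM_URL = "http://quantum.example.com:9696"

def build_pool_request(token, subnet_id):
    """Build (but do not send) a request asking Quantum's LBaaS extension
    for a new round-robin HTTP load-balancer pool on a given subnet."""
    body = json.dumps({
        "pool": {
            "name": "web-pool",
            "protocol": "HTTP",
            "lb_method": "ROUND_ROBIN",
            "subnet_id": subnet_id,
        }
    }).encode()
    return urllib.request.Request(
        f"{QUANTUM_URL}/v2.0/lb/pools",
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "X-Auth-Token": token,  # a Keystone-issued auth token
        },
    )

req = build_pool_request("hypothetical-token", "subnet-1234")
print(req.full_url)  # http://quantum.example.com:9696/v2.0/lb/pools
```

Firewalls and VPNs follow the same pattern through their own extensions, which is what lets a cloud operator script these services instead of racking appliances.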

OpenStack Networking Hands-on Lab

On Wednesday, April 17 at 3:40 p.m., users will get access to a live OpenStack + Quantum setup and be able to walk through key Quantum deployment use cases, with members of the Quantum core development team available to provide guidance and answer questions.

We hope to see you there!

Shifting from Infrastructure to Innovation

In my conversations with CIOs and other IT executives, I often hear how their teams are focused on maintaining a solid, reliable infrastructure. Their priorities are continuity of service, meeting SLAs, and minimizing disruptions and downtime. That’s an important, admirable goal, but as every IT exec now knows it’s not the whole picture.

If your teams spend too much energy on maintenance to ensure things don’t go wrong, they’re probably going to miss the opportunity for moving forward – and the threat of being left behind.  Consumerization of IT and the cloud have changed everything. As one customer exec pointed out to me recently, “Public cloud options can be the pink slip for IT infrastructure and operations teams.” Let’s face it, the monopoly is over.  Public cloud services, both consumer and business, have set a new standard for IT service delivery – ease of access, speed, reliability, etc. – as well as expectations on price, and IT teams are expected to match or better that standard if they want to stay in the game.

With so much available today on demand in the cloud, there’s greater pressure than ever on IT to reduce expense and shift spending from keeping the lights on to new, innovative projects that drive business productivity and profit growth. You need to empower your teams to think and act differently, enabling them to become a world-class IT organization.

Your teams can no longer focus on the infrastructure; they have to focus on taking advantage of the infrastructure to deliver new business value through innovation. In a world of options – private cloud, public cloud, hybrid cloud, virtualized and physical infrastructures – the focus needs to be on making the choice that’s right for your business.

The question is no longer “How do I make my infrastructure the best it can be?” but: “What’s the best infrastructure for what we want to do?”  IT has to decide the most logical place to provision and operate infrastructure and applications, based on criteria such as cost, risk, compliance, security, etc. That’s where the innovation comes in – what works best where? What capabilities can I start to deliver as services? What cloud services can I take advantage of to help drive what the business is trying to achieve?

That’s the shift we’re seeing in IT. Instead of providing a super reliable infrastructure to support your applications, you’ll be sourcing and providing services. As I mentioned in my previous post, IT will become a broker for services that the business needs, with a fact-based approach to identifying the best source of those services, internally or externally. Being a service broker will help your teams shift toward innovation, while matching or bettering the standard set by public cloud services.

Some of those services – the ones supporting your mission-critical activities – will stay on-premise for reasons of security and compliance. Some of them – the utility part – you’ll offload to a cloud infrastructure provider through IaaS. The rest of them – the part in the middle – you’ll offload to a SaaS or PaaS vendor (someday these may come back in house or they may stay in the cloud or even move back and forth depending on cost and changing business demand).

Being an innovative IT organization is about trying new things. About being daring. About making decisions faster, killing projects sooner, investing more in projects that warrant it. And about how the cloud – private, public, hybrid – can help you do that.

This is going to take a mind-shift on the part of your teams and a critical look at your processes. You’re going to have to be more customer-centric and deliver cost transparency to your stakeholders. You’re going to have to standardize the services you offer (think 80/20 rule) and enable self-service access to them. And you’re going to have to put the right governance processes in place – who gets access to what and where does your data live.

In my next post I’ll walk through how you can tackle these challenges.

Care to comment on this blog post? Share your thoughts with us in the comments section.

Now Available: VMware vSphere with Operations Management

By: Michael Adams, Group Product Line Marketing Manager, Cloud Infrastructure

Today, we are pleased to announce the general availability of VMware vSphere with Operations Management (read the February 12, 2013 press release).

Customers have achieved tremendous cost savings and IT agility by virtualizing their server hardware. To help them better monitor and manage VMware vSphere and business critical applications running in virtualized environments, VMware vSphere with Operations Management combines VMware vCenter Operations Management Suite Standard Edition with every VMware vSphere edition in a convenient, single SKU.

VMware vSphere with Operations Management is a new VMware vSphere product line that helps customers make the most of their investment in VMware vSphere by delivering deep insights into infrastructure health to proactively avoid bottlenecks and improve platform and application availability and performance. Additionally, VMware vSphere with Operations Management enables customers to optimize their virtual environments and make the most efficient use of resources through integrated capacity planning.

Customers that rely on both VMware vSphere and the VMware vCenter Operations Management Suite have reported substantial improvements in key performance metrics as well as operational and business benefits, including:

  • Reduced Capex costs by up to 30 percent;
  • Optimized capacity, improving utilization by up to 40 percent and consolidation ratios by 37 percent;
  • Improved application availability and performance, cutting downtime by more than a third and reducing the time it takes to find and resolve problems by up to 26 percent; and,
  • Nearly double the operational savings they receive from VMware vSphere alone.

VMware vSphere with Operations Management is available with a simple and scalable per processor licensing model – with no core, vRAM or number of virtual machine limits – so customers can deploy more virtual machines and further optimize their resource utilization without having to worry about added costs.

To learn more, visit the VMware vSphere with Operations Management pages on vmware.com.


The Inflection Point Looms

The world is changing and if you’re deep in the IT trenches, it’s hard to see what’s coming. If you do find time to peer out on the horizon and think about the future, it can still be hard to know how these sweeping changes might affect you.

In talking with customers, as I get to do regularly, I hear about challenges like this all the time. But I also get to hear amazing and creative ways that organizations are meeting these challenges. That’s why I’ve decided to more regularly contribute to this blog, sharing with you how people are overcoming the uncertainty they face.

Big initiatives like mobility, cloud computing, and collaboration are dramatically changing the way organizations work and therefore the way IT works. My posts here are meant to start a dialogue on what those changes might mean to you and ways that you can best respond to them. We’re facing an inflection point that represents the potential for huge changes in IT, and the time is coming to take control – or face being controlled.

Whether you realize it or not, your monopoly for providing technology to the enterprise is over. You need to adapt. Sometimes you will provide services; sometimes you will have to outsource those services to cloud service providers. I think the path is fairly well defined: IT will become a broker for services that the business needs, with a remit to find the best source of those services, internally or externally.

Whichever path IT takes, you must become more strategically aligned with your business. You must understand both the business’ needs and your real ability to supply them. And if you can’t supply certain capabilities, you will need the insight and expertise to identify best-in-class services from among a multitude of service providers, based on what is most important to the business: cost, value, or responsiveness.

Make no mistake – IT still has much to contribute. I’m not suggesting it will go away. But you must develop (in fact, you already should have developed) a bifurcated view that gives you insight to both what the business really needs, and how IT can best serve the business.

I realize that’s a tall order for you as an IT professional, and it’s a tall order for me to chronicle that change. But at VMware, we see our customers – CIOs, infrastructure architects, data center administrators, network engineers, IT ops teams, etc. – pushing the envelope all the time. Sometimes we educate you; sometimes you educate us. Either way, we’re all on a journey across a new and ever-changing landscape.

Now, as anyone who saw Planes, Trains, and Automobiles knows, traveling with a partner (I’ll let you determine if you’re more like Steve Martin or John Candy) can be problematic, so let’s set out a couple of ground rules.

First, even though this blog is from VMware, I’m not going to talk just about VMware, or our products, except to occasionally illustrate a very specific scenario. I want to focus more on helping you transform your IT department and understand the opportunity we all have available to us today.

Second, this is not a megaphone for me to shout through; it’s a telephone. In other words, the communication should go both ways. I want to hear your challenges, your concerns, your questions and suggestions. I want this to become a forum for anyone who is trying to navigate this new world. Please share your thoughts in the comments section below.

Like many of you, I’ve been in IT a long time… and for most of that history, IT has just been a builder. But it is now clear that we must evolve beyond that. IT will still be a builder, but IT will also be a broker. And a major challenge will be to understand when to be one or the other, based on what is best for the business. The ship has sailed… IT has to transform itself, to become far more agile, so the business can be more agile. You can either lead that change, or you’ll be at its mercy.

In my next post, we’ll get started tackling these challenges, starting with the shift from thinking about infrastructure to working on innovation.

Ramin Sayar is senior vice president and general manager of VMware. He blogs regularly about the ongoing challenges customers face in a changing IT world.