posted


Here at VMware, we’ve recognized that containers, microservices, and DevOps – among other technologies and methodologies – are changing how modern applications are built, deployed, and managed. We’ve espoused our belief that VMs and containers are better together, and we continue streamlining application development for DevOps teams on our unified platform. Our sister company, Pivotal, has been working on containers with us for several years, and both VMware and Pivotal continue to support open standards in the community.

As enterprises begin building more microservices-based applications and using containers to do so, valid security questions start to appear. Businesses building cloud-native applications need to address security and governance from developer desktop to production stack. They require enterprise-grade identity and access management for an increasingly large volume and variety of objects across their hybrid clouds. And the solution to these challenges must support common standards and interoperability for business agility and choice.

As we talked to customers and partners, we realized that these questions present a real challenge for enterprises in building, deploying, and managing cloud-native applications. Today we introduce Project Lightwave to address those challenges. (Read the news release).

Lightwave is an open source project composed of standards-based, enterprise-grade identity and access management services targeting critical security, governance, and compliance challenges for cloud-native apps. The project’s code is tested and production-ready, having been used in VMware’s solutions to secure distributed environments at scale. Here are a few of its features:

  • Multi-tenancy to simplify governance and compliance across the infrastructure and application stack and across all stages of the application development lifecycle
  • Support for SASL, OAuth, SAML, LDAP v3, Kerberos, X.509, and WS-Trust
  • Extensible authentication and authorization using usernames and passwords, tokens, and PKI for users, computers, containers, and user-defined objects
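
To make the directory and authentication support above a bit more concrete, here is a minimal sketch of binding to an LDAP v3 directory with username-and-password credentials and looking up group membership, using the Python ldap3 library. The hostname, port, and distinguished names are illustrative placeholders, not values from an actual Lightwave deployment.

```python
# Minimal sketch: a simple LDAP v3 bind with username/password credentials.
# The server address, port, and DNs below are hypothetical placeholders.
from ldap3 import Server, Connection, ALL

server = Server("ldaps://lightwave.example.com", port=636, get_info=ALL)

# Bind as a (hypothetical) user in the directory.
conn = Connection(
    server,
    user="cn=jane,cn=Users,dc=example,dc=com",
    password="s3cr3t",
    auto_bind=True,
)

# Look up the user's group memberships to drive an authorization decision.
conn.search(
    search_base="dc=example,dc=com",
    search_filter="(&(objectClass=group)(member=cn=jane,cn=Users,dc=example,dc=com))",
    attributes=["cn"],
)
print([entry.cn.value for entry in conn.entries])
conn.unbind()
```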

Project Lightwave pairs well with Photon OS (which we also announced today), our lightweight Linux OS optimized for cloud-native applications, to provide an enforcement layer for identity and access management via VMware vSphere and vCloud Air.

We are open sourcing Lightwave to encourage collaboration with our customers and partners. We also want to ensure that the resulting innovation in Lightwave is available to end users everywhere, regardless of where they decide to deploy containers. We plan on releasing Lightwave in the coming months. Until then, we invite you to check out this video of Lightwave in action.

About the Author:

Johnny Ferguson is the Product Line Manager for Lightwave and VMware Platform services for security, including single sign-on, authentication, authorization, certificate management, directory services, and lookup services.

 

posted


Today, we’re pleased to announce two new open source projects – Photon OS and Project Lightwave – that will help our customers to securely build, run, and manage their cloud-native applications.

Over the last year, we have taken a close look at delivery vehicles for cloud-native apps, such as containers and the Linux distributions that host them. We have also written a few integrations with popular container solutions and related tooling to help customers get started with running containers in their vSphere environments.

After delivering on those initial projects, we recognized the need to expand our customers’ capabilities for developing and running cloud-native apps. Our customers told us they wanted to take advantage of new technologies such as containers, which let them easily package their applications and scale them in real time, so we aimed to provide easy portability of containerized applications between on-prem and public cloud environments. We also knew that our customers needed an environment that provided consistency from development through production, to smooth integration and deployment and to speed time to market.

To address these challenges, we are introducing Photon OS, a lightweight Linux operating system for cloud-native apps. Photon is optimized for vSphere and vCloud Air, giving our customers an easy way to extend their current VMware platform and run modern, distributed applications using containers.

Photon provides the following benefits:

  • Support for the most popular Linux container formats, including Docker, rkt, and Garden from Pivotal (see the sketch after this list)
  • Minimal footprint (approximately 300MB), to provide an efficient environment for running containers
  • Seamless migration of container workloads from development to production
  • All the security, management, and orchestration benefits already provided by vSphere, giving system administrators operational simplicity
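
As a rough illustration of the Docker support above, the sketch below starts a throwaway container against the Docker engine on a Photon host using the Python Docker SDK. The host address and the unsecured TCP endpoint are assumptions made purely for illustration; in practice you would typically use the local Unix socket on the host or a TLS-protected endpoint.

```python
# Rough sketch: run a container against the Docker engine on a Photon OS host.
# "photon-host.example.com:2375" is a hypothetical, unsecured endpoint used only
# for illustration; a real deployment should use TLS or the local Unix socket.
import docker

client = docker.DockerClient(base_url="tcp://photon-host.example.com:2375")

# Pull a small image and run a short-lived container, capturing its output.
output = client.containers.run("busybox", "echo hello from photon", remove=True)
print(output.decode().strip())
```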

We are also open sourcing Photon OS to encourage widespread contributions and testing from customers, partners, prospects, and the developer community at large. It is available today on GitHub for forking and experimentation; the binary is also available on JFrog Bintray. We’re even making it easy for developers to get started by packaging it for Vagrant and distributing it through Atlas with our friends at HashiCorp.

By offering Photon, we are able to provide integrated support for all aspects of the infrastructure, adding to the leading compute, storage, networking, and management capabilities we deliver today. Customers will benefit from end-to-end testing, compatibility, and interoperability with the rest of our software-defined data center and End-User Computing product portfolios. Through integration between Photon OS and the newly introduced Project Lightwave, customers can enforce security and governance on container workloads, for example by ensuring that only authorized containers are run on authorized hosts by authorized users.

For developers and operations engineers alike, we look forward to your contributions via the VMTN forums and GitHub to help shape the direction of the project, and to collaborating with customers, partners, and developers to optimize containerized applications running in VMware environments. Access Photon OS today, and share your thoughts with us at @cloudnativeapps or on the forums.

Check out a brief video of Photon OS in action.

posted

Today, the CoreOS and vSphere teams are announcing production support for CoreOS Linux on vSphere 5.5, which gives our customers another fantastic option for a container runtime environment that enables streamlined application delivery and portability. It’s a good development for both companies and our mutual customers: CoreOS is beloved by developers building scalable, next-generation applications, while VMware provides trusted infrastructure solutions for the enterprise. Altogether, it’s a great match.

As containers and the ecosystem around them mature, we’re committed to making sure our customers can leverage their current investments to build and scale next-generation applications. As we’ve made clear before, VMware wants to assure businesses that they can continue to run their most critical applications on a single platform. With partners like CoreOS, we can do this without introducing the cumbersome infrastructure silos that can come with implementing new solutions.

Both VMware and CoreOS will be working to extend support of CoreOS to the recently announced vSphere 6; in the meantime, we’ll keep plugging away on more solutions like common application blueprints and Mesos and Kubernetes integrations to make sure we can deliver choice without silos for our customers.

We’ll talk with you soon! (Or sooner – remember, we’re always reachable at @cloudnativeapps)

posted

Today’s an exciting day for VMware as we hold our biggest launch ever for a plethora of new products. While Pat and Ben are unveiling all the new features, I wanted to give the Cloud-Native Apps perspective. As you know, our focus is on helping customers develop next-gen applications and operate them in production. It’s about helping our customers get applications to market faster. VMware’s recent product announcements provide key technology building blocks for realizing this goal.

VMware’s already released vRealize Code Stream – an exciting new product that provides governance and automation of build, test, and deploy pipelines for applications. Ultimately it allows our customers to get newly tested, proven application updates to market as quickly as possible. Last month, VMware followed up with vCloud Air OnDemand, which enables both dev and ops folks to provision infrastructure on the fly for new apps. Now all you need is a credit card to instantly provision cloud infrastructure that’s fully compatible with and connected to your on-prem data center. This compatibility is key: it enables our customers to release new features quickly, and to continue securely operating and managing their updated apps.

And then we have today’s announcements. First is vSphere 6, a big release many years in the making. One of the most exciting features for us in Cloud-Native Apps is the new Instant Clone capability (a tech preview previously known as “Project Fargo”). Instant Clone enables a running VM to be cloned, such that a new VM is created that is identical to the original. This is powerful because you can get a new, running, fully booted VM in less than a second. Moreover, the “forked” VM is tiny from a resource perspective, as it shares all its memory with the original. We’ve blogged before about how lightweight VMs are nowadays, but Instant Clone takes the lightweight-VM concept to a new level. We’ve even started kicking around the term “nano-virtualization”, as Instant Clone VMs are so resource-efficient.

So how will Instant Clone be leveraged for Cloud-Native Apps? A few different ways. First, Instant Clone can be used for traditional use cases where developers provision VMs to run fleets of containers. Instant Clone enables developers to spin up VMs and containers together instantly. But a second and more interesting concept is that Instant Clone makes VMs so lightweight that we can potentially run one container per VM. Each time a container needs to be started, a VM can be instant-cloned to provide the runtime environment for that container. We all know VMs are great because of the hardware-level isolation and security they provide. Using Instant Clone for a one-container-per-VM model means you get VM-level security for containers, yet with instant provisioning and very low overhead. It’s an exciting win-win and a great example of the “containers without compromise” direction we took at VMworld last year.
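
Instant Clone is still a tech preview, so there is no public API to call today. Purely as a hypothetical sketch of what forking a running VM could look like programmatically, here is the general shape such a request could take using pyVmomi-style calls; every hostname, credential, and VM name below is an assumption, and the availability of the instant-clone call itself is part of the assumption.

```python
# Hypothetical sketch only: Instant Clone ("Project Fargo") is a tech preview at
# the time of this post, without a public API. This illustrates the general shape
# a programmatic fork could take using pyVmomi-style calls; connection details,
# VM names, and the availability of InstantClone_Task are assumptions.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret")
content = si.RetrieveContent()

# Locate the running "parent" VM to fork from.
parent_vm = content.searchIndex.FindByDnsName(None, "parent-vm.example.com", True)

# Describe the fork: keep the same placement, give the child a new name.
spec = vim.vm.InstantCloneSpec(name="container-vm-01",
                               location=vim.vm.RelocateSpec())

# Kick off the clone; the child comes up already booted, sharing memory
# pages with its parent until it writes to them.
task = parent_vm.InstantClone_Task(spec=spec)

Disconnect(si)
```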

Finally, today we announced a new version of VMware Integrated OpenStack (VIO).  VIO is powerful because it provides the best of both worlds: the flexibility and API-driven nature of OpenStack with the maturity and power of VMware products.  Customers are looking at OpenStack because of the promise of automating infrastructure provisioning and management, and in the end, of moving faster.  VIO enables all these great values on the proven vSphere infrastructure our customers already have.

Today’s announcements feature tons of innovation, and it’s just the beginning in terms of Cloud-Native Apps.  We’re busy building on the innovations announced today, and working on a bunch of cool updates. Stay tuned.  In the meantime, check out the great products above and please give us any feedback/thoughts you have at microservices@vmware.com or @cloudnativeapps.

posted

While we’ve been cranking away on Docker Machine integrations, BDE extensions for Mesos and Kubernetes, and application blueprints, our colleagues over in vSphere have been hard at work collaborating with another of our technology partners, CoreOS. On the Cloud-Native Apps team, we’ve been excited to work with CoreOS – lightweight operating systems are an important part of building distributed applications with containers, and this latest announcement helps make vSphere the best place to run next-gen apps.

Read more about it over at the vSphere blog, or on our own Knowledge Base.

posted

Happy new year, everyone! VMware’s Cloud-Native Apps team kicked off the new year by hosting a half-day meeting today with a group of industry leaders, with the goal of defining a common, fully open application blueprint definition. We had about 30 technologists representing many different companies: Amazon, Cisco/Noiro, Cloudsoft, CoreOS, Docker, Gigaspaces, Google, HashiCorp, Mesosphere, Microsoft, OpDemand/Deis, Pivotal, Telematica, and VMware. It was great to see so much industry participation!

Big group, small room…

So what exactly were we discussing? Blueprints. “Blueprint” is certainly an overloaded term, and indeed we spent the better part of the first half of the meeting discussing exactly what we meant by it. Thankfully, we were all largely in agreement. We want to focus on applications first and foremost, so the blueprint definition needs to take an application perspective. Modern applications are distributed and composed of many different components. The blueprint specifies all the components of an app, how they’re stitched together, network and storage requirements, other service dependencies, and more. It was widely agreed that blueprints should support many app delivery formats – Docker, Linux containers, VMs, bare metal, and more – so we can give customers the choice to deploy with whatever technology suits their applications best. We also agreed that these blueprint definitions should be infrastructure agnostic: the same blueprint can be used to provision an app on a developer’s laptop, in a staging environment, in an on-premises virtualized datacenter, and in a public cloud. The blueprint designer should be able to define the blueprint such that the requirements within it are properly mapped to whatever infrastructure is chosen. In this way, blueprints can drive the application lifecycle from dev to CI/CD to production, enabling greater business velocity. The group wanted to go even further, proposing that a single blueprint should be deployable by many different tools from different vendors.
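
Nothing has been formalized yet, but purely as an illustration of the kind of information we expect a blueprint to carry – every field name and value below is invented for this sketch – a simple two-tier app might be described along these lines:

```python
# Purely illustrative: a hypothetical, infrastructure-agnostic blueprint for a
# two-tier app. The field names and values are invented for this sketch; the
# real format is exactly what the working group is setting out to define.
blueprint = {
    "name": "guestbook",
    "components": [
        {
            "name": "web",
            "artifact": {"format": "docker", "image": "example/guestbook-web:1.2"},
            "instances": {"min": 2, "max": 10},   # elastic scaling hints
            "depends_on": ["db"],                 # how components are stitched together
            "network": {"expose": [{"port": 80, "public": True}]},
        },
        {
            "name": "db",
            "artifact": {"format": "vm", "image": "example/postgres"},
            "storage": {"size_gb": 100, "persistent": True},
        },
    ],
}
# The same description should be deployable, unchanged, to a developer's laptop,
# an on-premises datacenter, or a public cloud by tools from different vendors.
```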

It was very encouraging to see that there was general agreement on what we think a blueprint should look like. The next challenge is to do something about it. Given the size of the group and the number of individuals and companies involved, many felt that getting something done might prove challenging, so we decided to move quickly, with specific deliverables that are both time- and scope-boxed. The first deliverable is to define a set of use cases that we want this blueprint definition to solve. Use cases are critical to keeping this effort grounded in reality, so that what’s produced will be valuable to customers (after all, that’s what this is really about!). So within the next week we’ll be putting together a use-case doc that is at most three pages and has two use cases clearly defined and agreed upon by the team. Once we have that doc (which we’ll post here – we’d love your thoughts and feedback), we’ll get to work creating a prototype implementation. The plan there is simply to set up a GitHub repo, roll up our sleeves, and get to work. And yes, it’s all going to be open – open standard and open source. This means we’ll be looking for your input every step of the way.

While there are still many details to figure out and many contentious points of design to work through, I’m very excited by the progress we’ve made in this first meeting and the fact that pretty much all of the participants see eye-to-eye on the basics of what we think a blueprint should be.  The goal here is to create a better experience for customers by getting all the players in the industry to agree on a standard.  I think we’ve taken an important first step towards that goal today.  Certainly a great way to start 2015!

posted

At VMware, we believe that VMs and containers work best together to provide an agile, efficient infrastructure for application development and deployment. But no web-scale application runs on just a single VM, or even a small group of VMs. Application deployment and operation requires a well-managed cluster, with the ability to elastically scale to meet service demands. Just as Kit mentioned earlier at DockerCon, we are introducing a solution to better assist DevOps engineers with these challenges.

In September of last year, we introduced Big Data Extensions (BDE) for vSphere to help enterprises deploy, run, and manage clustered workloads like Hadoop. Because BDE was designed to help our customers easily deploy clusters, and plan and automate VM provisioning, we saw an excellent opportunity to extend vSphere BDE to provision and manage Mesos and Kubernetes clusters as well.

While we’ve previously written about our collaboration with Google’s Kubernetes project in this area, today we are adding Mesosphere to our ecosystem using this BDE extension. The latest BDE fling enables vSphere users to stand up a Mesos or Kubernetes cluster in minutes, via a simple GUI or a single spec file. The clusters, as one would expect, can be customized for size and topology, and can scale elastically on demand.
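
To give a feel for the single-spec-file approach (the exact schema is documented with the fling; the node-group roles and field values below are assumptions made for the sake of example), a small Kubernetes cluster definition might look roughly like this before being handed to BDE as JSON:

```python
# Illustrative sketch only: the authoritative BDE/Serengeti spec-file schema is
# described in the fling's documentation. This shows the general idea of
# declaring a cluster's size and topology in one declarative spec file.
import json

cluster_spec = {
    "nodeGroups": [
        {"name": "master", "roles": ["kubernetes_master"],
         "instanceNum": 1, "cpuNum": 2, "memCapacityMB": 4096},
        {"name": "worker", "roles": ["kubernetes_worker"],
         "instanceNum": 5, "cpuNum": 4, "memCapacityMB": 8192},
    ]
}

# Serialize to JSON for submission through the BDE interface.
with open("kubernetes-cluster.json", "w") as f:
    json.dump(cluster_spec, f, indent=2)
```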

BDE with Mesos and Kubernetes integrations can be downloaded from our flings page, where we frequently release new technologies. We look forward to hearing about your experience in using this Mesos/Kubernetes integrated BDE package to deploy, manage, and scale your applications – come talk to us at @cloudnativeapps.

Download:

About the author(s):
Bo Dong is a Senior Product Line Manager on the vSphere team. He manages vSphere Big Data Extensions and VMware OSS Project Serengeti.

 

 

Jesse Hu is a developer for Project Serengeti (also known as Big Data Extensions) in the VMware Beijing R&D Center. He holds an M.S. in Computer Science and is interested in Open Source, Big Data, Cloud Computing, and Linux Containers.

posted

As Kit mentioned in his earlier post, today in Amsterdam, we’re showing off drivers for VMware Fusion, vSphere and vCloud Air that’ll make it easy for our customers to deploy and manage Docker hosts in local and hybrid cloud environments. We’ve seen requests on GitHub to simplify this process, so we’ve put together a quick preview of a solution that we’re happy to talk about in more depth here.

We’ve bundled Docker Machine support for Fusion, vSphere, and vCloud Air into a single package for download, and you can choose between Linux and Mac OS X installations. The only thing you’ll need is an existing installation of VMware Fusion or vSphere, or a vCloud Air subscription – any one of these is sufficient. The drivers can be downloaded from GitHub, complete with all the instructions you need to get started. If you’re interested in following the drivers’ development, you can watch this PR on GitHub.
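
If you prefer to script the setup, here is a minimal sketch that drives the docker-machine binary from Python to create a Docker host with the Fusion driver and print the environment needed to point a local Docker client at it. The machine name is made up, and the vSphere and vCloud Air drivers take additional driver-specific flags covered in the instructions on GitHub.

```python
# Minimal sketch: driving the preview docker-machine drivers from Python.
# The machine name is a placeholder; the vSphere and vCloud Air drivers
# ("vmwarevsphere", "vmwarevcloudair") take extra flags for endpoints and
# credentials, per the instructions bundled with the download.
import subprocess

machine = "docker-dev"  # hypothetical machine name

# Create a Docker host on VMware Fusion using the preview driver.
subprocess.run(["docker-machine", "create", "-d", "vmwarefusion", machine],
               check=True)

# Print the environment variables that point a local Docker client at the new host.
env = subprocess.run(["docker-machine", "env", machine],
                     capture_output=True, text=True, check=True)
print(env.stdout)
```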

If you’re already on any of the above-mentioned VMware platforms, the drivers we’re releasing today provide the fastest way to try out Docker and see whether a container-based delivery model suits your needs. You won’t need to noodle around with certificates, permissions, or a mess of settings just to test out containers. (Side Note: if you don’t have Fusion, and you happen to be at DockerCon Europe this week, come see us! We can get you set up for free.)

Since these drivers are a work in progress – they’re a preview of our projects, not a production-ready solution – we’re seeking your input on how we can make them better. Your opinions are invaluable to our growing efforts here at Cloud-Native Apps – especially in our early days – and help us better understand how we can make your lives easier as both developers and ops engineers. You can reach us anytime by commenting below, emailing us directly at microservices@vmware.com, or on Twitter at @cloudnativeapps.

Download:

About the Author:  Fabio is a seasoned IT professional with over 15 years of experience, and a background as a software developer. He currently sits on the edge between Dev and Ops, helping both reach nirvana.

posted

I’m at DockerCon Europe today, participating in a panel discussion on orchestration. It’s an important subject for us, since orchestration is critical for our customers as they move from experimenting with containers to running containerized applications in production. Through our experiences with Distributed Resource Scheduling (DRS) on vSphere and vRealize Automation, we’ve grown familiar with the challenges of orchestration here at VMware. But containers present new challenges, especially around scale. To that end, several orchestration solutions have been developed by various players in the container space. We want to ensure that these new solutions are as well integrated into VMware platforms as DRS and vRealize Automation are into our core vSphere technology.

Our goal is to simplify delivery of containerized applications to vSphere environments, so I’m excited to announce new integrations in the container orchestration space with Docker, Google, Mesosphere, and Pivotal.  Our focus is on providing a common platform for building, operating, and managing applications at scale, and these integrations help our customers do exactly that. We’ll have more posts to talk about these integrations in greater depth, but I’ll provide some background here.

Docker
Docker has captured the industry’s imagination and customer interest around Linux container technologies, and we announced a partnership with Docker at VMworld earlier this year. Thus far, Docker’s focus has largely been on single-host management, but its ambitions are to enable remote management and orchestration of many Docker hosts. Given that its orchestration technology is still in the early alpha phase, we wanted to focus on its upcoming remote management capability, called Docker Machine. Docker Machine enables developers and operators to start Dockerized applications on remote hosts, and today we’re introducing Docker Machine integrations for VMware Fusion, VMware vSphere, and VMware vCloud Air. This simplifies the process of deploying applications in VMware environments, whether it’s on a dev box via Fusion or into staging or production via vSphere or vCloud Air.

Kubernetes
In June, Google introduced a container orchestration and scheduling system called Kubernetes, which has garnered industry and community interest. We started working with Google earlier this year, and today we’re introducing a way to quickly and easily provision a Kubernetes cluster onto vSphere infrastructure through a tool we developed called Big Data Extensions (BDE). As its name suggests, BDE was developed with a focus on big data workloads such as Hadoop. However, the types of problems we face when provisioning Hadoop frameworks are similar to what we see with container cluster schedulers like Kubernetes, so we used and extended what was already working for many customers. We encourage you to give it a try yourself!

Mesosphere 
We developed a similar integration with Mesos, the open source resource management framework driven by Mesosphere. Mesos has an innovative two-level design that supports many different workload types using the same underlying infrastructure, and it is already present in large production implementations, most notably at Twitter and Airbnb. Given the volume of customer interest in Mesos, we wanted to streamline the deployment of Mesos to vSphere infrastructure. As with Kubernetes, we’re leveraging BDE to simplify the provisioning process.

Pivotal
VMware actually had its own technology in the orchestration space several years ago – Cloud Foundry, an open source PaaS – which we spun out when we created Pivotal. Pivotal now has its own version of Cloud Foundry, called Pivotal CF (PCF). While PCF has container orchestration capabilities, they’ve been deeply enmeshed within PCF. A new project from Pivotal called Diego will rewrite parts of Pivotal Cloud Foundry in a more composable manner, allowing the Diego container orchestration layer to be separated from PCF. We’ll be working closely with Pivotal to enable the same tight integration with vSphere for PCF that we have with the other container orchestration engines listed above. In the meantime, Pivotal is joining us at DockerCon this week to give a demo.

All in all, we’re impressed with the rapid progress in the container space and excited about enabling our customers to more easily and seamlessly provision their container orchestration frameworks and containerized applications onto Fusion, vSphere, and vCloud Air. We’re looking forward to getting your feedback at @cloudnativeapps, and there’s more good stuff yet to come.

 Downloads

About the Author: Kit Colbert drives strategy and product development of third-platform application solutions across the company. Previously, he was CTO of VMware’s End-User Computing business unit, Chief Architect and Principal Engineer for Workspace Portal, and the lead Management Architect for VMware vSphere Operations Suite. At the start of his career, he was the technical lead behind the creation, development, and delivery of the vMotion and Storage vMotion features in vSphere. Kit holds an ScB in Computer Science from Brown University and is recognized as a thought leader on third-platform, end-user computing, and cloud management trends. He speaks regularly at industry conferences, on the main stage at VMworld, and is the Cloud-Native and EUC voice for the VMware Office of the CTO Blog.

posted

September 2014 marked my 11-year anniversary at VMware. When I look back at my time here, I’m inspired by the things we’ve done as a company. We’ve always pushed the envelope on behalf of our customers, and that continues today with my transition from the End-User Computing group at VMware to a new role within the Office of the CTO, focusing on Cloud-Native Apps.

The Rise of Cloud-Native Apps 
There are tectonic shifts happening in the enterprise, particularly in how applications are developed and operated (e.g., DevOps), how they’re architected (e.g., microservices and 12-factor apps), and how they’re deployed (e.g., Docker and containers). We call these applications “cloud-native” as they’re designed for the mobile-cloud era. Naturally, these cloud-native apps impact how IT decisions are made and which factors are considered in architecting a datacenter infrastructure. The need to build and deploy apps quickly means developers and operations engineers are working more closely than ever, and that developers are increasingly influential in designing the enterprise stack. We’ve built some great products and partnerships here at VMware to address these trends, but we also recognize the need for more comprehensive solutions that help our customers thrive in the mobile-cloud era.

My focus will be to ensure that we capitalize on our considerable experience delivering proven solutions to our more than 500,000 customers, and to provide them with solutions tailored to this new environment. VMware’s goal has long been to help businesses build, deploy, and operate their applications, even in the most stringent production environments. The rise of cloud-native applications presents an opportunity for us to further that goal by providing solutions that span from the developer desktop, through the DevOps lifecycle, and to the production stack where these apps are deployed and operated.

While this post officially kicks off our Cloud-Native blog, we’ve been thinking about and blogging on these topics for some time:

Stay Tuned for Cloud-Native Apps News, Hacks and More 
My first 11 years here were great. But in my mind, the real fun is just beginning! As always, there’s a lot happening here at VMware, and we’re excited to share this news with you. Please follow this blog for our latest announcements, updates, thoughts, and random hacks. We look forward to getting to know you better here on the Cloud-Native Apps blog or on Twitter!

About the Author: Kit Colbert drives strategy and product development of third-platform application solutions across the company. Previously, he was CTO of VMware’s End-User Computing business unit, Chief Architect and Principal Engineer for Workspace Portal, and the lead Management Architect for VMware vSphere Operations Suite. At the start of his career, he was the technical lead behind the creation, development, and delivery of the vMotion and Storage vMotion features in vSphere. Kit holds an ScB in Computer Science from Brown University and is recognized as a thought leader on third-platform, end-user computing, and cloud management trends. He speaks regularly at industry conferences, on the main stage at VMworld, and is the Cloud-Native and EUC voice for the VMware Office of the CTO Blog.