
Tag Archives: Reg Lo

Wed @ VMworld – 2 Wildly Different Ways to Discuss Next Gen IT Strategy

By Reg Lo

The theme of VMworld 2016 is be_Tomorrow.  As we’ve talked about in many previous blog posts, it’s no secret that the demands on IT are changing and that IT leaders need to evolve their strategies or risk the decline of their company’s market position and the loss of relevance for internal IT.

For IT leaders attending VMworld, I hope to offer you a couple of unconventional ways of fostering discussion with your peers around the pressing challenges you’re facing today.

Unpanel:  How I Survived the DevOps Transition

Wednesday, August 31st – 12:30 PM – 1:30 PM

If you’ve ever joined an Unpanel before, you know you’re in for some lively discussion – and to be prepared at any moment to jump up on stage!

Join me and my colleagues, Ed Hoppitt of Battlebots (and VMware) fame and Tom Hite who leads VMware’s DevOps and Open Cloud Services team, as we moderate a dynamic exchange between IT practitioners and their development counterparts. You will hear about the dos, don’ts, and gotchas from both perspectives.

We will invite you to participate with your opinions and insights, and you might become part of the panel.

Add session DEVOP9093 to your agenda

Experience the Business Impact of IT Innovation & Transformation in this Live Interactive Simulation

Wednesday, August 31st – 3:30 PM – 4:30 PM

Join your peers in IT leadership for a live interactive simulation where you get to experiment on what series of IT initiatives will lead to the greatest impact on business revenue and IT costs. This is a unique experiential learning session.

Using a software-based simulation platform, I will team up with my colleague, Andy Troup of VMware Operations Transformation Services, to present you with a variety of IT innovation project options, representing a wide spectrum from developing cloud capabilities to advanced micro-segmentation. Acting as a company with a set budget for operating expenses and innovation, your team will choose which projects to focus on and then see the results of your selections.

Will revenue increase because you were able to speed time to market? Will your operating expenses increase or decrease? Will you experience set-backs if you focus on one area but neglect others?

Test your IT strategy theories, participate in lively discussions about today’s options in IT, and walk away with tips for how to build a roadmap for innovation that will work for your organization.

Add session SDDC9971 to your agenda

I hope to see you at both of these sessions at the end of this month!

Download a full agenda of VMworld breakout sessions that will help IT leaders build a strategy for the digital era.

=======

Reg Lo is the Americas Director of VMware Accelerate Advisory Services and is based in California.

The new culture of IT echoes the industry’s earliest days.

In many ways, it’s back to the future – but we also need some things to change.

By Reg Lo

To get a sense of what’s happening in IT today, it can help to take a long-term perspective. Think back to the earliest days of computing, for example, and you can see that we’ve almost come full circle – a reality that underscores the major cultural shift that the business is undergoing right now.

When enterprise computers first became commercially available, companies bought their hardware from someone else but wrote their own software, simply because there wasn’t much packaged software out there to buy.

Then by the ’90s or so, it became the norm to purchase configurable software for the business to use. That worked well for a while, as companies in many different industries deployed similar software, e.g. ERP, CRM, etc.

Today we expect software to do a lot more. Moreover, we expect software to differentiate a business from its competitors – and that’s returning IT organizations to their roots as software developers. After all, the ability to create digital enterprise innovation requires software development skills. And so we’ve come full circle from a software development perspective.

The Expanding Reach of IT

Now add another historic change that we’re seeing: IT departments used to just provide services for their business, their internal customer, but the advent of the fully digital enterprise is expanding who gets touched by IT. IT departments now need to reach all the way to the customer of the business, the consumer. When we talk about omnichannel marketing, for example, we’re expecting IT to help maintain connections with consumers over web, phone, chat, social media, and more. The same goes for the Internet of Things, where it’s not so much the consumer as a remote device or sensor out in the field somewhere that IT needs to be worried about.

Both broad trends have changed the scope of IT and both are making IT much more visible. More importantly, they mean that IT is now driving revenue directly. If it’s successful, IT makes the business highly successful. But if IT fails, it will directly impede the business revenue flow.

Becoming Agile Innovators

That brings me to my last point. Here’s what hasn’t changed from the past: for the last 30 years or so, the mantra in IT cultures has been “Bigger is Better.” Software Development and Release processes got increasingly bureaucratic and terribly slow (think of those epic waits for the next ERP release). The standard mind-set was to package multiple changes into a single release that they’d roll out every six months or so, if they were lucky.

But that culture is also something that we need to be moving away from, precisely because the relationship between IT and the business it serves has changed. Businesses used to perceive IT as just a cost center that should be squeezed for more and more savings. But when IT touches the end-customer experience directly, business needs IT to be both cheaper and faster – to support and enable the kinds of innovation that will keep the business one step ahead.

We now have the technologies (cloud computing, cloud-native applications) and methodologies (agile development, DevOps) to make smaller, much more frequent, incremental releases that are simpler, less likely to be faulty, and easy to roll back if anything goes wrong.

What we’re still lacking – which I still see when I’m out in the field – is the widespread cultural change required for it to happen. Most importantly, that means adopting what I would call a DevOps mindset across the entire IT organization. At its essence, this mindset views the entire work of IT through a software lens. It makes everything, including infrastructure, code.

For IT long-timers, in many ways that’s simply returning software to the centrality it once enjoyed. But if it takes us back to the early days of computing, it also points us to what we must change if we’re to succeed in a future that’s entirely new.

=======

Reg Lo is the Director of VMware Accelerate Advisory Services and is based in San Diego, CA.  You can connect with him on LinkedIn.

Transforming IT into a Cloud Service Provider

By Reg Lo

Until recently, IT departments thought that all they needed to do was provide a self-service portal where app dev could provision Linux or Windows VMs, and they would have a private cloud comparable to the public cloud.

Today, to become a cloud service provider, IT must not only embrace the public cloud in a service broker model but also provide a broader range of cloud services.  This 5-minute webinar describes the future IT operating model as IT departments transform into cloud service providers.

Many IT organizations started their cloud journey by creating a new, separate cloud team to implement a greenfield private cloud.  Automation and proactive monitoring using a cloud management platform were key to the success of their private cloud.  By utilizing VMware’s vRealize Cloud Management Platform, IT could easily expand into the hybrid cloud, provisioning workloads to vCloud Air or other public clouds from a single interface, effectively creating “one cloud” for the business to consume and “one cloud” for IT to manage.

However, the teams managing the brownfield weren’t standing still.  They too wanted to improve the service they were providing the business, and they too wanted to become more efficient, so they also invested in automation.  Without a coherent strategy, the brownfield and greenfield teams took separate forks down the automation path, confusing the business about which services it should be consuming.  We started this journey by creating a separate cloud team.  However, it may be time to rethink the boundaries of the private cloud and bring greenfield and brownfield together to provide consistency in the way we approach automation.

In order to be immediately productive, app dev teams are looking for more than infrastructure-as-a-service.  They want platform-as-a-service.  These might be second-generation platforms such as database-as-a-service (Oracle, MSSQL, MySQL, etc.) or middleware-as-a-service (such as webMethods).  Or they need third-generation platforms based on unstructured PaaS like containers or structured PaaS like Cloud Foundry.  The terms first, second and third generation map to the mainframe (1st generation), distributed computing (2nd generation), and cloud-native applications (3rd generation).

Multiple cloud services can be bundled together to create environment-as-a-service, for example a LAMP stack: Linux, Apache, MySQL and PHP (or Python).  These multi-VM application blueprints let entire environments be provisioned with a click of a button.
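As a rough sketch of what such a blueprint could look like when modeled as plain data (the VM roles, sizes, and the provision() stub below are illustrative assumptions, not an actual cloud management platform's blueprint format):

    # Illustrative only: a multi-VM "environment-as-a-service" blueprint modeled
    # as plain data. Roles, sizes, and the provision() stub are hypothetical and
    # do not represent any specific cloud management platform's blueprint format.
    LAMP_BLUEPRINT = {
        "name": "lamp-stack",
        "version": "1.2.0",
        "vms": [
            {"role": "web", "os": "linux", "software": ["apache", "php"], "cpu": 2, "ram_gb": 4},
            {"role": "db",  "os": "linux", "software": ["mysql"],         "cpu": 4, "ram_gb": 16},
        ],
    }

    def provision(blueprint):
        """Pretend to provision every VM in the blueprint; one request yields a whole environment."""
        for vm in blueprint["vms"]:
            vm_id = f'{blueprint["name"]}-{vm["role"]}'
            print(f'provisioning {vm_id}: {vm["cpu"]} vCPU / {vm["ram_gb"]} GB, {vm["software"]}')

    provision(LAMP_BLUEPRINT)  # the "click of a button"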

A lot of emphasis has been placed on accessing these cloud services through a self-service portal.  However, DevOps best practices are moving towards infrastructure as code.  In order to support developer-defined infrastructure, IT organizations must also provide an API to their cloud.  Infrastructure-as-code lets you version the infrastructure scripts together with the application source code, ultimately enabling the same deployment process in every environment (dev, test, stage and prod) and improving the deployment success rate.
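A minimal sketch of that idea, assuming the infrastructure definition lives in the same repository as the application (the spec contents, artifact name, and helper functions are invented for illustration):

    # Sketch only: the same deploy() runs unchanged in every environment, and the
    # infrastructure spec is committed alongside the application source, so app and
    # infrastructure versions always travel together. All names are hypothetical.
    INFRA_SPEC = {"version": "2.3.1", "vms": 3, "network": "app-net"}  # versioned with the app code
    APP_ARTIFACT = "app-1.4.2.tar.gz"                                  # built from the same commit

    def apply_infrastructure(spec, environment):
        print(f'[{environment}] applying infrastructure spec {spec["version"]}')

    def deploy_application(artifact, environment):
        print(f'[{environment}] deploying {artifact}')

    def deploy(environment):
        apply_infrastructure(INFRA_SPEC, environment)   # provision/update infrastructure first
        deploy_application(APP_ARTIFACT, environment)   # then the matching application build

    for env in ("dev", "test", "stage", "prod"):
        deploy(env)  # identical process in every environment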

Many companies are piloting DevOps with one or two application pipelines.  However, in order to scale, DevOps best practices must be shared across multiple app dev teams.  App dev teams are typically not familiar with architecting infrastructure or the tools that automate infrastructure provisioning.  Hence, a DevOps enablement team is useful for educating the app dev teams on DevOps best practices and providing the DevOps automation expertise.  This team can also provide feedback to the cloud team on where to expand cloud services.

This IT operating model addresses Gartner’s bimodal IT approach.  Mode 1 is traditional, sequential and used for systems of record.  Mode 2 is agile, non-linear, and used for systems of engagement.  Mode 1 is characterized by long cycle times measured in months whereas mode 2 has shorter cycle times measured in days and weeks.

It is important to note that the business needs both modes to exist; it’s not one or the other.  Likewise, the business needs both interfaces to the cloud: a self-service portal and an API.

What does this mean to you?  IT leaders must be able to articulate a clear picture of a future state that encompasses both mode 1 and mode 2 and that leverages both a self-service portal and an API to the organization’s cloud services.  IT leaders need a roadmap to transform their organization into a cloud service provider that traverses the hybrid cloud.  The biggest challenge in the transformation is changing people (the way they think, the culture) and processes (the way they work).  VMware can help with more than the technology: VMware’s Accelerate™ Advisory Services can help you address the people and process transformation.

 


Reg Lo is the Director of VMware Accelerate Advisory Services and is based in San Diego, CA.  You can connect with him on LinkedIn.

Increase the Speed of IT with DevOps and PaaS

By Reg Lo

How do you increase the speed of IT?

In this 5-minute video whiteboard session, I describe two key strategies for making IT more agile and improving time to market.  For your convenience, a transcript of the video is included below.

Two key strategies for increasing the speed of IT are:

  1. Deliver more applications using DevOps. Traditional waterfall methods are too slow.  Agile methodologies are an improvement, but without accelerating both infrastructure provisioning and application development, IT is still not responsive enough for the business.  Today, many organizations are experimenting with DevOps, but to really move the needle, organizations must adopt DevOps at scale.
  2. Deliver new Platform-as-a-Service faster. Infrastructure-as-a-Service is the bare minimum for IT departments to remain relevant to the business.  If IT cannot provide self-service on-demand IaaS, the business will go directly to the public cloud.  To add more value to the IaaS baseline and accelerate application delivery, IT must deliver application platforms in a cloud model, i.e. self-service, on-demand, with elastic capacity.

Let’s start with the second key strategy: delivering new PaaS services faster.  PaaS services include second-generation platforms (database-as-a-service, application server-as-a-service, web server-as-a-service) as well as third-generation platforms for cloud-native applications such as Hadoop-as-a-service, Docker-as-a-service or Cloud Foundry-as-a-service.

In order to launch these new PaaS services faster, IT must have a well-defined service lifecycle that it can use to quickly and repeatably create these new services.  What are the activities and what artifacts must be created in order to analyze, design, implement, operate and improve a service?
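One way to make that lifecycle repeatable is to capture it as a simple template that every new service must fill in. The sketch below uses the phases named above; the example activities and artifacts are illustrative, not a prescribed standard:

    # Illustrative service-lifecycle template. Phase names follow the post;
    # the activities and artifacts listed are examples only.
    SERVICE_LIFECYCLE = {
        "analyze":   {"activities": ["capture demand", "set service level targets"],
                      "artifacts":  ["service definition"]},
        "design":    {"activities": ["design the blueprint", "size resource pools"],
                      "artifacts":  ["blueprint", "cost model"]},
        "implement": {"activities": ["automate provisioning", "publish to the catalog"],
                      "artifacts":  ["catalog entry", "runbooks"]},
        "operate":   {"activities": ["monitor health", "manage capacity"],
                      "artifacts":  ["service reports"]},
        "improve":   {"activities": ["review consumption", "enhance or retire"],
                      "artifacts":  ["improvement backlog"]},
    }

    def checklist(service_name):
        """Print the lifecycle checklist a new X-as-a-service offering must complete."""
        for phase, work in SERVICE_LIFECYCLE.items():
            print(f'{service_name} / {phase}: {", ".join(work["artifacts"])}')

    checklist("database-as-a-service")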

Once you have defined the service lifecycle, you can launch parallel teams to create the new service: platform-as-a-service, database-as-a-service, or X-as-a-service where X can be anything.  Each service can be requested via the self-service catalog, delivered on demand, and treated like “code” so it can be versioned with the application build.

Each service needs a single point of accountability: the Service Owner.  The service owner is responsible for the full lifecycle of the service.  They are part of the Cloud Services Team, also called the Cloud Tenant Operations Team.  The Cloud Services Team also manages the service catalog, provides the capability to automate provisioning, and manages the operational health of the services.

The Cloud Services Team is underpinned by the Cloud Infrastructure Team. This team combines cross-functional expertise from compute, storage and network to create the profiles or resource pools that the cloud services are built on.  The Cloud Infrastructure Team is also responsible for capacity management and security management.  The team not only manages the internal private cloud, but also the enterprise’s consumption of the public cloud, transforming IT into a service broker.

Now that we’ve described the new cloud operating model, let’s return to the first key strategy for increasing the speed of IT: deliver more applications using DevOps.  Many organizations have tasked one or two application teams to pilot DevOps practices such as continuous integration and continuous deployment.  This is a good starting point; however, to expand DevOps at scale so IT can have a measurable time-to-market impact for the business, we need to make the adoption easier and more systematic.

The DevOps enablement team is a shared services team that provides consulting services to the other app dev teams; contains the automation expertise so that app dev teams do not need to become experts in Puppet, Chef, or VMware CodeStream; and drives a consistent approach across all app dev teams to avoid fragmented DevOps adoption.

Remember how we talked about expanding PaaS?  With self-service, on-demand PaaS provisioning, app dev teams can build environment-as-a-service: an application blueprint that contains multiple VMs (the database server, application server, web server, etc.).  Environment-as-a-service lets app dev teams treat infrastructure like code, helping them adopt continuous deployment best practices by linking software versions to infrastructure versions.
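As a sketch of that linkage (field names and version numbers are invented for illustration), a release manifest can pin an application build to the exact environment blueprint version it was validated against:

    # Illustrative release manifest pairing an application version with the
    # environment blueprint version it was tested on. All values are hypothetical.
    RELEASE_MANIFEST = {
        "application": {"name": "order-service", "version": "3.7.0"},
        "environment_blueprint": {"name": "lamp-stack", "version": "1.2.0"},
    }

    def promote(manifest, stage):
        """Promote the paired application + environment versions to the given stage."""
        app, env = manifest["application"], manifest["environment_blueprint"]
        print(f'{stage}: deploy {app["name"]} {app["version"]} onto blueprint {env["name"]} {env["version"]}')

    for stage in ("test", "stage", "prod"):
        promote(RELEASE_MANIFEST, stage)  # the same pairing moves through every stage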

By delivering more applications using DevOps and by delivering new PaaS services faster, you can increase the speed of IT.


Reg Lo is the Director of VMware Accelerate Advisory Services and is based in San Diego, CA.  You can connect with him on LinkedIn.

Software Defined Networking for IT Leaders – 5 Steps to Getting Started

By Reg Lo

In Part 1 of “Software Defined Networking (SDN) for IT Leaders,” micro-segmentation was described as one of the most popular use cases for SDN.  With the increased focus on security, due to the growing number of brand-damaging cyber attacks, micro-segmentation provides a way to easily and cost-effectively firewall each application, preventing attackers from gaining easy access across your data center once they penetrate the perimeter defense.

This article describes how to get started with micro-segmentation. Micro-segmentation is a great place to start with SDN because you don’t need to make any changes to the existing physical network; it is a layer of protection that sits on top of the existing network.  You can also approach micro-segmentation incrementally, i.e. protect a few critical applications at a time and avoid boiling the ocean.  It’s a straightforward way to dip your toe into SDN.

5 Simple Steps to Get Started:

  1. Identify the top 10 critical apps. These applications may contain confidential information, may need to meet regulatory compliance requirements, or may be mission critical to the business.
  2. Identify the location of these apps in the data center. For example, what are the VM names, or are the app servers all connected to the same virtual switch?
  3. Create a security group for each app. You can also define generic groups like “all web servers” and set up firewall rules such as no communication between web servers.
  4. Using SDN, define a firewall rule for each security group that allows any-to-any traffic. The purpose of this rule is to trigger logging of all network traffic to observe the normal patterns of activity.  At this point, we are not restricting any network communications.
  5. Inspect the logs and define the security policy. The amount of time that needs to elapse before inspecting the logs is application-dependent.  Some applications will expose all their various network connections within 24 hours.  Other applications, like financial apps, may only expose specific system integrations during end-of-quarter processing.  Once you identify the normal network traffic patterns, you can update the any-to-any firewall rule to only allow legitimate connections.

Once you have completed these 5 steps, repeat them for the next 10 most critical apps, incrementally working your way through the data center.
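To illustrate steps 4 and 5, here is a minimal sketch that turns observed traffic (logged while the any-to-any rule was in place) into an allow-list with a default deny. The flow records and rule format are invented for illustration and are not a specific SDN product's API:

    # Sketch of steps 4 and 5: collapse the flows observed during the logging
    # period into allow rules, then finish with a default deny. Hypothetical data.
    observed_flows = [
        {"src": "web-01", "dst": "app-01", "port": 8443},
        {"src": "app-01", "dst": "db-01",  "port": 3306},
        {"src": "web-01", "dst": "app-01", "port": 8443},  # duplicate observations collapse below
    ]

    def build_policy(flows):
        """Return allow rules for only the connections actually seen, plus a final deny-all."""
        seen = sorted({(f["src"], f["dst"], f["port"]) for f in flows})
        rules = [{"action": "allow", "src": s, "dst": d, "port": p} for s, d, p in seen]
        rules.append({"action": "deny", "src": "any", "dst": "any", "port": "any"})
        return rules

    for rule in build_policy(observed_flows):
        print(rule)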

In Part 3 of “Software Defined Networking for IT Leaders,” we will discuss the other popular starting point or use case: automating network provisioning to improve time-to-market and reduce costs.


Reg Lo is the Director of VMware Accelerate Advisory Services and is based in San Diego, CA.  You can connect with him on LinkedIn.

Understanding Software-Defined Networking for IT Leaders – Part 1

By Reg Lo

Software-defined networking (SDN) is revolutionizing the datacenter much like server virtualization has done. It is important for IT leaders to understand the basic concepts of SDN and the value of the technology: security, agility through automation and cost-savings. This blog post explores some of the security benefits of SDN using a simple analogy.


Courtesy of the game DomiNations, Nexon M, Inc.

My kids are playing DomiNations – a strategy game where you lead your nation from the Stone Age to the Space Age. I recruited their help to illustrate how SDN improves security. In this analogy, the city is the datacenter; walls are the firewall (defense against attackers/hackers), and the workloads are the people/workers.

The traditional way of defending a city is to create walls around the city. In the same manner, we create a perimeter defense around our datacenter using firewalls. However, imagine there is a farm outside the walls of the city. Workers need to leave the protection of the city walls to work in the farm. This leaves them vulnerable to attack. In the same way, as workloads or virtual machines are provisioned in public or hybrid clouds outside the datacenter firewalls, what is protecting these workloads from attack?


Courtesy of the game DomiNations, Nexon M, Inc.

In an ideal world, let’s say my kids have magical powers in the game and they enchant the city walls so they can expand and contract to continuously protect the workers. When a worker goes to the farm, the walls automatically extend to include the worker in the farm. When they return to the city, the walls return to normal. SDN is like magic for your firewalls. Instead of your firewalls being defined by physical devices, a software-defined firewall can automatically expand into the public cloud (or the part of the hybrid cloud that is outside of your datacenter) to continuously protect your workloads.

This ability to easily and automatically configure your firewalls provides another benefit: micro-segmentation. As mentioned before, in a traditional city, the city walls provide a perimeter defense. Once an attacker breaches the wall, they have free rein to plunder the city. Traditional datacenters have a similar vulnerability. Once a hacker gets through the firewall, they have free rein to expand their malicious activity from one server to the next.


Courtesy of the game DomiNations, Nexon M, Inc.

Micro-segmentation of the network is like having city walls around each building. If an attacker breaches the outer perimeter, they can only destroy one building before having to re-start the expensive endeavor of attacking the next line of defense. In a similar fashion, if a hacker penetrates one application environment, micro-segmentation prevents them from gaining access to another application environment.

Software-defined networking can improve information security. Every few months there is a widely publicized security breach that damages a company’s brand. CIOs and other IT leaders have lost their jobs because of these breaches. SDN is a key technology to protect your company and your career.

In Part 2 and 3 of this series, “Understanding Software-Defined Networking for IT Leaders,” we’ll explore how SDN increases agility and drives cost savings.


Reg Lo is the Director of VMware Accelerate Advisory Services and is based in San Diego, CA.  You can connect with him on LinkedIn.

What is the Difference Between Being Project-Oriented vs. Service-Oriented?

By Reg Lo

Today, IT is project-oriented. IT uses “projects” as the construct for managing work. These projects frequently begin their lifecycle as an endeavor chartered to implement a new application or complete a major enhancement or upgrade to an existing application. Application projects trigger work for the infrastructure/engineering teams, e.g., provisioning new environments including compute, storage, network and security, with each project having its own discrete set of infrastructure provisioning activities.

Project-Oriented vs. Service-Oriented

This project-oriented approach results in many challenges:

  • There is a tendency to custom-build each environment for that specific application. The lack of standardization across the infrastructure for each application results in higher operational and support costs.
  • Project teams will over-provision infrastructure because they believe they have “one shot” at provisioning. Contrast this to a cloud computing mindset where capacity is elastic, i.e., you procure just enough capacity for your immediate needs and can easily add more capacity as the application grows.
  • The provisioned infrastructure is tied to the project or application. Virtualization allows IT to free-up unused capacity and utilize it for other purposes, reducing the overall IT cost for the organization. However, the project or application team may feel like they “own” their infrastructure since it was funded by their project or for their application, so they are reluctant to “give up” the unused capacity. They do not have faith in the elasticity of the cloud, i.e., they do not believe that when they need more capacity, they can instantly get it; so they hoard capacity.
  • A project orientation makes an organization susceptible to understating the operations cost in the project business case.
  • It makes it difficult to compare internal costs with public or hybrid cloud alternatives – the latter being service-oriented costs.

When IT adopts a service-oriented mindset, it defines, designs and implements the service outside the construct of a specific application project. The service has its own lifecycle, separate from the application project lifecycles. Projects consume the standardized, pre-packaged service. While the service might have options, IT moves away from custom-building each application environment. IT needs to define its Service Lifecycle, just as it has defined its Project Lifecycle. You can use VMware’s Service Lifecycle, illustrated below, as a starting point.

The Service Lifecycle

The Service-Oriented Mindset

This service-oriented mindset not only needs to be adopted by the infrastructure/operations team, but also by the application teams. In a service-oriented world, application teams no longer “own” the specific infrastructure for their application, e.g., this specific set of virtual machines with a given number of CPUs, RAM, storage, etc. Instead, they consume a service at a given service level, i.e., at a given level of availability, with a given level of performance, etc. With this mindset, IT can provide elastic capacity (adding capacity and repurposing unused capacity) without causing friction with the application teams.

The transformation from a project-orientation to a service-orientation is a critical part of becoming a cloud-enabled strategic service provider to the business.  When IT provides end-to-end services to the business, the way the business and IT engage is simplified, services are provisioned faster and the overall cost of IT is reduced.


Reg Lo is the Director of VMware Accelerate Advisory Services and is based in San Diego, CA.  You can connect with him on LinkedIn.

IT-as-a-Service (ITaaS): Transforming How We Manage IT

By Reg Lo

As enterprises make their way along the journey to IT-as-a-Service, CIOs and technology leaders must consider an overhaul of how they run IT – from technology enablement to the operating model itself. A phased approach to technology enablement, designed as a maturity model, helps provide structure to the journey.  Breaking down traditional IT silos leads to a more functional, service-focused operating model.

Based on years of customer experience, we have developed a three-phased path to ITaaS, as seen in Figure 1.  In Phase I, when IT was seen as a cost center, virtualization created dramatic CapEx savings, resulting in more efficient IT production.  In Phase II, automation results in faster business production, and implementing management tools improves quality of service and reliability.  And in Phase III, IT becomes a service broker, reducing OpEx and increasing agility.  In this phase, IT uses an “IT-as-a-Service” approach, focusing on the end-to-end services that support the business mission, and leveraging technologies and sourcing options that make providing those services reliable, agile, flexible and cost-effective.


Figure 1. Enabling Technologies for IT-as-a-Service (ITaaS)

It makes sense, then, that the transformation into an IT-as-a-Service approach requires more than just the enabling technologies.  IT needs a new operating model to be successful – a new way of thinking and organizing people and process.

Today, many IT organizations are process-oriented.  Their key IT Service Management (ITSM) processes are managed, process owners are identified, and their processes are enabled through an integrated ITSM tool.  But a process-oriented approach hasn’t changed how they think about managing the technology silos.


Figure 2. The Evolution of how we Manage IT

Mature IT organizations realize that focusing on managing “end-to-end services” helps them be more customer focused than managing discrete “technology silos.”  A service-oriented approach enables IT to link the customer outcome to IT services, to applications, and to the infrastructure.  These organizations are defining their services, publishing their service catalog, and establishing service owners.

Many IT leaders also talk about “running IT like a business.”  This brings a higher level of maturity to IT, with the same fiscal discipline required to manage a traditional business.  This entails economic transparency or even an economic transaction where the business pays IT based on service consumption and IT, in return, commits to delivering a certain service level.  In this model, business relationship managers act much like account managers in a commercial IT service provider, i.e. building a strategic relationship with the business.
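To make "paying based on service consumption" concrete, here is a toy chargeback calculation; the rate card and usage figures are invented purely for illustration:

    # Toy consumption-based chargeback. Rates and quantities are invented.
    RATE_CARD = {"vm_hour": 0.06, "db_service_month": 450.00, "storage_gb_month": 0.10}
    usage     = {"vm_hour": 5000, "db_service_month": 3,      "storage_gb_month": 2000}

    invoice = {service: qty * RATE_CARD[service] for service, qty in usage.items()}
    for service, charge in invoice.items():
        print(f'{service}: ${charge:,.2f}')
    print(f'total: ${sum(invoice.values()):,.2f}')   # what the business pays IT this period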

This transformation from process-oriented, to service-oriented, to running IT like a business results in a new IT-as-a-Service (ITaaS) operating model.  Another way of looking at this transformation is shown in Figure 3.  Note that the progression is not necessarily sequential, e.g. an IT organization may work on elements of becoming service-oriented and running IT like a business simultaneously.


Figure 3. ITaaS Operating Model

Many individuals might recognize elements of service management in the ITaaS operating model.  While the model builds on service management best practices, it emphasizes service characteristics that are associated with cloud-based XaaS services (where XaaS includes Infrastructure-as-a-Service [IaaS], Platform-as-a-Service [PaaS], and Software-as-a-Service [SaaS]).  XaaS offerings are characterized by actively managed quality of service, rapid provisioning (typically through automation), the ability to pay for what you use, elastic capacity, and high availability and resiliency.  While service management encourages these characteristics, achieving them across all IT services is a goal of ITaaS.


Reg Lo is the Director of the Service Management practice for VMware Accelerate Advisory Services.