Admit it, you’ve got them: legacy .NET applications in production supporting the business. How many times have you been asked the hard question of how you’re going to run those apps in the cloud?

It seems like an impossible task: taking something that was developed years ago, and has been running on the same dedicated infrastructure ever since, and moving it to the cloud. Especially an application with a laundry list of dependencies that includes, at the very least, customized IIS, Windows registry settings, the Windows event log, custom user accounts (with permissions), and a dedicated hard disk for storing docs and logs. No way you could ever run an application like that in a container, in the cloud, and enjoy things like high availability or dynamic routing, right? And never would you imagine seeing dedicated infrastructure vanish because you no longer need to provision for that one peak month a year.

In fact, using VMware Tanzu, both are entirely possible. In this post, I’ll explain how.

It’s all about the environment

Let’s consider two scenarios when moving legacy ASP.NET applications. In the first one, you have access to the old code, and so have the option to get in and change things. Even the sound of this makes me cringe; you’re just asking for bugs and all kinds of other creepy crawlies. In the second scenario, you can’t compile the application. Either the source code has been lost (perhaps in some tragic `format c:` incident), or compiling and deploying is simply not an option. The bottom line is that all you have to work with are a bunch of DLLs and a web.config.

While it’s not obvious, these two scenarios have something in common: something that is highly configurable yet always a known quantity. Something that can be transformed into an immutable artifact (the cloud likes immutable things) and can be software-defined to look quite modern. Can you guess what it is?

It’s the environment. Hidden behind years of upgrades, patches, and IIS configuration are the components that make up the application’s environment. All legacy ASP.NET runs on Windows, with a known minimum .NET version and IIS version installed. The customizations start from there. So when we want to containerize legacy applications, we don’t necessarily need a programmer. It can simply be a matter of creating a container definition that starts with these known versions.

Your first modernized .NET

Say you have an ASP.NET 3.5 application running in IIS that relies on a custom local account. We can create that exact environment using Microsoft’s base container images with the following Dockerfile.

FROM mcr.microsoft.com/dotnet/framework/aspnet:3.5

SHELL ["powershell", "-command"]

# Create a local user, create a custom group, and add the user to it
RUN New-LocalUser -Name "customUser" -Password (ConvertTo-SecureString -AsPlainText "ThisIsNotSecure" -Force) -FullName "Custom.User" -Description "Local Custom Account"
RUN New-LocalGroup -Name "myCustomGroup" -Description "Custom Local Group"
RUN Add-LocalGroupMember -Group myCustomGroup -Member customUser -Verbose

# Serve the application from IIS's default site folder
WORKDIR /inetpub/wwwroot
COPY wwwroot/* .

Hidden behind that FROM statement is a ton of work being done just for us. If we drilled down into the definition of that image, we would find an even deeper base image that downloads and installs a specific .NET runtime version and patches it with the latest security updates. Thank you, Microsoft!

Next we run PowerShell commands to create a new user, create a group, and add the user to that group. While the result isn’t terribly useful on its own, your mind should be racing with possibilities. Indeed, given that today’s PowerShell can do pretty much everything in Windows, and the container builds as the ContainerAdministrator account, the only limitation here is your creativity.
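For example, you could go one step further and run the site’s application pool as that new account. Here’s a minimal sketch using the WebAdministration module that ships with the IIS image (the app pool name and the identityType value follow the common IIS pattern, but this line is my illustration, not part of the original example):

# Hypothetical follow-up: run the default app pool as customUser (identityType 3 = SpecificUser)
RUN Import-Module WebAdministration; Set-ItemProperty 'IIS:\AppPools\DefaultAppPool' -Name processModel -Value @{userName='customUser'; password='ThisIsNotSecure'; identityType=3}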

We then set the working directory to good ol’ wwwroot and copy in the application DLLs. Now, when the container image is run, IIS will start up and serve anything from the default folder. The application won’t know the difference between running in a container and running on a VM, so assuming we got the container definition right, we’ve just modernized legacy .NET!
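If you want to try it locally, and assuming you have Docker running in Windows containers mode, building and running the image is just two commands (the image name and host port here are placeholders):

docker build -t legacy-aspnet:local .
docker run -d -p 8080:80 legacy-aspnet:local

Then browse to http://localhost:8080, and IIS should answer from inside the container.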

What about the operating system?

As you may have noticed, we aren’t talking about Windows Server very much. That’s because, as far as the application is concerned, it’s irrelevant. It doesn’t care what Windows version it’s running on, only which .NET version is installed.

You may be accustomed to thinking of Windows and .NET versions as one and the same, or at least assuming they are tied together. But with Windows containers, we are given a way to decouple this tightly bound runtime-to-OS relationship. While the container’s base layer and the underlying Windows OS do have some compatibility requirements, those requirements are no longer about pinning certain versions of .NET to certain versions of Windows. And because IIS and the .NET runtime have been neatly tucked away in a container, the only thing the host has to worry about is offering the containers feature.

In 2016, Microsoft introduced the containers feature on Windows Server, and it has been getting better ever since. Today it’s mature enough that the Kubernetes community, as well as most of the major cloud platforms, has built support for it into their products.

For Kubernetes specifically, each cluster has workers. These are the VMs that are managed by the cluster’s master and run your applications. Within a worker you have pods, each holding at least one container: your application. Kubernetes originally supported Linux workers, and about a year ago it added stable support for Windows. But there’s a catch. You can’t run just any ol’ version of Windows; the minimum version supported is Windows Server 2019.

At this point you’re probably thinking, “David has lost his mind.” Yes, I just said that .NET (and IIS) are very picky when it comes to Windows versions. And it’s a safe bet that your legacy .NET app has never had anything to do with Windows Server 2019. But when we decoupled the operating system from IIS and the .NET runtime, we opened up a whole new world of possibilities. So while the worker must be at least Windows Server 2019, the only feature installed on it is container support: not .NET, and certainly not IIS.

As new containers are spun up on the worker, each one carries an entirely different hosting environment inside it. As we saw in the example above, the base image, mcr.microsoft.com/dotnet/framework/aspnet:3.5, is actually a composition of features that starts with an image of Windows Server Core version 2004; on top of that, a compatible .NET runtime (the netfx3-2004 package) is installed and patched.
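You can peek at that composition yourself. For example, docker history lists the layers that make up the image, and docker image inspect reports the Windows build the base layer targets (both are standard Docker CLI commands; the output will vary by tag):

docker history mcr.microsoft.com/dotnet/framework/aspnet:3.5
docker image inspect --format '{{.OsVersion}}' mcr.microsoft.com/dotnet/framework/aspnet:3.5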

All of this work represents a modern way of running legacy .NET.

This pattern offers a lot, most notably separated responsibilities. With the Windows worker only worrying about containers, patching and upgrading it should be a breeze compared to the old way; there’s no more application compatibility to worry about. The base image each container uses will also need patching, but that’s an entirely different workflow, one that can be managed by the application’s owner, the developer.

Yes, Tanzu can do that

If you’ve ever met a Tanzu .NET expert, no doubt you’ve heard why Tanzu, in particular Tanzu Kubernetes Grid Integrated Edition (TKGI) and Tanzu Application Service (TAS), is a perfect home for your legacy .NET applications. Both heart .NET Framework apps, but they offer distinct experiences that cater to different business needs.

TAS has an opinionated way of containerizing and deploying .NET Framework apps, in the form of a buildpack. A developer simply pushes their compiled and tested application (aka the artifact, or DLLs) to the platform, and TAS does the rest. The trade-off behind this pattern is that you can’t customize the environment much, but that in turn creates a very stable, immutable, long-running cloud service.
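In practice, that push is a one-liner. A sketch, assuming your compiled output sits in ./publish and your foundation has Windows cells (the app name and path are placeholders; hwc_buildpack and the windows stack are the usual choices for .NET Framework apps, but check your platform’s configuration):

cf push my-legacy-app -s windows -b hwc_buildpack -p ./publish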

TKGI, on the other hand, offers a place to customize your .NET application’s environment. But instead of only pushing the compiled bits, the developer writes the Dockerfile and creates the container image. The location of that image is then provided to Kubernetes for deployment.

Version 1.9 of TKGI introduced support for Windows workers, using NSX-T with Windows. If you are not familiar with NSX, it’s worth a look. Just like we use virtualization to create machines (VMs), we use NSX to create networks. Everything’s software-defined, so it’s perfect for running a cloud platform.

From a developer point of view, TKGI is a ready-to-roll Kubernetes cluster. Add your container image to a repository, create a quick kube manifest, and deploy.
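As an illustration, a minimal manifest for our example app might look like the following (the names, image path, and replica count are all placeholders). The nodeSelector is the important part: it steers the pods onto the Windows workers.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-aspnet
spec:
  replicas: 2
  selector:
    matchLabels:
      app: legacy-aspnet
  template:
    metadata:
      labels:
        app: legacy-aspnet
    spec:
      nodeSelector:
        kubernetes.io/os: windows
      containers:
      - name: web
        image: registry.example.com/legacy-aspnet:1.0
        ports:
        - containerPort: 80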

If you don’t want to manage container images, Tanzu Build Service (TBS) is the perfect solution, as it includes a cloud native buildpack for .NET Framework. Just provide your compiled application (all those DLLs) and TBS will take care of building the container image and publishing it to your chosen repository. Remember how I talked about the application owner managing the container image’s base layer? TBS makes that debt vanish, as it can automatically rebase containers whenever the Windows Server Core image gets patched (aka “Patch Tuesday”).
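The developer-facing workflow is similarly short. A rough sketch using TBS’s kp CLI (the image name, registry path, and source path are placeholders; consult the TBS documentation for the exact flags in your version):

kp image create legacy-aspnet --tag registry.example.com/legacy-aspnet --local-path ./publish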

Learn more about TBS.

So many use cases, so little time

What's really powerful about Kubernetes is that it’s such a neutral place to run your applications. Just provide a container image and a little direction, and the rest is done. From a Windows perspective, within that Dockerfile you can do really amazing things. All you need is a good understanding of PowerShell and any Windows APIs that get the job done.

Speaking of use cases, probably the biggest question is around domain joining. If the application’s container has nothing to do with the host it’s running on, how does the application interact with a domain? Search Microsoft’s documentation and you’ll find a section dedicated to Group Managed Service Accounts (GMSA). The spoiler is that you can’t join a Windows container to a domain. All you can do is join the host, that Kubernetes worker, and share an account between it and the container for access.
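In Kubernetes, that shared-account arrangement surfaces as a GMSA credential spec resource that a pod references. A minimal sketch (the credential spec name and image are placeholders, and the cluster and its workers must already be configured for GMSA):

apiVersion: v1
kind: Pod
metadata:
  name: legacy-aspnet-gmsa
spec:
  securityContext:
    windowsOptions:
      gmsaCredentialSpecName: my-gmsa-credspec
  nodeSelector:
    kubernetes.io/os: windows
  containers:
  - name: web
    image: registry.example.com/legacy-aspnet:1.0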

There is also a very good chance legacy ASP.NET is using Integrated Windows Authentication or some other domain-managed service. Back in the day, that was the way to get authentication done. Today it’s a different story, as authentication and authorization are more portable using OAuth2/OpenID Connect.

The community is hard at work finding workarounds for containers that can’t be domain-joined. In the end, it’s Kerberos you need, not necessarily the domain. Andrew Stakov has done some interesting work creating a proxy for Kerberos authentication that greatly reduces an application’s dependence on the domain controller and, in turn, eases its containerization with little to no code change.

When it comes to domain services, the ultimate goal is to replace that dependency with something more cloud-friendly, something you could possibly recreate locally while developing. That said, when your application needs local operating system features like the registry, the global assembly cache (GAC), or a persistent disk, the combination of PowerShell, container definitions, and Windows workers on TKGI offers a great alternative.
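A registry dependency, for instance, can be baked right into the container definition with a couple of standard PowerShell cmdlets (the key path, value name, and value here are made up for illustration):

# Hypothetical: recreate a registry setting the legacy app expects
RUN New-Item -Path 'HKLM:\SOFTWARE\MyLegacyApp' -Force; New-ItemProperty -Path 'HKLM:\SOFTWARE\MyLegacyApp' -Name 'LogPath' -Value 'C:\logs' -PropertyType String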

Get started today

To learn more about TAS and TKGI, visit the Tanzu site today. If you would like help evaluating your portfolio of applications for their cloud readiness, head over to VMware Pivotal Labs. They’ve been doing modernization for a long time and have the tools to make your cloud migration a breeze.

In the meantime, learn more about the products mentioned throughout this post:

Getting Started with VMware Tanzu Build Service 1.0

Tanzu Kubernetes Grid Integrated Edition documentation

 

Thumbnail image courtesy of Boba Jovanovic via Unsplash.