This post originally appeared as part of the April 18 Intersect newsletter.
Let’s hear it for the mainframe! IBM’s Big Iron turned 55 years old last week, on April 7. A lot has certainly changed in the intervening years. While today’s enterprises regularly deal in terabytes or even petabytes of data, when the IBM S/360 first entered the market in 1964, you would have needed a one-ton pickup truck just to haul a humble 128KB of magnetic-core memory.
Yet despite the platform’s advancing age, it’s hardly ready for retirement. Many organizations have no plans to mothball their mainframes, now or in the foreseeable future. What’s more, while most of us see the cloud as the way forward, some customers have actually begun moving workloads in the opposite direction, out of the cloud and back into traditional data centers—a phenomenon known as “cloud repatriation.”
But what about application modernization? Were these seemingly retrograde outfits misguided in their plans for digital transformation, doomed from the start? Not at all. These organizations are simply recognizing the current reality: it’s a hybrid cloud, multi-cloud world.
The truth is, digital transformation can never be one-size-fits-all. According to the latest edition of the AFCOM State of the Data Center survey, 64% of respondents said they were planning to implement a public cloud strategy in the next 12 months. But fully 73% said they would invest in a private cloud, with 59% explicitly mentioning a hybrid cloud strategy.
Clearly, then, today’s IT mantra isn’t about resting on our laurels and continuing to rely on outdated, legacy applications. It’s about modernization, but it’s also about choosing the right tools—and the right battles.
Credit card and finance company Discover has some insights to share about that. With Pivotal’s help, in 2017 Discover embarked on a plan to modernize its back-office systems, which had an average age of 10 years. Among the most important lessons it learned along the way was that flexibility and adaptability are essential. Modernization solutions, once implemented, often need to be reworked before they fully match requirements. And in a few cases, Discover found the sheer amount of long-buried technical debt meant dragging its legacy data systems, kicking and screaming, into the modern world simply wasn’t worth it.
One best practice before embarking on a modernization effort is to survey the legacy applications that could use updating. For each, ask: How much risk is there in touching this application? And how much reward will there be if we go through the effort?
For example, a back-office application that’s running countless workloads around the clock could certainly benefit from the performance, storage, and networking improvements that come with a modern architecture. But if it’s so mission-critical that it can’t tolerate any downtime, performance dip, or unforeseen security vulnerability that arises from the upgrade, then it might be better to put that effort off until the requirements are better understood—or even skip it altogether.
By the same token, upgrading an application that looks like low-hanging fruit, with an easy path to modernization but few actual users, might also be wasted effort.
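To make that triage concrete, here’s a minimal Python sketch of the idea. Everything in it is hypothetical: the 1-to-5 risk and reward scales, the `min_users` threshold, and the application names are invented for illustration, and a real assessment would weigh far more factors than these three.

```python
from dataclasses import dataclass


@dataclass
class LegacyApp:
    name: str
    risk: int          # 1 (safe to touch) .. 5 (mission-critical, no downtime tolerated)
    reward: int        # 1 (little benefit) .. 5 (big performance/storage/network gains)
    weekly_users: int  # rough measure of how much the app is actually used


def triage(portfolio: list[LegacyApp], min_users: int = 50) -> list[LegacyApp]:
    """Return modernization candidates, highest reward-to-risk ratio first.

    Apps that are seldom used, or whose risk outweighs their reward,
    are deferred rather than modernized.
    """
    candidates = [
        app for app in portfolio
        if app.weekly_users >= min_users and app.reward > app.risk
    ]
    return sorted(candidates, key=lambda app: app.reward / app.risk, reverse=True)


if __name__ == "__main__":
    portfolio = [
        LegacyApp("batch-settlement", risk=5, reward=4, weekly_users=900),   # too risky: defer
        LegacyApp("report-generator", risk=2, reward=4, weekly_users=300),   # strong candidate
        LegacyApp("intranet-phonebook", risk=1, reward=3, weekly_users=12),  # easy, but seldom used
    ]
    for app in triage(portfolio):
        print(f"Modernize: {app.name}")
```

Run as-is, only report-generator survives the triage: the settlement system is too mission-critical to touch safely, and the phonebook, however easy an upgrade, is too seldom used to justify the effort.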
As for cloud repatriation, there are several reasons why moving applications from the cloud back in-house or to a colocation data center might make sense, as this article from eWeek explains.
Of course, none of this is to say that application modernization is optional. Legacy applications can expose an organization to serious risks, even when they appear to be doing nothing but gathering dust. The key takeaway is that modernization requires adaptive planning and is best approached in stages. Along the way, you may find that even as you strive for the IT of tomorrow, there may yet be value in the systems and processes of yesteryear.