Learn how to add AI to your platform at Explore 2024

Everyone is experimenting with enterprise AI. I hope they are, at least: that’s the only way we’ll learn what works and what doesn’t. Part of what we all need to learn is the roles and responsibilities for AI in large organizations. My theory is that the benefits of generative AI will mostly come from modifying existing applications. There are thousands and thousands of those in the world, helping run our everyday lives. Maybe even millions, once you count all the UI-less services those apps rely on. Improvements in the apps we use every day will have a massive effect.

Each time a new technology is introduced into enterprises, we follow the same cycle. Small, eager teams start experimenting and using it on their own. Many fizzle out or just fail to deliver on razzle-dazzle expectations. But a few of them end up actually improving the business: usually by improving the customer experience in ways that drive more sales, by saving lots of employee time in internal operations, and sometimes just by doing something weird-but-cool. The organization then wants to spread all this tech goodness to other groups. The assumption here is that new innovations in application development are like salt: you can add it to near about anything and it’ll improve it. Even watermelon!

You can see where this is going, right? New application functionality is not like salt, especially in large organizations. “Enterprises” need more than one-shot AI applications. They have a long-term view on gardening all those applications. Two, three…five years from now, successful applications will still be running and need to be maintained. You’ll want to manage costs. You’ll also, hopefully, want to update them. And when we’re talking about applications, making sure you have an application platform in place is one of the best ways to achieve all of that. 

When it comes to platforms, what’s new with AI is thinking about who “owns” AI. Now there’s a loaded term: “own.” So many promising IT initiatives have been killed by high-stakes playground squabbles over who “owns” the initiative. 

I had a fun talk with one of my co-workers, Adib Saikali, Software Engineer at Broadcom, recently. Along with many other Tanzu-ites, he’s been working on how we integrate AI into the Tanzu Platform. One of the things I liked most was their thinking on how this question of ownership is handled. The theory is that there isn’t a single group that owns AI; instead, different aspects of AI are spread up and down the stack. The layers, from bottom to top, are: infrastructure, AI experts, platform engineers, and finally application developers.

Infrastructure

Software has to run on something, be stored on something, and connect to other software over something. That’s your IaaS layer. With AI, you might modify this a bit: adding in GPUs if you’re doing actual training, getting faster storage and networking. But the practices your infrastructure people follow to manage your AI stack are mostly the same: understand the infrastructure needed, spec it out for those needs and costs, build it, keep it running within agreed-upon parameters, and evolve the infrastructure as needed.

AI Experts

It’s at the next layer that we encounter the new role, the AI nerds. Well, that’s my term for it. Perhaps I should say “AI experts.” I’m guessing there’ll be a more sophisticated title for the people who understand LLMs, can sort through and evaluate them, and then tune and configure them to work with what the business needs. If I understand it right, there’s an interesting product management function to this role as well: the AI experts need to understand what the business needs, understand what the different models will do, understand how they could train and tune models, and then put together an internal marketplace of models.

What this AI expert layer is doing is something like the “infrastructure” of generative AI. As with the infrastructure people, there are some interesting roles and responsibilities here. Your application developers are not the ones selecting models, let alone training and tuning them. Also, the AI experts aren’t dedicated to the ongoing use of any given model all the way up the stack. Instead, they move on to gardening other models here and there. This is the newest of the stack layers, so we’ll see how it pans out.

With AI being so new, the AI Experts probably spend a lot of time on the FUD-side of AI. What if it hallucinates? Can we leak our data? How do we control costs if we end up chewing through eighty bazillion tokens just to find out that people prefer fluffy bathrobes? You’ll probably even see AI “Centers of Excellence” that seek to establish and spread enterprise best practices. That sounds groovy: my only recommendation is to never call that a “Center of Excellence.” It’s the equivalent of claiming you’re selling the world’s best pizza. Maybe you are – congratulations! – but now, no one will believe you or even notice you said it. Maybe just call it “The AI Lounge” and do ask the AI to do some ROI xls’s for comfy chairs.

What’s interesting to ponder here is who runs the AI models and services. I suspect that’s spread across the infrastructure and AI Expert roles, at least for now. I have the feeling most AI Experts are not production operations experts, nor do they want to be. I think the next layer will have an increasingly important role in running enterprise AI stacks, at least the developer-facing part.

Platform Engineers

Then, up from there, you have the platform engineers. Platform engineering is the layer we’ve been consumed with over the past few years. They’re the ones who build and manage the PaaS. Oh, pardon, “platform.” Their job is to apply product management to the ongoing building of the application platform to make sure the platform is useful for developers. At the same time, they have other stakeholders: first, themselves, the people running the platform; second, the security and compliance people who want to – should want to – use the platform to make doing the right thing the easy thing for developers. You can learn more about this layer in a recent discussion with Charles Schwab platform engineers.

When it comes to AI, the platform engineers are treating AI like any other service. First, they’re making AI services part of the platform, with self-service access, automation, and controls. Second, they’re incrementally improving how developers use those services by applying product management and design practices. You know: the platform engineers are watching developers use the AI services, seeing what works and doesn’t work, and trying to make the experience better with small changes, repeating the feedback loop over and over.
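To make that concrete, here’s a minimal sketch of what self-service access can look like from the consuming side, assuming a Cloud Foundry-style platform that injects the credentials of a bound AI service through the VCAP_SERVICES environment variable. The “genai” offering name and the credential fields are hypothetical; your platform’s actual offering will differ.

```java
// Minimal sketch: reading credentials for a platform-brokered AI service.
// Assumes a Cloud Foundry-style platform that sets VCAP_SERVICES at bind
// time. The "genai" offering name and credential fields are hypothetical.
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class AiServiceCredentials {
    public static void main(String[] args) throws Exception {
        // VCAP_SERVICES is JSON keyed by service offering, each holding an
        // array of bindings that carry a "credentials" object.
        String vcap = System.getenv("VCAP_SERVICES");
        JsonNode binding = new ObjectMapper().readTree(vcap)
                .path("genai")        // hypothetical offering name
                .path(0)              // first bound instance
                .path("credentials");
        String apiBase = binding.path("api_base").asText();
        String apiKey = binding.path("api_key").asText();
        System.out.println("Chat model endpoint: " + apiBase);
    }
}
```

The point isn’t the JSON parsing; it’s that the developer asks the platform for “a chat model” and gets back an endpoint and a key, while the platform keeps the controls and the audit trail.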

Finally, all of this would just be blinking cursors and YAML files without actual applications. So, you need the application developers.

The Developers

The developers are in charge of writing code, sure, but also of figuring out what the applications should do in the first place. In more mature organizations, ones with product management and design functions that work closely with The Business, the “developers” also spend a lot of time figuring out what the problems to solve even are. They study the people who use the software (customers, employees, partners, etc.) and try to incrementally improve the applications.

When it comes to AI, the developers are responsible for figuring out how to bring each model’s capabilities into their applications. They likely give feedback to the AI Experts about what works and doesn’t work. You’d hope, at least! As pointed out above, the important part about roles and responsibilities is that developers don’t spend time on model selection and maintenance, nor on running the AI services. That turned out to be a big waste of time when application developers worked at the infrastructure and platform layers, so I’m hoping we don’t repeat it with the AI layer.
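Here’s a minimal sketch of what that division of labor can look like in code, assuming a Spring Boot application with Spring AI on the classpath and a chat model already bound by the platform. The route and prompts are made up for illustration; the point is that the developer writes application logic, not model plumbing.

```java
// Minimal sketch: an application endpoint that uses whichever chat model
// the platform has bound. Built with Spring AI's ChatClient; the route
// and prompts are hypothetical.
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
class TicketSummaryController {
    private final ChatClient chatClient;

    TicketSummaryController(ChatClient.Builder builder) {
        // The builder is auto-configured against the platform-provided
        // model; the developer never selects, tunes, or runs it.
        this.chatClient = builder.build();
    }

    @GetMapping("/summarize")
    String summarize(@RequestParam String ticket) {
        return chatClient.prompt()
                .system("Summarize this customer ticket in two sentences.")
                .user(ticket)
                .call()
                .content();
    }
}
```

If the AI Experts swap in a different model behind the service, code like this shouldn’t have to change, which is exactly the separation of responsibilities described above.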

Learning from those who’re doing it

There are more details to all of this, of course. And the most important thing is to experiment with this model and fine-tune it. Many people are interested in AI and many people are figuring out what to do with it. But, as a recent Battery Ventures survey found, there aren’t that many AI applications in production. In fact, far fewer than people thought there would be:

The AI wave is still building, but the future has been slower than anticipated. Today only 5.5% of identified AI use cases are in production, a sobering reality check on respondents’ Q1’24 projection that 52% of identified use cases would be in production over the next 24 months.

This means any story you can find about organizations using AI is valuable, and you should seek it out.

I’m moderating a panel at this year’s Explore in Barcelona on exactly these two AI topics. I’ll be joined by the aforementioned Adib to discuss the stack and to get insights from the conversations he’s been having with large organizations figuring out enterprise AI. The other panelist is Benni Miano, the product owner for the Tanzu Platform for Cloud Foundry at Mercedes-Benz. They’ve been thinking through all of this as well, and he’ll bring the platform engineering perspective in. 

I’m looking forward to it, and I hope you can join us in the room. It’ll be a rare chance to ask your questions, and I’m pretty sure we can wrangle up some hallway discussions as well if you want to go deeper.

Explore Barcelona is November 4th to 7th. Check out the AI panel here. You’ll also want to check out the deep dive into the AI stack at this session. And peruse the entire catalog to see all the other AI, platform, cloud native, and infrastructure sessions. Then, if you can make it, register here.