You can't win the AI game without a playmaker captain

Nutanix

By Michael Alp, Managing Director, Nutanix A/NZ
Tuesday, 17 December, 2024



Cricket had Border, rugby had Gregan, and AI has Kubernetes.

AI has taken the world by storm, causing organisations to reimagine how this technology could be used to improve business operations and fuel innovation. Gartner suggests 80% of enterprises will have adopted AI by 2026, while IDC believes generative AI (GenAI) will spur on a market transition to “AI Everywhere”, which will be the defining factor of the next frontier of digital business.

It’s clear that AI is a game worth playing and the possibilities are endless. But those possibilities also bring challenges that can seem insurmountable.

The Nutanix Enterprise Cloud Index found that although 90% of organisations across APAC cite AI as a priority, a third believe their current IT infrastructure is unsuited to running AI applications.

Directing the complex flow of AI applications is no mean feat. These systems require powerful, flexible infrastructure that can support complex computational needs and large data sets. This is where Kubernetes and modern applications become indispensable: AI is transforming the world as we know it, and Kubernetes provides the fundamental building blocks.

Kubernetes brings control to the chaos — which is fitting, given the Greek origin of the word, meaning ‘helmsman’ or ‘pilot’.

If Australia’s historic 2023 Rugby World Cup failure taught us anything, it’s the importance of a cohesive team, captained by a world-class playmaker to guide the team’s play and orchestrate moves with precision. Without it, well…the scoreboard tells the rest of the tale.

Kubernetes gets called up

Kubernetes helps automate the development, deployment, scaling and management of AI workloads: it scales AI applications seamlessly, allocates resources efficiently and keeps them dependable. This allows AI models to run consistently across platforms, from local servers to the cloud and across multiple clouds, whether private or public. It also handles failure recovery, ensuring AI processes remain resilient and uninterrupted, which makes it an essential tool for running AI applications at scale.

What does that mean? Kubernetes reads the situation, distributes tasks seamlessly, and ensures the team of data, models, and compute power moves as one, executing the right plays at the right time — regardless of location, for maximum impact. It’s the driving force that turns a collection of individual efforts into a unified, winning strategy.
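As a rough sketch of what that orchestration looks like in practice, a Kubernetes Deployment manifest declares how many copies of a containerised model server should run and what resources each copy needs; Kubernetes then maintains that state, rescheduling replicas that fail. The names, image and resource figures below are illustrative assumptions, not a reference configuration.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inference-service        # hypothetical workload name
spec:
  replicas: 3                    # Kubernetes keeps three copies running, replacing any that fail
  selector:
    matchLabels:
      app: inference-service
  template:
    metadata:
      labels:
        app: inference-service
    spec:
      containers:
        - name: model-server
          image: registry.example.com/model-server:1.0   # hypothetical container image
          resources:
            requests:
              cpu: "2"           # scheduler places the pod only where these resources exist
              memory: 4Gi
            limits:
              nvidia.com/gpu: 1  # illustrative: reserve one GPU for the model
```

Applied with `kubectl apply -f`, the same manifest works unchanged on a private cluster, a public cloud or the edge, which is what makes the portability described above possible.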

Unifying the team

Today, most organisations run a mix of modern and traditional applications, and operational silos can form as a result. Bridging this gap requires intelligent infrastructure that is compatible with both VMs (the traditional application deployment model) and Kubernetes (the new gold standard for modern application deployment).

Additionally, AI workloads involve a mix of diverse components that require consistency and isolation to run effectively across different environments. As such, most AI applications are containerised — a process that packages an application’s code together with all the files it needs to run on any infrastructure — to accelerate deployment, eliminate testing dependencies, and enable scalability, portability and repeatability across distributed edge environments.
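To illustrate, containerising a model-serving application can be as simple as a short Dockerfile that bundles the code with its pinned dependencies. The base image, `requirements.txt` and `serve.py` below are hypothetical examples of such an application, not a prescribed layout.

```dockerfile
# Hypothetical containerised model server: the code plus everything it needs to run
FROM python:3.11-slim

WORKDIR /app

# Pin dependencies so the container behaves identically on any infrastructure
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Application code and model artefacts travel inside the image
COPY . .

EXPOSE 8080
CMD ["python", "serve.py"]
```

Because the resulting image carries its own runtime and libraries, the same artefact can be tested once and then deployed across data centre, cloud and edge environments.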

In turn, Kubernetes provides the orchestration needed to manage these containerised workloads, automatically distributing resources and optimising performance.

Gartner predicts that by 2027, more than 90% of organisations will be running containerised applications in production. The use of AI-powered apps is exploding, and that family of apps — being containerised — is trending upwards as well. As a result, organisations are seeking to deploy AI applications at scale across data centres, edge environments, and in public and private clouds, while maintaining the flexibility to move between these environments throughout their lifecycle.

Training AI models is a resource-intensive process, requiring massive compute power and memory. Kubernetes helps solve this by allocating resources based on the needs of specific workloads, ensuring AI applications can scale up during intensive training sessions and scale down during periods of low demand. Not only does this manage resource use more effectively, but it also helps reduce costs.
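As a sketch of how that elasticity can be expressed, a Kubernetes HorizontalPodAutoscaler tells the cluster to add replicas of a workload when demand rises and remove them when it falls. The Deployment name, replica bounds and CPU threshold below are illustrative assumptions.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: inference-service-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: inference-service      # hypothetical Deployment to scale
  minReplicas: 1                 # scale down in quiet periods to save cost
  maxReplicas: 10                # scale up under heavy demand
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70 # add replicas when average CPU use passes 70%
```

GPU-backed training jobs are often scaled on custom or external metrics rather than CPU, but the principle is the same: resources follow the workload's actual needs.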

Captain’s call

While the possibilities for AI are truly endless, many organisations are stuck wondering how to get started. The seemingly complex journey of developing, deploying and managing modern applications needs to be simplified so that organisations can keep driving innovation and succeed.

Just as a captain on the rugby field keeps the team cohesive, in control and playing their positions, Kubernetes and containers promise to bring cohesion to the otherwise complex world of modern apps. Behind every history-making captain is a coach who brings the requisite pieces together. As modern workloads move across the enterprise, from on-premises to cloud and the edge, an intelligent hybrid multicloud infrastructure gives the captain everything they need to bring that promise to life.




  • All content Copyright © 2024 Westwick-Farrow Pty Ltd