Is Kubernetes the key to the cloud's future?
Kubernetes is great news for IT administrators and will soon be seen as a viable ‘virtual alternative’ to legacy systems.
It’s been a busy year. The pace of technological change has accelerated significantly, putting added pressure on enterprises and their IT departments. However, the upside of new technology is access to new capabilities that can transform a company’s competitive edge. One of them is Kubernetes.
But before we get into that, let’s look at a little IT history.
The past two decades have seen the distributed server paradigm evolve into web-based architectures, which matured into service-oriented architectures before finally moving into the cloud.
The cloud revolution has been fuelled by virtualisation, and its widespread adoption has transformed the modern data centre. But it didn’t stop there. In fact, its unchecked proliferation has created some of the same challenges that cloud was designed to resolve… such as sprawling and expensive-to-maintain server farms.
The machines may be virtual, but managing them can still be a chore. So enterprises are looking for more agile and cost-effective ways to build, deploy and manage applications. And that’s driving interest in another fairly new and exciting idea: containers.
Containers get more out of their IT
Containers do the same basic job as virtual machines, giving applications an isolated environment to run in. The big difference is that a container runs an application in a fraction of the compute footprint a virtual machine requires, because containers share the host’s operating system kernel rather than each running a complete operating system image with all of its attendant drivers and libraries.
What’s more, as well as taking up far less space, extra containers can be spun up in seconds or less, compared to minutes or even longer for virtual machines.
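To put that spin-up claim in concrete terms, here is a minimal sketch, assuming a local Docker Engine and the docker Python SDK are installed; the Alpine image and the echoed message are purely illustrative choices:

```python
# Rough timing sketch: assumes a local Docker Engine and the `docker`
# Python SDK (pip install docker); the Alpine image is an arbitrary choice.
import time

import docker

client = docker.from_env()    # connect to the local Docker daemon
client.images.pull("alpine")  # pull the image up front so the timing below
                              # measures container start-up, not the download

start = time.perf_counter()
container = client.containers.run(
    "alpine", ["echo", "hello from a container"], detach=True
)
print(f"Extra container running after {time.perf_counter() - start:.3f}s")

container.wait()    # let it finish, then clean up
container.remove()
```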
It really works. Google, for example, starts more than two billion containers every week to run its services. Many of its most popular offerings, such as Gmail, Search, Apps and Maps, run inside containers managed by Google’s internal cluster-management system, and that experience led Google to release Kubernetes, an open source container cluster orchestration framework, in 2014.
Kubernetes works in conjunction with Docker, one of the providers that has done most to make containers popular in the cloud world. Docker builds and runs individual containers and manages their lifecycle; Kubernetes takes this to the next level, orchestrating those containers and managing them as clusters across many machines.
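To make that division of labour concrete, here is a minimal sketch using the official Kubernetes Python client; the deployment name, image and replica count are purely illustrative. Docker supplies and runs the container image, while Kubernetes is asked to keep three copies of it running and will restart or reschedule them if anything fails:

```python
# Minimal sketch with the official Kubernetes Python client
# (pip install kubernetes); names, image and replica count are illustrative.
from kubernetes import client, config

config.load_kube_config()  # use the credentials in ~/.kube/config
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # Kubernetes keeps three copies running at all times
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[
                    # The Docker-built image that each container runs
                    client.V1Container(name="web", image="nginx:1.25")
                ]
            ),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)
```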
HDS is part of the container crowd
Google isn’t going it alone. Other companies have announced support for Kubernetes in combination with converged infrastructure solutions. This is great news for customers because it provides a proven, enterprise-class private cloud infrastructure on which developers can orchestrate and run container-based applications built on a microservices architecture.
Running Kubernetes and VMware side by side on converged platforms gives companies an enterprise solution for both container-based applications and traditional virtualised workloads.
One of the biggest benefits of having Kubernetes orchestrate containers is that it can allocate resources across a host or cluster dynamically, with fault tolerance that keeps workloads running reliably. Kubernetes lets administrators define resource requests and attach labels to nodes, giving users control over where particular workloads are allowed to run.
Labelling also makes it possible to run pods on different tiers or configurations of hardware. For example, a set of nodes can be labelled as the production tier, and Kubernetes will only schedule pods and services that select that label onto those nodes. This lets Kubernetes place workloads on specific blades according to their labels, so resources are used in line with the end user’s requirements.
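As a rough sketch of how that looks in practice, the snippet below uses the official Kubernetes Python client to label a node as part of a production tier and then creates a pod whose node selector matches that label; the node name, pod name and image are hypothetical:

```python
# Sketch of tiered scheduling with node labels (pip install kubernetes);
# the node name, pod name and image are hypothetical.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Mark one node as belonging to the production hardware tier.
core.patch_node("node-01", {"metadata": {"labels": {"tier": "production"}}})

# This pod's node selector means it can only land on matching nodes.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="orders-api"),
    spec=client.V1PodSpec(
        node_selector={"tier": "production"},
        containers=[client.V1Container(name="orders-api", image="nginx:1.25")],
    ),
)
core.create_namespaced_pod(namespace="default", body=pod)
```

Only nodes carrying the tier=production label are eligible to run that pod, which is how workloads can be steered onto particular blades or hardware configurations.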
A compelling combination
The combination of a converged infrastructure solution and Kubernetes container orchestration offers customers several benefits: simplified management of physical and virtual infrastructure, automated orchestration, the ability to scale as workload needs change, and the flexibility to deploy Kubernetes container clusters into new environments.
Hitachi Data Systems’ (HDS) Unified Compute Platform (UCP), for instance, can quickly scale from 2 to 128 nodes, giving Kubernetes rapidly growing capacity on which to schedule and manage containerised workloads. Kubernetes will orchestrate the deployment, scaling and monitoring of those containerised services, all while running side by side on the same platform with virtualised and bare metal workloads.
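As a simple illustration, and assuming a “web” deployment like the one sketched earlier, scaling it out as more nodes come online is a one-line call with the Kubernetes Python client; the scheduler then spreads the additional pods across whatever capacity is available:

```python
# Scale the illustrative "web" deployment out to 50 replicas; the Kubernetes
# scheduler spreads the new pods across the nodes currently in the cluster.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

apps.patch_namespaced_deployment_scale(
    name="web",
    namespace="default",
    body={"spec": {"replicas": 50}},
)
```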
Kubernetes is great news for both the developer community and IT administrators looking for accelerated application deployment. And things are only going to get better. HDS, for example, is already considering advancements and new features for this solution, including hybrid configurations with GKE and AWS cloud services, streamlined and fully automated Kubernetes cluster deployments within UCP, and an integrated container registry.
It seems like once every five years or so, the IT industry witnesses a major technology shift. With more applications contending for I/O resources, I am betting that in the next year or so, converged solutions combined with Kubernetes will be seen as a viable ‘virtual alternative’ to legacy systems that were not developed with containers in mind.