Getting real about virtualisation


By Anthony Caruana
Friday, 17 May, 2013


Virtualisation is the single biggest change to the way we manage our systems in the last decade or so. As a result of virtualisation we have slashed the number of physical devices we have to manage, resulting in smaller data centres, lower power consumption, better business continuity and greater flexibility. But what about everything else? And does virtualising everything demand a new set of skills?

The desktop

In case you missed it, it’s the year of VDI - again. We’ve been hearing this for the last few years but according to Kevin McIsaac, an analyst with IBRS, “There will never be a year of VDI - ever. The reason is it’s based on the assumption that it’s the next replacement for the desktop and it’s not. VDI is an interesting way of deploying a desktop for a very specific set of use cases.”

But scratch the surface and you’ll find that virtualisation has hit the desktop - it’s just not happening the way many expected. As well as the traditional view of using the end-point device as a type of dumb terminal, there are many other ways to use virtualisation for delivery of desktop applications.

Application virtualisation is a very cost-effective way to deliver software to users. In many schools, where old applications often abound, packaging an application and delivering it in its own virtualised container gives some breathing space as old applications can run on newer platforms.

Despite the many advances made in VDI, there’s still some resistance to deploying it, particularly as the delivery of desktops shifts from desktop teams to server and network personnel.

“It’s not only cultural but it’s political. You usually find that there’s a different group of people looking at the servers to those managing desktops. This [VDI] solution is a server-based solution delivering a virtual desktop. Who, in the organisation, will own this?” according to Nabeel Youakim from Citrix.

One of the big issues for St Vincent’s Hospital in Melbourne is the mobility of its staff, and VDI provided a solution. St Vincent’s uses Microsoft Remote Desktop Services in combination with contactless proximity cards, with delivery over a network built on Cisco and F5 hardware.

This initiative, dubbed Quick Connect, allows a staff member to walk up to a computer, tap a sensor with their ID card and, within a couple of seconds, have their session, hosted on a server in the local data centre, open so they can continue working. The system supports several thousand users accessing clinical applications from more than 600 computers on the local network.
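To picture the mechanics, here is a minimal, self-contained sketch of the tap-to-roam pattern. Every name in it is a hypothetical stand-in; the hospital’s actual RDS and card-reader integration details aren’t public.

```python
# A minimal sketch of the "tap to roam" pattern described above. All names are
# hypothetical stand-ins; the real deployment uses Microsoft Remote Desktop
# Services and proximity-card hardware whose integration isn't published.

cards = {"0xA1B2": "dr.smith"}        # proximity card ID -> user identity
sessions: dict[str, str] = {}         # user -> server hosting their live session

def on_card_tap(card_id: str, terminal: str) -> str:
    """Reconnect (or start) a user's hosted session at the tapped terminal."""
    user = cards[card_id]                             # authenticate via the card
    host = sessions.setdefault(user, "rds-host-01")   # reuse a live session if one exists
    # A real session broker would redirect display and input to the new
    # terminal; here we simply report the outcome.
    return f"{user}'s session on {host} is now showing at {terminal}"

print(on_card_tap("0xA1B2", "ward-3-pc"))   # the same session follows the card
```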

One of the big challenges in deploying VDI successfully has been throughput. Regardless of the hypervisor or software solution in place, getting data back and forth between server and client has always been a critical element of any VDI implementation. However, several recent technical advances, and their improving value for money, have made it possible to deploy VDI with better outcomes.

“One of the catalysts has certainly been flash storage becoming more mainstream. That has helped solve a lot of the IO problems that we saw in some of the early architectures,” according to Adrian De Luca from Hitachi Data Systems.

De Luca also pointed to the number of new entrants bringing flash-based solutions to the storage market, making a specific grab for customers looking to resolve the disk IO issues associated with VDI.

“At the end of the day, VDI is just another enterprise application. Why should we create separate silos of compute, network and storage just so you can get some cost benefits? Some of these flash-only players become a one-trick pony. Although they might seem cheap to purchase, the long-term running cost and management will start to see the same issues that drove us to consolidate SANs 10 years ago.”

VDI and BYOD

While it’s critical to ensure that there’s enough computing power back at the server to handle the processing of many hundreds of clients at the same time, it’s just as important not to forget the network. Ultimately, the success or failure of a VDI deployment will be determined by the capacity of the LAN, WAN and cellular links between the end-point device and the servers.

One of the triggers for increasing uptake of VDI has been the rise of BYOD. Desktop virtualisation makes it possible to deploy applications to a wide variety of mobile devices without the need to develop bespoke apps for each mobile platform.

Damien Murphy from Riverbed says: “We’re definitely seeing, although it’s not talked about, how BYOD equals VDI. When you speak to organisations doing BYOD they’re building a layer of VDI.”

The network

There’s a lot of hype today around software defined networking (SDN). SDN looks to be on a steep growth curve, with Gartner suggesting it will be one of the key issues for enterprises to consider.

With SDN the software and intelligence that is required to manage the network is abstracted from the hardware. Alan Perkins of Rackspace says: “It separates the software in terms of the intelligence around where the networks are being routed from the actual switches.” This allows businesses to create more sophisticated topologies to ensure that data routes according to the best business logic rather than via physical switching.
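As a simplified illustration of that separation, the sketch below shows what “business logic rather than physical switching” can look like from the operator’s side: a routing policy is expressed once against a controller’s API, and the controller programs the switches. The controller URL and JSON schema are hypothetical, not any particular vendor’s northbound interface.

```python
# Hedged sketch of the SDN model: routing intent lives in software and is sent
# to a central controller, which programs the switches. The endpoint and JSON
# schema below are hypothetical, not a specific vendor's API.
import json
import urllib.request

CONTROLLER = "http://sdn-controller.example.com/flows"   # hypothetical endpoint

def route_by_policy(src_net: str, dst_net: str, path: list[str]) -> None:
    """Ask the controller to steer matching traffic over a chosen path."""
    rule = {"match": {"src": src_net, "dst": dst_net}, "path": path}
    req = urllib.request.Request(
        CONTROLLER,
        data=json.dumps(rule).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)   # one API call, rather than a per-switch login

# Example: send backup traffic over the cheaper secondary WAN link.
route_by_policy("10.1.0.0/16", "10.9.0.0/16", ["edge-1", "wan-b", "dc-2"])
```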

We’re already virtualising significant parts of our corporate networks with virtual switches connecting virtual servers both inside and across physical hosts. But what are the benefits?

The case for server virtualisation was easy to make, according to Rhys Evans from Thomas Duryea Consulting: “Server virtualisation was a slam dunk. Let’s take your 100 physical servers and turn them into six. We can show a cost reduction in terms of power, cooling, hardware, maintenance contracts - you show someone the numbers and it financially makes sense.”

With SDN, the benefits may be less clear-cut, although Dustin Kehoe, an Associate Research Director with IDC, says there are clear benefits.

“I’m seeing SDN actually for business continuity and disaster recovery. The second thing about SDNs is also automation. If you go back to this thing we call cloud, and we’re talking about virtualising server, compute and storage, one thing we’ve failed at to date, I would argue, is automation. We’re not really automated, because the network isn’t automated. Let’s face it, the network requires lots of manual processes.”

The automation benefits are certainly possible although SDNs aren’t yet widely seen in enterprise networks. McIsaac says: “If you’re a Telstra or an Amazon or a Google then it’s probably very important. But if you’re a typical enterprise in Australia or an SMB, eventually it will trickle down but I don’t see it as being hugely important now.”

But if you’re a service provider, the benefits may be another slam dunk, according to Kash Shaikh, Senior Director for Product & Technical Marketing at HP. “In a public cloud environment, a mid-sized public cloud provider has about 10,000 provisions per day. And, let’s say, if each provision takes about 20 commands, that’s about 200,000 commands per day. Even if you have a really good IT admin - a guy who really knows which command to enter when and can punch in the commands really fast - it takes up to one minute to enter each of these commands. How many hours does that translate into? 3333 hours. That’s about 420 admins.”
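Shaikh’s arithmetic holds up, assuming one minute per command and an eight-hour shift per admin:

```python
# Back-of-the-envelope check of Shaikh's figures (assumes 8-hour admin shifts).
provisions_per_day = 10_000
commands_per_provision = 20
minutes_per_command = 1                                 # "up to one minute" each

commands = provisions_per_day * commands_per_provision  # 200,000 commands/day
hours = commands * minutes_per_command / 60             # ~3333 hours/day
admins = hours / 8                                      # ~417, i.e. "about 420"
print(commands, round(hours), round(admins))            # 200000 3333 417
```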

The data centre

“Providing the data centre as a logical unit that can be defined in software - there’s a further separation of the logical and physical that unlocks a lot of opportunities,” says Charles Clarke from Veeam.

At some point, we will still get down to physical assets that need to be managed and maintained. While it’s all well and good to talk about virtualised servers, networks and data centres, it’s not possible to virtualise everything. However, what we’re seeing is a continued, sustained push to separate the physical and logical elements of our infrastructure.

This offers many opportunities for greater flexibility and redundancy, although it does require a shift in how we think about systems. Whereas we used to look at physical devices and consider the intelligence built into that equipment, we’re now moving towards software emulating that intelligence on commodity hardware.

“What the software defined data centre and software defined network really represents is a different management paradigm,” he adds.

When should you say no?

Are there times when virtualisation isn’t the right option? In the vast majority of cases the relatively minor performance hit that might be experienced when placing a hypervisor between an application and the hardware is greatly outweighed by the benefits of redundancy, flexibility and cost management. However, there may be legacy applications that expect to operate with direct access to hardware and won’t work when there’s a hypervisor involved.

Aaron Steppat from VMware says: “It’s better to focus on the modernisation of an application that can run on a commoditised platform and get all the benefits of virtualisation versus trying to hold it back and have it on an environment where ultimately the performance isn’t guaranteed.”

McIsaac adds: “People are still reluctant to deploy mission-critical, large-scale databases on virtual machines. There is no reason today from a hypervisor scalability or availability point of view not to do that. You might not for other reasons, but it’s not because of the capability of the infrastructure.”

People and management

Management of virtualised environments presents IT managers with some new challenges. Now that servers can be spun up and deployed quickly, the skills required are no longer centred on the physical deployment of hardware; the focus shifts to application management.

“It becomes no different than putting another application on your PC. As long as someone knows how to use that virtualised environment, you can then move ahead,” says Timothy Gentry from Avaya.

This leads to a shift in the skills we might need in our IT departments. While there will continue to be a need for some specialist engineers for the network, servers, storage and other critical hardware, we will increasingly need to consider application engineers.

Gentry adds: “You don’t need to bifurcate between a telecom person and a networking person. Why can’t it all be one person because it rides on one application layer?”

When you hire a unified communications specialist, they won’t be a PBX engineer. They will be a communications specialist who understands the interplay between the network, the unified communications software and the virtualisation platform.

Now that we can spin up a new server at a moment’s notice, there’s a new challenge. In the past, adding a server to the business was a non-trivial decision; now we’re seeing environments where the number of servers greatly exceeds the number of staff.

“Now we’re seeing this sprawl. We’re seeing companies with 150 staff and three or four hundred servers. But they don’t need them. They just find it too easy to do it. That’s where we’re seeing the complexity,” says Evans.

