Preparing your data centre for desktop virtualisation
Tuesday, 10 January, 2012
Server virtualisation is now normal practice in Australia, and desktop virtualisation is becoming increasingly commonplace. Once you’ve determined that it’s the right approach for your organisation, how do you decide what’s needed to support virtual desktops from your data centre? Stephen Withers explains.
Until recently, organisations approached desktop virtualisation in much the same way as they had server virtualisation: that is, by acquiring and allocating server hardware and SAN storage. That has changed with the realisation that desktop virtualisation is different - demand is spikier, and it does not need high-performance storage to the same extent that virtualised servers do.
Last year, VMware released a reference architecture for virtual desktop infrastructure that puts some of the storage back onto the server. User data remains on shared storage, but the rest - especially the operating system and applications - is kept on each server.
“That makes a big difference to the price point and performance,” explained David Wakeman, Solutions Marketing Manager, APJ, VMware.
Reference architectures are convenient: as Leon Booth, Senior Sales Strategy Manager for Desktop Virtualisation, Microsoft Australia, pointed out, they indicate the server and storage configurations you are likely to need to run a given number of virtual desktops. But he warned that virtual desktop workloads are much less predictable than server workloads, so it is important to understand what your users do and when they do it. Understanding the pattern of use in terms of CPU, memory and I/O consumption is essential.
David Rajkovic, ANZ Director of System Engineering at Citrix, emphasised the importance of adopting an appropriate architecture for desktop virtualisation. He advocated separating the applications from the desktop, and then running each application in the most efficient environment. For example, some may be better executed under XenApp (Citrix’s application virtualisation product) and others under XenDesktop (the company’s virtual desktop software). He also suggested there will be increased options for sourcing applications from outside an organisation’s own data centre.
Some applications - notably CAD and other high-end graphics programs - may not be suited to running in a shared hardware environment at all, Rajkovic said. Instead, those users should be allocated a dedicated blade in the data centre for the duration of each session.
Dave Robbins, Network Technology Specialist at Bridge Point Communications, said predefined infrastructure sets such as FlexPod (from NetApp and Cisco) and Vblock (from VCE, the VMware, Cisco and EMC coalition) are becoming more common, as they simplify both planning - by making it easier to work out what's needed for a particular project - and management.
FlexPod is more flexible and a better fit for the Australian market, he suggested.
“It scales down very well” to suit Australian organisations, and can be “stretched” in a particular direction (eg, with additional storage) to suit specific needs. “It’s really quite effective in reducing project risk,” he said.
Wakeman noted the emergence of complete VDI appliances from companies such as Nutanix, which further simplify selection and installation.
Sizing
But how do you determine the hardware requirements for your workload? Phil Goldie, Director of Microsoft Australia's server and tools business group, said Microsoft's System Center can help provide the historical data needed for such an exercise, and Booth suggested tools such as Login VSI are useful for simulating workloads for benchmark testing.
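As a rough illustration of the arithmetic involved, the sketch below estimates how many hosts a given desktop population might need. Every per-desktop and per-host figure in it is an assumption standing in for the peak values you would gather from your own monitoring or from a Login VSI benchmark run; it is a starting point for discussion, not a sizing tool.

```python
# Back-of-the-envelope VDI host sizing - purely illustrative.
# Substitute peak values observed in your own environment.
import math

desktops = 500                                             # planned concurrent desktops
per_desktop = {"vcpu": 0.25, "ram_gb": 2.0, "iops": 12}    # assumed peak figures

host = {"cores": 16, "ram_gb": 96, "iops": 4000}           # assumed host specification

# Each resource gives its own host count; the largest one wins.
hosts_needed = max(
    math.ceil(desktops * per_desktop["vcpu"] / host["cores"]),
    math.ceil(desktops * per_desktop["ram_gb"] / host["ram_gb"]),
    math.ceil(desktops * per_desktop["iops"] / host["iops"]),
)
print(f"Hosts needed (before N+1 failover headroom): {hosts_needed}")
```

With these assumed figures, memory - not CPU or I/O - dictates the host count, a pattern that recurs later in this article.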
But you can’t assume that a VDI exercise will simply move existing workloads from desktop PCs into the data centre. Goldie noted that it is likely to change patterns of use by allowing previously ‘desktop only’ applications to be used on mobile devices. As Adrian De Luca, ANZ Chief Technology Officer at Hitachi Data Systems, pointed out, it’s usually too costly or difficult to write a mobile app for each type of device in use, so virtualisation can be the way to go. However, the shift to SaaS for a variety of enterprise applications including CRM, payroll and people management means tablet accessibility may come “for free”.
When planning a desktop virtualisation project, it is important to inventory the applications in use today and those expected in the future, said Robbins. Custom-developed applications tend to be resource-hungry and therefore poorly suited to VDI, he said, citing a healthcare application that worked well on a conventional desktop but whose resource consumption had a significant impact on a desktop virtualisation project's return on investment. So profile your applications first and make sure VDI is the most appropriate model, as other options exist, he said.
VMware and other virtualisation vendors allow you to offer users an ‘app store’ style catalogue of available applications, but they don’t all have to run in your data centre - some could run in the cloud, he suggested.
Booth warned that the best and fastest kit isn’t always optimal for desktop virtualisation. Instead, using larger quantities of generic units may give better results.
Shared vs siloed
Wakeman recommended separating virtual desktops from other workloads to prevent the peaks that regularly occur (eg, when people start work in the morning) from interfering with server applications. Booth concurred, warning that the unpredictability of desktop workloads can disrupt server applications running on the same hardware.
Energy consumption can be reduced by shutting down servers overnight - moving still-active virtual desktops onto a subset of the hardware minimises the number of physical servers required - or the idle servers can be put to other uses such as batch processing, a strategy being used successfully in the retail sector, Wakeman noted.
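The arithmetic behind that overnight consolidation is straightforward. In the hypothetical sketch below, all of the capacity and usage figures are assumptions; the point is only that once after-hours demand is known, the number of hosts that can be powered down or handed over to batch work falls out directly.

```python
# Illustrative overnight consolidation - all figures are assumptions.
import math

hosts = 12                 # hosts in the desktop pool
capacity_per_host = 90     # desktops each host can carry (assumed)
active_overnight = 75      # desktops still in use after hours (assumed)

hosts_needed = math.ceil(active_overnight / capacity_per_host)
print(f"Hosts free for shutdown or batch work overnight: {hosts - hosts_needed}")
```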
De Luca advocated a converged architecture for servers and virtual desktops, but stressed the importance of being able to segment and manage resources across the installation. Without that, it is necessary to create ‘silos’ to prevent one set of workloads from stealing resources from another.
He argued that virtualisation is needed across the stack for VDI projects, saying “our philosophy has been ‘virtualise everything’”.
For example, Hitachi can partition a blade or combine up to four blades into a single logical unit - “No one else is doing that for x86 compute,” he claimed. He also said it is essential to be able to measure workloads and move them around (dynamic reprovisioning) for efficiency. According to Rajkovic, data centres are increasingly being optimised to rearrange virtual machines on a policy basis (ie, without administrator attention), and this optimisation is being largely driven by desktop virtualisation.
Robbins warned that desktop virtualisation is very memory intensive, but most existing server architectures focus more on CPU performance, so there is a tendency to overprovision VDI in terms of CPU to obtain sufficient memory. This can be overcome by pooling virtual desktops with other workloads (eg, adding CPU-intensive but less memory-hungry applications to even out utilisation), or selecting blades that allow larger amounts of memory per processor.
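A hypothetical worked example makes the imbalance concrete. With the assumed figures below (illustrative only, not drawn from any vendor's sizing guide), memory caps the host at far fewer desktops than its CPUs could serve - exactly the stranded-CPU effect Robbins describes.

```python
# Why memory, not CPU, tends to be the binding constraint in VDI.
# All figures are assumptions for illustration only.
host_cores, desktops_per_core = 16, 8        # CPUs could serve ~128 desktops
host_ram_gb, ram_per_desktop_gb = 96, 2      # memory caps the host at 48

by_cpu = host_cores * desktops_per_core
by_ram = host_ram_gb // ram_per_desktop_gb
print(f"Host capacity: {min(by_cpu, by_ram)} desktops; "
      f"~{(1 - by_ram / by_cpu):.0%} of CPU capacity stranded")
```

Either remedy Robbins mentions - pooling in CPU-hungry workloads or choosing memory-dense blades - works by pushing the two capacity figures closer together.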
Mike Hawley, Network Technology Specialist, Bridge Point Communications, said it is good practice to physically separate the virtualisation infrastructure management software from the virtual desktop workloads, to the extent of using at least two blade enclosures. This provides better scalability without adding much to the total cost of the project. Some vendors (eg, Cisco) make managing multiple enclosures as easy as one, and starting out with free slots in an enclosure makes it easier to scale up as demand grows. Conversely, Hitachi’s reference architecture for XenDesktop implementations shows the infrastructure components running in the same chassis as the first set of desktop blades, and the first storage block accommodates one pool for the infrastructure and another for the virtualised desktops.
Whatever hardware you select, Booth pointed out that you’ll need to consider the capacity of your data centre in terms of space, power availability and other parameters. A significant project may mean a new data centre, or taking space in a shared centre.
Storage
There will be “an explosion in the amount of storage you need in the data centre to support VDI”, said Robbins. Desktop virtualisation vendors are getting better at separating user data from the rest of the system, and deduplication can reclaim up to 90% of storage space currently used, but the amount of data stored per user continues to grow.
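To see how those two effects interact, consider the hypothetical figures below: deduplication collapses the near-identical OS images dramatically, but the ever-growing per-user data dedupes far less well and soon dominates. The numbers are assumptions chosen only to illustrate the shape of the problem.

```python
# Rough storage sizing with and without deduplication - figures assumed.
desktops = 500
image_gb = 25        # OS plus applications per desktop image (assumed)
user_data_gb = 10    # per-user data on shared storage (assumed)
dedup_ratio = 0.9    # up to 90% reclaimed on near-identical OS images

raw_gb = desktops * (image_gb + user_data_gb)
# Assume user data is too varied to dedupe meaningfully.
dedup_gb = desktops * image_gb * (1 - dedup_ratio) + desktops * user_data_gb
print(f"Raw: {raw_gb} GB; after dedup on images: {dedup_gb:.0f} GB")
```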
While De Luca agreed that virtualisation makes it easier to manage and protect data for desktop applications, he warned that deduplication can adversely affect performance.
“We haven’t seen a lot of even virtual server implementations of primary dedupe” as it can cause performance bottlenecks, he said. Instead, he recommended thin provisioning (allocating physical storage space to virtual volumes only when it is actually required) as a way of achieving the desired storage reduction. The demand for ‘smart’ storage is growing in the light of rising storage prices caused by the flooding in Thailand, observed Robbins.
Storage performance can be improved by caching (typically in solid-state storage) frequently used data for quick delivery, and Hawley noted that such storage is being increasingly used in VDI projects. But tiered storage isn’t usually appropriate in such situations, De Luca suggested, as it tends to work against the delivery of desktop-equivalent (or better) performance.
He also warned that putting multiple virtual desktops on a single blade changes the I/O profile: what would be sequential reads from dedicated hardware effectively become random, as different desktops demand overlapping access to different files. Hitachi's VSP storage, with its switched architecture and large cache, is highly suited to such activity and outperforms bus- or controller-based architectures, he claimed.
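A rough sketch shows why this randomisation matters for spindle counts, and why the caching mentioned above is so attractive. The per-desktop IOPS figures and the per-disk rating below are assumptions; real values vary widely with workload and hardware.

```python
# Illustrative 'I/O blender' effect: aggregate random IOPS at steady
# state versus a morning boot storm. Per-desktop figures are assumed.
import math

desktops = 500
sas_disk_iops = 180    # random IOPS from one 15k rpm SAS drive (assumed)

for label, per_desktop in (("steady state", 12), ("boot storm", 60)):
    total = desktops * per_desktop
    print(f"{label}: {total} IOPS, ~{math.ceil(total / sas_disk_iops)} "
          f"spindles without SSD caching or a large array cache")
```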
De Luca suggested existing Hitachi storage customers could readily reconfigure their arrays for desktop virtualisation projects, though additional drives would be needed in most cases. Rajkovic recommended caution as problems may arise from repurposing existing equipment, and suggested it is generally better to select storage for the specific purpose.
Availability
Consideration should be given to availability, Booth said, since thousands of people could be left idle by a serious outage at the data centre. Desktop virtualisation calls for a different approach to the one used with servers, and he suggested either SAN replication or a ‘layered desktop’ model in which user profiles and data are applied to a generic desktop image - ‘on demand’ delivery of applications is more practical with VDI than with conventional desktops, he said.
Network
A virtual desktop project will usually call for some changes to connectivity between the data centre and the workplaces, said Robbins. It is possible that less bandwidth will be required if the old applications are particularly chatty, but usually greater capacity and better quality will be needed.
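As a very rough starting point for branch-office link sizing, the sketch below multiplies an assumed per-session figure by the number of concurrent users and adds headroom. Display-protocol bandwidth varies enormously with the workload (multimedia in particular), so the constants here are placeholders, not vendor ratings.

```python
# Rough WAN sizing for a branch office of virtual desktop users.
concurrent_users = 40
kbps_per_session = 150    # office tasks; multimedia can be far higher (assumed)
headroom = 1.3            # allowance for peaks, printing, file transfer (assumed)

required_mbps = concurrent_users * kbps_per_session * headroom / 1000
print(f"Suggested branch link: ~{required_mbps:.1f} Mbps, plus QoS for latency")
```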
Rajkovic pointed out that the elimination of replication traffic can reduce the load on the WAN, but suggested that LAN optimisation technology should be considered as part of desktop virtualisation projects. Gigabit or 10 gigabit Ethernet infrastructure is needed within the data centre, he added.
“There’s no way to centralise printing,” observed Rajkovic, warning that rasterising page images before sending them to a printer at the user’s location means lots of WAN traffic. Citrix’s universal print driver takes advantage of technologies such as XPS to reduce the amount of data that must be transferred, but when using low-end printers that expect the host to do all the rasterising it may be necessary to install print servers at those locations. WAN optimisation technology and setting appropriate policies to control printing can also reduce traffic, he suggested.
Similarly, Citrix has technology that reduces multimedia-related traffic by allowing Windows Media and Flash files to be played natively by the client. This gives the performance users expect while reducing the load on the data centre and WAN. The corollary is that such content must be rendered in the data centre when it is delivered to devices, such as iPads, that lack support for Flash and Windows Media. Citrix is working on similar support for H.264 and other codecs.
Rollout
Start with a pilot that involves a fraction of the planned load, and then scale up in the light of experience, Booth counselled, and Goldie noted this approach is being used in the Department of Defence’s Next Generation Desktop project. But care is necessary: De Luca mentioned an electricity retailer that ran a successful trial of VDI to deliver enterprise and field applications, but scalability issues arose when it was deployed to around 300 field workers.
If you prefer a hands-off approach, it’s possible to find a partner that will build and run VDI for you, said Wakeman. That’s “very much here and now”, he said. But while there is clear interest in a desktop as a service (DaaS) model that lets organisations rent as many virtual desktops as they need for as long as they need them, it hasn’t taken off yet. He expects technical innovations during 2012 to make it possible to offer DaaS with the flexibility and pricing needed for the concept to gain traction among SMEs. Rajkovic suggested providers already catering for specific vertical markets may be in the vanguard, as they can introduce existing customers to DaaS.
Most systems integrators that already offer desktop management services are considering adding DaaS to their portfolios, Rajkovic said. But how soon a particular organisation will be ready for DaaS depends largely on where it is on the refresh cycle. Those that have already upgraded to Windows 7 probably won’t be interested for some time, but those currently planning the upgrade are considering it, he said, predicting that DaaS will be in widespread use in 12 to 18 months.
An important difference between typical cloud applications and a virtualised Windows desktop is that the latter requires a lower-latency connection between the data centre and the user. While web applications can be successfully delivered from offshore, that’s not the case for virtualised desktops.
“It’s really going to be hard for international players to compete,” Robbins said, unless they are prepared to establish onshore data centres.
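Simple physics backs this up. Light in optical fibre propagates at roughly 200,000 km/s, so distance alone sets a floor under round-trip time before any processing or queuing delay, and interactive display protocols become noticeably sluggish once latency climbs much past 100 ms. The route distances below are approximate.

```python
# Minimum round-trip time imposed by distance alone, over fibre.
FIBRE_KM_PER_MS = 200.0   # ~200,000 km/s expressed per millisecond

for route, km in (("Sydney to Singapore", 6300),
                  ("Sydney to US west coast", 12000),
                  ("metro, onshore", 100)):
    rtt_ms = 2 * km / FIBRE_KM_PER_MS
    print(f"{route}: >= {rtt_ms:.0f} ms RTT before any other delay")
```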
“We strongly recommend engaging with a systems integrator partner of Microsoft or Citrix Consulting” for a large deployment, said Booth. “They are usually high profile [and] they’re not simple projects.” Hardware vendors are often brought into such projects due to the importance of design considerations, he added.
“We take a consulting approach,” said Rajkovic. That includes eliciting a detailed understanding of user demographics, defining the service they need and establishing a cost model. He added that plenty of other companies can provide expert advice on desktop virtualisation projects. Existing in-house Citrix expertise may be transferable, but additional skills are needed for such projects; to help, Citrix offers training programs and a certification scheme.
To close on a practical note, Booth advised against using an existing standard desktop image with virtual desktops: density (the ratio of virtual desktops to physical servers) is an important consideration, so you should disable or remove any inessential services and other features.
Conclusion
Desktop virtualisation is complex, Booth said, and it requires a significant investment. If it is adopted for reasons such as improved agility, flexibility, accessibility and security - rather than for standardisation, to follow a perceived trend or to reduce desktop TCO - “you can get a good return on your investment”.
“It’s not going to work for everybody,” he warned, so it is important to understand the technology and “don’t get caught up in the hype”.