Four myths of storage virtualisation


By Adrian De Luca, Director, SNIA ANZ
Thursday, 15 March, 2012


In recent years, server virtualisation has hogged the IT industry’s spotlight. All the while, storage virtualisation has remained in the background, slowly developing and maturing. However, there are still misconceptions surrounding the technology.

As the allure of cheap, commodity x86 compute became too much for many organisations to resist, a new problem emerged in the data centre: servers sprawled in all directions, making operations a nightmare. Server virtualisation came to the rescue, introducing a software layer between the hardware and the operating system to pool computing resources (CPU, memory and network) and manage them more intelligently. The result was greater consolidation - more workloads on fewer physical machines - and therefore higher utilisation.

Concurrently, the data folks were brewing their own virtualisation capabilities to do the same for storage capacity. The reason storage virtualisation hasn't received the same attention as its server counterpart is that IT organisations were having bigger problems managing their compute than managing their data - but that has all changed.

According to a Gartner study conducted last year into why Australian data centres are in crisis, the single largest contributor to the problem was data growth, with 59% of respondents saying that managing storage was their biggest challenge.

Much as server virtualisation has made compute resources more efficient, storage virtualisation can do the same for storage resources - driving up utilisation, moving data to the most cost-appropriate tier of media and simplifying operations and management. For those of you who have investigated the technology but resisted implementing it, here are four notions about storage virtualisation that mature technology has turned into myth:

1. Storage virtualisation is complex and generally difficult to implement. Many early applications of storage virtualisation were appliance based, which involved deploying dedicated devices in the SAN data path between servers and storage. Deploying these solutions required a fair degree of planning and project management, outages to install them, new skills to manage them and greater effort to maintain them once operational. Today, virtualisation functionality is embedded directly into storage arrays and SANs, can be introduced easily as part of a technology refresh and can be managed with accompanying software tools.

2. If I have virtualisation in the server, I don't need it in my storage. Modern storage arrays do far more than ship data between servers and disks. They can manage local and long-distance replicas, prioritise workloads, migrate data between different tiers of media, and some even store data more intelligently by deduplicating, compressing or encrypting it. Implementing virtualisation in the storage layer actually complements server virtualisation, offloading data services that would otherwise consume host CPU cycles (the deduplication sketch after this list shows one such service).

3. Storage virtualisation is expensive. Between the hardware, software and professional services required to get storage virtualisation up and running, some organisations have questioned whether the costs are worth it. Like all things in life, some hard work up front will pay off in spades down the track. The fact is, most IT environments without storage virtualisation have a utilisation rate of 30-40% once RAID overhead, stranded storage and replicas are taken into account. By pooling storage resources and exploiting capabilities like thin provisioning and auto tiering, utilisation can easily be increased to more than 80% (the arithmetic is sketched after this list).

4. Storage virtualisation = vendor lock-in. Another fear people have about implementing a particular vendor's virtualisation technology is not being able to move away from it in the future if they no longer want it. Some older solutions on the market, particularly appliance offerings, intercept and store data in proprietary ways, making it difficult if not impossible to remove them once they are operational. Most modern solutions implement virtualisation in an open, non-proprietary way using industry-standard protocols, avoiding the need to maintain internal mapping tables. This means you can easily remove a vendor's storage virtualisation technology, introduce a new one or connect the underlying storage directly to hosts, while maintaining complete data coherence and integrity (the mapping-table sketch after this list illustrates the difference).
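To make myth 2 concrete, here is a minimal sketch of block deduplication, one kind of data service an array can offload from its hosts. It is an illustrative model only - fixed-size chunks and SHA-256 fingerprints are simplifying assumptions for the example, and real arrays implement this at the block layer with far more machinery:

```python
# Minimal, illustrative model of block deduplication - the kind of
# data service a storage array can offload from the host. Fixed-size
# chunking and SHA-256 fingerprints are simplifying assumptions.
import hashlib

chunk_store: dict[str, bytes] = {}   # fingerprint -> unique chunk


def write(data: bytes, chunk_size: int = 4096) -> list[str]:
    """Store each unique chunk once; return the fingerprint 'recipe'
    needed to reassemble the data later."""
    recipe = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        fp = hashlib.sha256(chunk).hexdigest()
        chunk_store.setdefault(fp, chunk)   # duplicate chunks cost nothing
        recipe.append(fp)
    return recipe


def read(recipe: list[str]) -> bytes:
    return b"".join(chunk_store[fp] for fp in recipe)


# Two identical 8 KB writes consume only one 4 KB unique chunk.
r1 = write(b"x" * 8192)
r2 = write(b"x" * 8192)
assert read(r1) == read(r2) == b"x" * 8192
print(f"unique chunks stored: {len(chunk_store)}")   # -> 1
```

Because both writes contain the same bytes, only one unique chunk is stored no matter how many times the data is written - work the host never has to do.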
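The arithmetic behind myth 3 can also be sketched. Every figure below is a hypothetical assumption chosen to illustrate the 30-40% and 80%+ numbers above, not data from any real environment:

```python
# Illustrative utilisation arithmetic for myth 3. All figures are
# hypothetical assumptions, not measurements from any real site.

usable_tb = 100.0          # usable capacity after RAID overhead

# --- Siloed arrays, no storage virtualisation ---
stranded_tb = 30.0         # capacity trapped in over-provisioned silos
full_replicas_tb = 25.0    # full-copy clones and replicas
primary_tb = 35.0          # primary data actually stored

print(f"Siloed utilisation: {primary_tb / usable_tb:.0%}")        # 35%

# --- Pooled, thin-provisioned, auto-tiered ---
# Pooling reclaims the stranded 30 TB, thin provisioning consumes
# space only as data is written, and space-efficient snapshots
# replace the full copies, so primary data can grow into the
# reclaimed capacity.
snapshots_tb = 8.0         # pointer-based snapshots instead of clones
headroom_tb = 10.0         # free space kept for growth
pooled_primary_tb = usable_tb - snapshots_tb - headroom_tb        # 82 TB

print(f"Pooled utilisation: {pooled_primary_tb / usable_tb:.0%}") # 82%
```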
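Finally, a hypothetical model of the lock-in described in myth 4. A proprietary in-band appliance that remaps every virtual block to an arbitrary physical location holds the only record of where data lives; a standards-based layer that presents virtual LUNs one-to-one over standard protocols can simply be removed:

```python
# Hypothetical model of the lock-in described in myth 4. A proprietary
# in-band appliance remaps every virtual block to an arbitrary physical
# location, so its private table is the only record of where data lives.

proprietary_map = {          # virtual block -> (array, physical block)
    0: ("array_a", 9120),
    1: ("array_b", 17),      # data scattered across arrays
    2: ("array_a", 4401),
}


def locate_proprietary(vblock: int) -> tuple[str, int]:
    """Every lookup needs the appliance's private table."""
    return proprietary_map[vblock]


def locate_passthrough(vblock: int) -> tuple[str, int]:
    """Standards-based layer: the virtual LUN maps 1:1 onto a standard
    LUN, so hosts can be re-pointed at the array directly."""
    return ("array_a", vblock)


# Removing the pass-through layer changes nothing about where data is;
# removing the appliance orphans the data unless its table is migrated.
print(locate_proprietary(1))    # ('array_b', 17)
print(locate_passthrough(1))    # ('array_a', 1)
```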

Virtualisation principles are now widely accepted and the products widely deployed, and with the evolution of storage virtualisation over recent years, the technology can be considered ready for prime time.
