Five assumptions about storage virtualisation - and why they're wrong

By Simon Elisha
Wednesday, 18 March, 2009


Storage virtualisation has been discussed, debated and hyped for some time now, and has caused a great deal of excitement in the storage industry with its promise to mask storage complexity, improve efficiency and provide companies with significant IT cost savings. Its buzzword status has accelerated the pace of virtualisation announcements from a myriad of storage vendors, each touting its own offering.

Because of the way that storage is packaged, organisations buy up to 75% more capacity than they actually need, which means the data centre is never fully utilised. Given the pressure in today’s economic climate for CIOs to cut costs while growing the business, the business case for deploying virtualisation is stronger than ever. Organisations that deploy storage virtualisation typically realise a wide range of benefits, such as increased utilisation, reduced total cost of ownership, simplified management, and lower power consumption, cooling needs and floor space requirements.

Myth One: It’s too complex

For many organisations, it is common to use a single-tier storage architecture. This stores all data in a single pool, purchased at a general rate, with an expected growth rate applied uniformly to all applications and data in the pool – regardless of each application’s resource requirements or business value. High-value data thus tends to receive insufficient resources, while low-value data and archive data enjoy resources far beyond what is required. As a result, storage is overprovisioned overall yet poorly utilised, and it commands a relatively high CAPEX.

While single-tier storage may initially seem like the more straightforward option for businesses, tiered storage presents a superior alternative as it assigns the right applications to the right storage tier for current and future access requirements. Since the tiers of storage represent various service levels and costs, stratifying storage across lower cost tiers (as opposed to a single, higher cost tier) will result in less CAPEX spending in upcoming years.
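To make the CAPEX point concrete, here is a back-of-the-envelope comparison in Python. The per-terabyte prices and the 20/30/50 data split are purely illustrative assumptions, not vendor figures; the sketch simply contrasts buying 100 TB entirely as tier-one capacity with stratifying the same 100 TB across three tiers.

# Hypothetical cost comparison: single-tier vs three-tier storage.
# All figures (prices per TB, data mix) are illustrative assumptions.

TOTAL_TB = 100
PRICE_PER_TB = {"tier1": 3000, "tier2": 1200, "tier3": 400}   # assumed $/TB

# Single tier: everything lands on premium (tier-1) disk.
single_tier_cost = TOTAL_TB * PRICE_PER_TB["tier1"]

# Tiered: only the high-value, frequently accessed data stays on tier 1.
tier_split = {"tier1": 0.20, "tier2": 0.30, "tier3": 0.50}    # assumed data mix

tiered_cost = sum(TOTAL_TB * share * PRICE_PER_TB[tier]
                  for tier, share in tier_split.items())

print(f"Single-tier CAPEX: ${single_tier_cost:,.0f}")
print(f"Tiered CAPEX:      ${tiered_cost:,.0f}")
print(f"Saving:            {1 - tiered_cost / single_tier_cost:.0%}")

On these assumed numbers the tiered design costs roughly 60% less up front; the exact figure obviously depends on the real data mix and pricing.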

Tiered storage is not simply storing data on various silos of storage from different vendors, each with its own operating console and functions. Multi-tiered storage implies a centralised pool of storage, with integrated but segmented storage tiers, all controlled by a unified storage architecture within which data can be promoted or demoted seamlessly between tiers and storage arrays. Furthermore, coupling tiered storage with virtualisation allows data to be moved dynamically (promoted or demoted between tiers) by automated software, helping organisations align their costs more accurately with business requirements without incurring the risk of a manual data migration.
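As a concrete illustration of automated, policy-based data movement, the Python sketch below promotes or demotes volumes according to their recent access frequency. The tier names, thresholds and the Volume structure are assumptions made for this example; real controller software applies far richer policies and moves the data in the background without disruption.

from dataclasses import dataclass

# Tiers ordered fastest (most expensive) to slowest (cheapest).
TIERS = ["ssd", "fc_disk", "sata_archive"]

@dataclass
class Volume:
    name: str
    tier: str
    accesses_per_day: int

def target_tier(vol: Volume) -> str:
    """Pick a tier from recent access frequency (assumed thresholds)."""
    if vol.accesses_per_day > 1000:
        return "ssd"
    if vol.accesses_per_day > 50:
        return "fc_disk"
    return "sata_archive"

def rebalance(volumes):
    """Promote or demote any volume whose activity no longer matches its tier."""
    for vol in volumes:
        desired = target_tier(vol)
        if desired != vol.tier:
            move = "promote" if TIERS.index(desired) < TIERS.index(vol.tier) else "demote"
            print(f"{move} {vol.name}: {vol.tier} -> {desired}")
            vol.tier = desired

rebalance([
    Volume("erp_db", "fc_disk", accesses_per_day=4200),     # promoted to ssd
    Volume("file_share", "ssd", accesses_per_day=30),       # demoted to fc_disk
    Volume("backups", "sata_archive", accesses_per_day=2),  # stays put
])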

Myth Two: It’s just a tool for data migration

Typically, organisations use their storage arrays heavily for seasonal activities, such as end-of-month processing, and as they cope with changes in their environments over time. Without storage virtualisation, data is housed in its own storage silo on the network, which means that organisations have to invoke multiple, host-driven workarounds whenever they need to move data from one tier to another. This usually involves some application downtime, is time intensive and is a prospect the IT team generally avoids.

Storage virtualisation offers more than just data migration – it also helps organisations do three important things that create an economically and ecologically superior data centre: reclaim, utilise and optimise. This maturity model lets IT pool all its storage, apply it as needed to meet application requirements, and manage it using one common set of software and processes. As the environment matures, IT moves from simple consolidation and migration of data, to right-sized tiers and data mobility, to automated, policy-based (or even content-based) assignment of data to its optimal tier.

In particular, controller-based virtualisation provides organisations with the ability to manage their existing heterogeneous, multi-vendor storage assets as a single pool of storage and through a single pane of glass. Virtualisation enables cost-lowering functions – reducing hardware costs, SAN infrastructure costs and environmental costs – by providing a single management interface for all virtualised storage hardware and extending the useful life of all assets. It also offers the ability to classify data and then move it to the right tier of storage efficiently and non-disruptively – ensuring applications are always available.
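Conceptually, the ‘single pool, single pane of glass’ idea can be pictured with the short Python sketch below, in which a virtualising controller aggregates two hypothetical multi-vendor arrays into one capacity pool and provisions volumes from whichever array has room. The class names and the first-fit placement policy are assumptions for illustration only, not any vendor’s implementation.

class BackendArray:
    """One physical array sitting behind the virtualisation controller."""
    def __init__(self, name, capacity_tb):
        self.name = name
        self.capacity_tb = capacity_tb
        self.used_tb = 0.0

    @property
    def free_tb(self):
        return self.capacity_tb - self.used_tb

class VirtualPool:
    """Single management view over several heterogeneous, multi-vendor arrays."""
    def __init__(self, arrays):
        self.arrays = arrays

    def total_free_tb(self):
        return sum(a.free_tb for a in self.arrays)

    def provision(self, size_tb):
        """Carve a virtual volume from whichever array has room (first fit)."""
        for array in self.arrays:
            if array.free_tb >= size_tb:
                array.used_tb += size_tb
                return f"{size_tb} TB provisioned on {array.name}"
        raise RuntimeError("pool exhausted")

pool = VirtualPool([BackendArray("vendor_a_modular", 50),
                    BackendArray("vendor_b_enterprise", 200)])
print(pool.provision(40))
print(pool.provision(30))   # first array is nearly full, so this lands on the second
print(f"{pool.total_free_tb()} TB still free across the pool")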

Myth Three: It’s just for servers

Although server virtualisation and storage virtualisation are usually viewed separately, the clear trend is towards a merging of the two technologies. Server virtualisation is rapidly becoming a top priority for IT managers looking to consolidate their hardware resources, increase utilisation and align IT with business requirements. The primary benefits of server virtualisation include lower costs, improved efficiency, higher availability and non-disruptive upgrades. Storage virtualisation has the same benefits – only, of course, in the storage domain.

Because servers and storage arrays must work together, many organisations that implement server virtualisation soon find that storage becomes the bottleneck. As a result, a combined strategy of virtual servers and virtual storage has the greatest potential return in simplifying the management and provisioning of IT resources. Server virtualisation technology, together with storage pooling, produces a powerful set of improvements: increased flexibility, reduced power and cooling, and more efficient delivery of IT resources to end users.

Furthermore, the increased demand for a combined storage and server virtualisation strategy is driven by another current trend: green storage. As companies consolidate their servers and virtualise them, it makes sense to virtualise data as well – not only to save physical space and improve asset utilisation, but also to reap savings in data centre power and cooling. With server and storage virtualisation also complementing each other in enabling application and data mobility, the trend of deploying the two together is set to continue over the next couple of years.

Myth Four: It poses a security threat (and single point of failure)

Today, storage has become a network service, with storage systems typically living on switched Fibre Channel or IP networks and being shared by multiple hosts. As a result, the security concerns commonly associated with storage virtualisation include denial or degradation of service.

However, virtualisation in a storage controller is not dependent on a network for connectivity. With controller-based virtualisation, dynamic partitions can be created that limit the use of shared resources such as cache, so organisations can give more cache and higher port priority to critical applications during their peak periods. Dynamic provisioning (also called thin provisioning) allows storage capacity to be allocated to servers as and when it is required. In storage area networks (SANs) in which several applications share access to one storage array, thin provisioning enables disk space to be allocated economically, based on the minimum capacity each application requires at any one time. There is no longer any need to allocate physical storage far beyond an organisation’s current requirements simply in anticipation of future needs.
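To illustrate the partitioning idea, the short Python sketch below reserves slices of a controller’s shared cache for different application groups, so a noisy neighbour cannot starve a critical workload. The 256 GB cache size, group names and percentages are illustrative assumptions only.

# Dynamic cache partitioning, sketched: each application group is guaranteed
# a share of the controller's cache. All sizes and names are assumptions.

TOTAL_CACHE_GB = 256

partitions = {
    "erp_production":  0.50,   # guaranteed half the cache for month-end peaks
    "test_and_dev":    0.15,
    "everything_else": 0.35,
}

assert abs(sum(partitions.values()) - 1.0) < 1e-9, "shares must total 100%"

for group, share in partitions.items():
    print(f"{group:16s} {share:>4.0%} -> {TOTAL_CACHE_GB * share:.0f} GB reserved")

# Re-weighting for a seasonal peak is a policy change, not a hardware change:
partitions.update({"erp_production": 0.65, "everything_else": 0.20})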

Lastly, when virtualisation runs in the controller, the single-point-of-failure concern is mitigated by the hardware itself: enterprise controllers contain many processors – some more than 128 – so if one processor fails, the others continue to serve I/O. A more common task is upgrading processors and/or software, which can be done in batches – say five, 10 or 20 at a time – while still leaving plenty of performance for applications.

In a nutshell, thin provisioning enables organisations to allocate ‘virtual’ disk storage based on anticipated future needs without dedicating physical disk upfront. If additional physical disk is needed later, capacity can be purchased and installed without disrupting mission-critical applications.
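The mechanism can be sketched in a few lines of Python: the host is shown the full virtual capacity up front, but physical extents are consumed only when blocks are actually written. The extent counts and volume sizes below are illustrative assumptions.

class PhysicalPool:
    """The real disk behind the scenes, divided into allocation extents."""
    def __init__(self, physical_extents):
        self.free = list(range(physical_extents))

    def allocate_extent(self):
        if not self.free:
            raise RuntimeError("pool exhausted - time to install more disk")
        return self.free.pop()

class ThinVolume:
    """A volume whose advertised size is decoupled from physical allocation."""
    def __init__(self, virtual_gb, pool):
        self.virtual_gb = virtual_gb   # what the server is told it has
        self.pool = pool
        self.mapped = {}               # virtual extent -> physical extent

    def write(self, virtual_extent):
        # Back a virtual extent with physical capacity on first write only.
        if virtual_extent not in self.mapped:
            self.mapped[virtual_extent] = self.pool.allocate_extent()

pool = PhysicalPool(physical_extents=1000)
vol = ThinVolume(virtual_gb=10_000, pool=pool)   # 10 TB promised to the host
for extent in range(250):                        # ...but only 250 extents written so far
    vol.write(extent)
print(f"physical extents consumed: {len(vol.mapped)} of 1000")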

Myth Five: It means vendor lock-in

With virtualisation approaches that require complex logical unit number (LUN) mapping, vendor lock-in can be a concern. This is partly because, once you have mapped LUNs from several heterogeneous systems, you do not want to change the mapping or repeat the exercise – particularly in a single-tier storage architecture. However, not all virtualisation techniques require this level of complex LUN mapping and the corresponding LUN management overhead. If an organisation uses multi-tiered storage with controller-based virtualisation, it can map LUNs directly to the storage controller. Once that is complete, no mapping tables or ongoing LUN management are required.

The simplicity of this model means that an organisation can ‘back out’ of a virtualisation strategy if there is a change in direction. Further, if part of an organisation is sold off, the storage related to that part of the business can be handed to the new owners without a complex unmapping process and without requiring the receiving organisation to implement that particular virtualisation technology.
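The difference between the two approaches can be sketched roughly as follows (Python; the array and LUN names are hypothetical). An appliance-style design keeps a translation table in the data path that must be maintained and eventually unwound, whereas a controller-based design simply presents external LUNs alongside its own, so ‘backing out’ amounts to handing a LUN back to its original array.

# Appliance/network-based approach: a virtual-to-physical translation table
# sits in the data path and must be maintained (and later unwound).
appliance_mapping_table = {
    "virtual_lun_0": ("vendor_a_array", "lun_7"),
    "virtual_lun_1": ("vendor_b_array", "lun_3"),
}
print(f"appliance keeps {len(appliance_mapping_table)} table entries to maintain")

# Controller-based approach: external LUNs are discovered and presented by the
# virtualising controller alongside its internal LUNs - no separate table for
# the administrator to manage.
class VirtualisingController:
    def __init__(self):
        self.presented_luns = []

    def attach_external_lun(self, array, lun):
        self.presented_luns.append(f"{array}:{lun}")

    def detach_external_lun(self, array, lun):
        # 'Backing out' is simply handing the LUN back to its original array.
        self.presented_luns.remove(f"{array}:{lun}")

ctrl = VirtualisingController()
ctrl.attach_external_lun("vendor_b_array", "lun_3")
print(ctrl.presented_luns)
ctrl.detach_external_lun("vendor_b_array", "lun_3")   # array handed back intact
print(ctrl.presented_luns)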

Demystifying the hype

There is no ‘silver bullet’ – no one solution – for virtualisation. Rather, it must be viewed as an enabling technology that is integrated as part of a whole storage network paradigm, including server, storage network, storage arrays and storage management software. Each element in the storage network has its own management product and/or interface, which only adds to the complexity as networks grow in size, connectivity and heterogeneity.

Simply put, organisations cannot keep throwing more storage at each user or business need as a point solution. They need to balance high business demands against tight budgets, contain costs and ‘do more with less’. Virtualisation enables organisations to utilise and optimise their storage assets, while helping CIOs to cut costs and grow the business.

Simon Elisha is Chief Technologist Australia New Zealand at Hitachi Data Systems. He specialises in both hardware and software enterprise storage solutions including data protection, storage management, high availability, data archival and compliance.

Elisha performs a technology “evangelist” role at Hitachi, communicating the business value of storage technologies to customers. His wide-ranging experience in systems architecture and infrastructure design, combined with extensive business consulting experience, gives him deep insight into customer issues.

 
