Culling the confusion from the SDS hype
Software defined storage promises many benefits in architectural simplicity, scaling and mobility, but there are significant challenges as well.
There’s been a lot of buzz lately about software defined storage (SDS). As with many new technologies, there is a fair amount of confusion to go along with the hype.
Just what constitutes SDS, sometimes called ‘software-based storage’, ‘software-enabled storage’ and even ‘software-led storage’, has been open to interpretation. As hardware has become commoditised, box vendors have proven eager to brand their products as ‘software defined’.
The definition that most, if not all, agree on is that SDS is an architectural model that relies on software to deliver storage functionality in a hardware-agnostic manner. It's a definition broad enough to encompass a wide range of approaches. A narrower definition holds that SDS covers only those architectures in which storage runs on the same nodes as server and application software.
SDS's advantage is architectural simplicity, along with the cost savings that often result from streamlining infrastructure. SDS also helps with scaling and mobility. Beyond that, particular use cases can benefit from leveraging existing compute resources to enhance storage system functionality.
The use cases that stand to gain the most are remote offices, which are usually constrained by budget and space, and small and medium-sized businesses, which operate under similar limitations and are often looking for easy-to-deploy solutions.
SDS’s cost benefits are magnified in large web-scale operations such as Amazon’s Simple Storage Service and Google’s Cloud Storage. Amazon and Google have developed multipetabyte object storage repositories running internally developed software designed to run on commodity hardware.
SDS is not without its challenges. Chief among them are application and data availability. In a traditional storage system, high availability is achieved by the use of dual controllers. Dual controllers ensure that when one controller fails, the other will be ready to take over. SDS requires triple mirroring and/or erasure coding techniques to arrive at the same level of availability. This creates significant system overhead and decreases the system’s responsiveness to applications.
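To make the trade-off concrete, here is a minimal sketch of single-parity erasure coding, the simplest member of the family of techniques mentioned above. Production SDS systems typically use richer schemes such as Reed-Solomon with multiple parity blocks; the example (hypothetical block contents, illustration only) shows just the core idea: any one lost data block can be rebuilt from the surviving blocks plus parity, at the cost of extra computation on every write and rebuild.

```python
def make_parity(blocks: list[bytes]) -> bytes:
    """XOR equal-length data blocks together to produce one parity block."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def rebuild(surviving_blocks: list[bytes], parity: bytes) -> bytes:
    """Recover a single missing block by XORing the parity with the survivors."""
    return make_parity(surviving_blocks + [parity])

# Hypothetical data blocks, one per storage node.
data = [b"node-A..", b"node-B..", b"node-C.."]
parity = make_parity(data)

# Simulate losing node B and rebuilding its block from A, C and parity.
recovered = rebuild([data[0], data[2]], parity)
assert recovered == data[1]
```

This scheme survives only a single failure; tolerating more failures requires additional parity blocks, which is where the system overhead described above comes from.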
SDS's mobility and scalability advantages can be eroded as nodes are added, due to data locality issues: if the data an application accesses is not local to its node, performance degrades.
With SDS, storage software may need to be configured for a wide range of hardware devices and configurations, requiring IT to be aware of individual compatibility, support matrices and any underlying limitations.
Customers considering SDS need to carefully evaluate various factors, especially their need for performance, availability and scaling. It’s very likely that modern data centres will incorporate SDS into their environments alongside specialised storage solutions in the not-too-distant future.