Is your 'head in the sand' over disaster recovery?
By Benjamin Hodge, Technical Services Manager, KEMP Technologies
Thursday, 29 October, 2015
Despite the critical role played by information technology in the business world, many Australian organisations are still ignoring the importance of having a comprehensive strategy for disaster recovery (DR).
Rather than ensuring their infrastructure can withstand failures and outages, they prefer to take a ‘head-in-the-sand’ approach. Many think an effective DR strategy is simply too complex and costly and so instead adopt a policy of business as usual. In many cases, DR has become something that is easy to put off until ‘next year’.
It’s a baffling situation when you consider the potential downside, and you don’t have to look very far to find examples of companies that have suffered significant disruption and financial loss. The causes can be anything from floods, fires or storms to malicious hacking attacks or prolonged network outages.
With an increasing proportion of business processes being carried out electronically, even a disaster that causes disruption for a matter of hours can have a significant impact on the bottom line. Any organisation that believes it’s immune to such incidents is kidding itself.
The challenge of effective DR
For many organisations, the biggest perceived challenge with DR is the complexity involved. They believe an effective DR capability requires duplicating everything from servers and storage to applications and network links.
Creating such a redundant architecture can be expensive and difficult. A second data centre is likely to be needed, or capacity leased in a third-party facility. Alternative network links must be installed, as must redundant routers and switches to redirect traffic should a disaster occur.
Having these extra links in place purely for DR can add significant cost for little or no day-to-day business benefit. Their capacity can sit unused most of the time, yet they still attract leasing charges from telecom carriers or network providers.
Core applications will also need to be duplicated in the second site, with the ability to bring them online quickly should the primary site fail. This can result in extra software licensing costs and the need to manage multiple instances of the same application.
Data replication is another critical requirement. To assure an effective DR capability, all data required for daily activity needs to be copied to the second site in real time. This requires an investment in both storage infrastructure and backup tools.
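As a deliberately naive illustration of the idea, the Python sketch below copies changed files across to a DR site on a short cycle. The paths are hypothetical, and in practice this job is handled by storage arrays, database replication or dedicated backup tools rather than a script like this.

```python
# Naive sketch of near-real-time replication to a DR site (illustrative only).
# The paths are hypothetical; real deployments use storage or database replication.
import shutil
import time
from pathlib import Path

SOURCE = Path("/data/production")       # hypothetical primary volume
DEST = Path("/mnt/dr-site/production")  # hypothetical mount of the DR site's storage

last_seen = {}  # file path -> last modification time already copied
while True:
    for f in SOURCE.rglob("*"):
        if f.is_file() and last_seen.get(f) != f.stat().st_mtime:
            target = DEST / f.relative_to(SOURCE)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)           # copy the changed file across
            last_seen[f] = f.stat().st_mtime
    time.sleep(5)  # near real time, not truly synchronous
```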
In some cases, organisations might find they have already invested a considerable amount of money in equipment without actually achieving a workable DR infrastructure. For example, generators might have been placed in the basement to provide power during grid outages, only to be found not to work when the site flooded.
DR challenges are also being created by the increasing use of mobile devices to access core business applications and data. While this can readily be catered for during normal operations, it becomes difficult if the primary data centre runs into problems.
In many instances, DR sites tend to be ‘daisy chained’ to the primary data centre, meaning mobile users can only reach applications and data in the secondary site by going through the main site. If that path is cut during a main site outage, staff will have no way to keep working.
A different approach
To overcome these challenges, increasing numbers of organisations are taking a different approach to improving their DR capabilities. Rather than looking at DR from an infrastructure perspective, they are shifting their attention to applications.
An applications-focused DR approach makes use of a technique called global site load balancing (GSLB), which can ensure continuity of services during times of disruption.
A GSLB appliance achieves this by taking ownership of the namespaces for an organisation’s core applications. It answers users’ name lookups with the IP address of an appropriate site and monitors the health of the application at each site.
If an outage occurs in one location, the GSLB appliance can immediately redirect users to another instance of the application running at a different location. In most cases, users won’t even be aware that anything has changed.
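To make the mechanism concrete, the Python sketch below shows the general idea of health-aware resolution with two sites per application. It is illustrative only, not KEMP’s implementation; the hostnames, site addresses and the /health probe endpoint are hypothetical, and a real GSLB appliance performs this role by answering DNS queries authoritatively rather than running application code.

```python
# Illustrative sketch of GSLB-style, health-aware name resolution.
# Hostnames, site addresses and the /health endpoint are hypothetical.
import urllib.request

# Per-application instances, listed in order of preference (primary first).
SITES = {
    "crm.example.com": ["203.0.113.10", "198.51.100.10"],  # primary, DR site
    "erp.example.com": ["203.0.113.20", "198.51.100.20"],
}

def is_healthy(address, timeout=2.0):
    """Probe a health-check endpoint on an application instance."""
    try:
        with urllib.request.urlopen(f"http://{address}/health", timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def resolve(hostname):
    """Return the address of the first healthy instance for this hostname."""
    for address in SITES[hostname]:
        if is_healthy(address):
            return address
    raise RuntimeError(f"No healthy instance available for {hostname}")

# A user's lookup is answered with whichever site is currently up.
print(resolve("crm.example.com"))
```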
As well as ensuring business continuity, this fundamentally different approach to DR also removes significant costs. Total redundancy is no longer required within each site, removing the need for redundant servers and networking equipment.
Links between sites need only handle data replication rather than application workloads, which reduces the total bandwidth required. This, in turn, reduces telecommunications and network costs.
A GSLB appliance is also application-agnostic and can be configured to work with almost any application used within an organisation. Once in place, it continually monitors the status of each application and shifts workloads as required.
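In simplified form, that ongoing monitoring can be pictured as a loop that re-checks each application and shifts the ‘active’ site whenever a health check fails. Again, the applications, addresses and probe below are hypothetical stand-ins for what a GSLB appliance does internally.

```python
# Illustrative monitoring loop that shifts the 'active' site for each application
# as health checks pass or fail. All names here are hypothetical stand-ins.
import time
import urllib.request

APPLICATIONS = {
    "crm.example.com": ["203.0.113.10", "198.51.100.10"],  # primary, DR site
    "erp.example.com": ["203.0.113.20", "198.51.100.20"],
}
active_site = {}  # hostname -> address users are currently directed to

def probe(address):
    """Health check; any per-application test could be plugged in here."""
    try:
        with urllib.request.urlopen(f"http://{address}/health", timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False

while True:
    for hostname, addresses in APPLICATIONS.items():
        healthy = next((a for a in addresses if probe(a)), None)
        if healthy and active_site.get(hostname) != healthy:
            print(f"{hostname}: now directing users to {healthy}")
            active_site[hostname] = healthy
    time.sleep(30)  # re-check every application every 30 seconds
```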
Overall, a GSLB approach to disaster recovery can remove infrastructure complexity and result in significant cost savings. It’s a simple yet very effective approach to the DR challenge faced by every organisation.