Simplifying the virtual data centre
Tuesday, 08 February, 2011
Virtualisation technology is a growing trend supporting a shift in business priorities: data centres must deploy new applications quickly and efficiently, provide fast and reliable ‘round-the-clock’ access to information, and meet or exceed stringent service levels with zero downtime, all while driving down costs by maximising existing investments. Graham Schultz builds the business case for virtualisation of the data centre and explains the importance of the data centre network.
In a 2010 Gartner survey of more than 1600 CIOs across the globe, participants were asked to name their top business priorities today and three to four years out. The results showed the focus today is on improving business processes and cost savings. Looking further out, improving productivity, driving innovation, gaining competitive advantage and attaining new customers all rose in priority. These priorities require businesses to capitalise on new opportunities and respond to increasing global competition.
The business case for virtualisation is built on benefits such as reduced capital and operating expenses: organisations need fewer physical servers, making the server infrastructure less costly and easier to manage. With fewer machines, companies also have reduced power, cooling and data-centre footprint requirements while improving security. Virtualisation also improves the availability of applications and services, as enterprises can more easily recover virtual machines (VMs) from failure.
However, companies can only realise these benefits if their virtual environments are properly managed. In this complex and expanding environment, companies need a fast, reliable data-centre network to ensure optimal performance. The network must also extend to the storage systems that support the virtual environment - whether the systems are virtual, physical, or a mix of the two. All this with no additional funding or staff, while planning for continued growth.
The virtualisation landscape
Scaling up with limited staff and resources is often cited as the top virtualisation challenge by network managers. Other critical obstacles include increased infrastructure complexity and ensuring high availability. At some level, these challenges point to the need for a high-performance, resilient data centre network that’s easy to manage.
The need for reduced complexity will only grow as enterprises continue to expand their use of virtualisation technology, and a recent Network World poll indicates companies are doing just that. On average, IT leaders expect to virtualise more than one-third of their server infrastructures within a year. While most organisations started by virtualising applications that were not business-critical, they are now moving beyond these. In fact, they expect nearly one-third of business-critical applications to be running on virtual servers within a year.
The data centre is arguably the most active and critical segment of today’s IT environment in terms of innovation, strategy and investment. In particular, it is integral to key initiatives such as the migration of legacy IT infrastructures and architectures to a more flexible, agile cloud model in its various forms - including public, private and hybrid clouds. One of the key factors driving organisations to migrate to these new, more agile IT architectures is the need to better address the complexities associated with the widespread and accelerating adoption of server virtualisation.
Complexity and the network infrastructure
To be sure, there are many challenges inherent in properly managing infrastructure components, beginning with the virtual and physical servers themselves. While virtualisation does reduce the number of physical servers, it doesn’t eliminate the time required to manage server operating systems, applications and data. There is still the need to manage all network connections while maintaining proper security profiles for users and applications.
Similarly, there’s no shortage of management chores related to storage systems, be they physical or virtual. IT leaders must stay on top of backups and replication, and ensure proper storage tiers are in place for optimum trade-off between availability, cost and performance.
All of this complexity puts stress on the network infrastructure, which ultimately must provide the performance, availability and application mobility required in a virtualised data centre. IT leaders must consider the network-related challenges inherent in virtualisation:
- Increasing VM density can dramatically increase the amount of traffic to and from any given server, putting greater demand on existing network bandwidth to maintain required performance levels.
- Server-to-server communication latency must be deterministic, and packet delivery lossless.
- VM mobility restrictions can limit maintenance and application availability options to the physical server, the blade chassis or the server rack.
- End-to-end visibility from VMs to storage typically requires multiple tools.
Moving beyond complexity
The traditional solution to network bandwidth and performance challenges has been to add more devices, ports and network tiers. The result is a network that continually grows in complexity and rigidity, becoming more difficult and costly to manage and maintain - the opposite of what IT leaders need right now.
To achieve optimal performance in a virtual environment, the network must evolve. An application and virtual machine may no longer be locked into any physical infrastructure - be it a server, a specific port or even storage - which means the network infrastructure and the tools to manage it must improve.
Virtualised data centre networks must provide visibility and control over data flows, while also becoming simpler to operate, more flexible, resilient and scalable. So, how can enterprises evolve their networks to address these complex challenges and simplify the virtual data centre? By using an ethernet fabric.
What is ethernet fabric?
Data centre networks rely on ethernet. Over the decades, ethernet has evolved as new application architectures emerged. Today, data centre networks carry traffic for a diverse set of applications including client/server, web services and unified communications - each with different traffic patterns and network service requirements. Applications are increasingly deployed within virtual machines hosted on server clusters, and ethernet is used to build shared storage pools, which places stringent demands on the network: lossless packet delivery, deterministic latency and high bandwidth. Together, these changes are the forces behind the next evolutionary step in ethernet: the ethernet fabric.
The idea of a fabric is not new; in fact the concept has been used for years in storage area networks (SANs). As with SANs, ethernet fabrics are a powerful way to dramatically improve performance and reliability while reducing complexity.
Network fabrics meet the need for performance and reliability by flattening the network. Typical data-centre networks use a three-tiered architecture: access, aggregation and core. Ethernet fabrics eliminate the need for a separate aggregation tier, thereby increasing both efficiency and performance. Ethernet fabrics also improve network resiliency because multiple network paths are available among all the devices in the fabric.
Ethernet fabric gives enterprises the opportunity to build a flat, multipath, deterministic mesh network for the data centre. The great thing about ethernet fabric is that it does not require the spanning tree protocol, thereby eliminating idle ‘standby’ links; in the simplest case of a redundant link pair, that doubles usable bandwidth.
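The bandwidth effect of removing spanning tree can be illustrated with a simple back-of-the-envelope calculation. This is a conceptual sketch, not vendor code; the link speeds are example figures:

```python
# Sketch: aggregate forwarding bandwidth over a set of parallel links
# between two switches, with and without spanning tree. STP blocks
# redundant links to prevent loops; a multipath fabric keeps every
# link forwarding.

def usable_bandwidth_gbps(links_gbps, stp_enabled):
    """Return the usable bandwidth for a set of parallel links (Gbps).

    With STP, only one of the parallel links forwards traffic; the rest
    sit idle as standby paths. In a multipath fabric, all links carry
    traffic simultaneously.
    """
    if stp_enabled:
        return max(links_gbps)   # one active link, the others blocked
    return sum(links_gbps)       # all links active (multipath)

# Two parallel 10 Gbps links between a pair of switches:
links = [10, 10]
print(usable_bandwidth_gbps(links, stp_enabled=True))   # 10 - one link idle
print(usable_bandwidth_gbps(links, stp_enabled=False))  # 20 - bandwidth doubles
```

With more than two parallel paths the gain is proportionally larger, which is why multipath fabrics scale bandwidth by adding links rather than by adding tiers.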
Distributed intelligence
One of the key functions of an ethernet fabric in a virtualised data centre is distributed intelligence: configuration and device information is known by every switch in the fabric, allowing fabric members - including physical and virtual servers as well as switches - to be seamlessly added, removed and moved within the fabric without manual reconfiguration, while maintaining all security profiles.
Distributed intelligence allows the ethernet fabric to be self-forming, so the fabric is automatically created and the switches automatically discover the common fabric configuration. In some cases scaling bandwidth in the fabric is as simple as plugging in a new switch.
A highly virtualised data centre can be configured in a ring, mesh or tree topology, with enough links to make it entirely non-blocking or to be over-subscribed at whatever level works best for the enterprise.
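The over-subscription trade-off mentioned above is easy to quantify. The sketch below is illustrative only - the port counts and speeds are hypothetical examples, not figures from any product:

```python
# Sketch: over-subscription ratio at a fabric edge switch, i.e. the
# ratio of server-facing bandwidth to fabric-facing bandwidth.
# 1:1 is fully non-blocking; higher ratios trade cost for contention.

def oversubscription_ratio(downlink_gbps, uplink_gbps):
    """Return server-facing bandwidth divided by fabric-facing bandwidth."""
    return downlink_gbps / uplink_gbps

# Hypothetical edge switch: 40 server ports at 10 Gbps,
# 4 fabric uplinks at 40 Gbps.
down = 40 * 10   # 400 Gbps towards the servers
up = 4 * 40      # 160 Gbps into the fabric
print(f"{oversubscription_ratio(down, up):.1f}:1")  # 2.5:1 over-subscribed
```

An enterprise can tune this ratio per topology - adding uplinks until the ratio reaches 1:1 makes that edge entirely non-blocking.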
Logical chassis
If distributed intelligence lives in the control plane of an ethernet fabric, then the logical chassis is found in the management plane. Since all the switches share a common control plane database, network policy is configured once and shared by every switch in the fabric. Combine that with a logical chassis and you get simple management that scales at the rate of application growth: every time you add another switch to the fabric, it plugs into the logical chassis, appearing just as a new port card would in a traditional chassis switch. The logical chassis is very scalable and very simple.
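The interplay of the shared control-plane database and the logical chassis can be modelled in a few lines. This is a hypothetical illustration of the concept, not an implementation of any vendor's fabric:

```python
# Conceptual model: a fabric whose switches share one policy database,
# so a newly added switch inherits the fabric configuration and simply
# appears as extra ports in a single logical chassis.

class Fabric:
    def __init__(self, policies):
        self.policies = dict(policies)  # shared control-plane database
        self.switches = []

    def add_switch(self, name, ports):
        # The new member discovers the common configuration on joining;
        # no per-switch reconfiguration is needed.
        self.switches.append(
            {"name": name, "ports": ports, "policies": self.policies}
        )

    def total_ports(self):
        # Management view: one logical chassis whose port count grows
        # each time a switch is plugged in.
        return sum(s["ports"] for s in self.switches)

fabric = Fabric(policies={"vm_security_profile": "default"})
fabric.add_switch("edge-1", ports=48)
fabric.add_switch("edge-2", ports=48)
print(fabric.total_ports())  # 96 ports, managed as one logical chassis
```

Adding a third switch would again need no policy work - it would inherit the same shared database and add its ports to the same logical view.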
Because there is no need for separate aggregation and access tier switches, enterprises can employ a flatter, simpler core/edge network architecture.
*Graham Schultz is the Regional Director for Brocade Australia and New Zealand, where over the past 10 years he has been instrumental in establishing and building Brocade into the market leader it is today. Schultz has over 25 years’ experience in the IT industry, in both software and hardware markets.