Driving the next generation of data centres
Accelerating network speeds and increasingly complex applications in today’s data centres are outpacing the capabilities of traditional monitoring and management tools, threatening security and stability.
Virtualisation, cloud computing, mobility and video are creating a surge in east-west traffic in the enterprise data centre. As data centres scale core networks from 10 Gb to 40 and 100 Gb, and as improved bus designs and multichannel adapters push traffic volumes higher, interface speeds could grow as much as 50-fold.
At these link speeds and data volumes, the tools used to monitor, analyse and secure the computing environment lose real-time visibility into the traffic and transaction flows.
This leaves network and application performance monitoring tools facing a new challenge: how to cope with real-time monitoring at wire speed for 10G and beyond. It is unlikely that they can keep pace with those speeds without serious investment in upgrades. If upgrades are not forthcoming, then applications will simply disappear from the monitoring tools’ dashboards.
CIOs need several capabilities to overcome the challenges presented by these prevailing trends: scalable architecture, fabric intelligence, pervasive visibility, optimised monitoring environment and purpose-built solutions.
Scalable architecture
To keep up with the transforming infrastructure, organisations need a monitoring infrastructure that supports a range of speeds and connectivity options, from 1 Gb to 40 Gb and beyond, to handle these high data volumes.
New traffic visibility technology provides this capability through a modular design of interconnecting nodes, built on the principles of port density, high-volume packet processing and scalability to fit any size of data centre infrastructure.
Fabric nodes connect into the network and collect data through test access point (TAP) modules, inline bypass modules and connections to the mirror/switched port analyser (SPAN) ports on network devices. These highly port-dense 1, 10, 40 and 100 Gb appliance and chassis-based solutions can deliver traffic to connected tools at the supported interface speeds.
To monitor virtualised environments, nodes are available as virtual machines (VMs) tunnelling relevant traffic back to the fabric and connected tools. Nodes are also available for remote offices to tunnel only relevant traffic back to centralised tools.
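To illustrate the idea in the simplest possible terms, the sketch below models a remote or virtual node that filters locally and tunnels only relevant traffic back to a centralised tool farm. The class and field names are hypothetical and do not represent any vendor's API.

```
from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    dst_port: int
    payload: bytes

class EdgeNode:
    """Filters traffic at the edge so only relevant packets cross the WAN."""
    def __init__(self, interesting_ports, tunnel_send):
        self.interesting_ports = set(interesting_ports)
        self.tunnel_send = tunnel_send  # callable that forwards packets to the central fabric

    def observe(self, pkt: Packet) -> None:
        # Forward only the traffic the central tools care about (e.g. web, DNS).
        if pkt.dst_port in self.interesting_ports:
            self.tunnel_send(pkt)

# Usage: the central fabric simply collects whatever the edge node forwards.
received = []
node = EdgeNode(interesting_ports={80, 443, 53}, tunnel_send=received.append)
node.observe(Packet("10.0.0.5", "10.0.1.9", 443, b"\x16\x03\x01"))
node.observe(Packet("10.0.0.5", "10.0.1.9", 8080, b"noise"))
print(len(received))  # 1 - only the relevant packet crossed the tunnel
```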
Fabric intelligence
Traditional methods of monitoring, which attached monitoring tools directly to network links or to a SPAN port on every switch to filter and aggregate traffic, were costly and unreliable. Tools often saw only a portion of the traffic, and even then much of it was irrelevant. As speeds increased, keeping up with line rate became a challenge.
Recently introduced flow mapping provides advanced filtering that enables users to apply map rules to line-rate traffic up to 100 Gb from a network TAP or a SPAN/mirror port, so that each tool sees only the traffic it needs to see.
In addition, to keep upgrade costs down and aligned with the budget, a visibility fabric solution can intelligently filter and aggregate traffic in front of the tools, enabling the continued use of existing 1 Gb tools.
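As a rough illustration of how map rules steer each class of traffic to the tool that needs it - so that an existing 1 Gb analyser only ever receives its slice of a 10 Gb or 40 Gb feed - the sketch below models first-match map rules in Python. The rule fields and tool names are assumptions for illustration, not a real product interface.

```
from dataclasses import dataclass
from typing import Optional

@dataclass
class MapRule:
    tool_port: str                  # where matching traffic is sent
    vlan: Optional[int] = None      # match criteria; None means wildcard
    proto: Optional[str] = None
    dst_port: Optional[int] = None

    def matches(self, pkt: dict) -> bool:
        return all(
            expected is None or pkt.get(field) == expected
            for field, expected in (
                ("vlan", self.vlan), ("proto", self.proto), ("dst_port", self.dst_port),
            )
        )

class FlowMap:
    def __init__(self, rules, default_tool=None):
        self.rules = rules
        self.default_tool = default_tool  # e.g. drop, or send to a packet recorder

    def steer(self, pkt: dict) -> Optional[str]:
        for rule in self.rules:           # first-match semantics, like an ACL
            if rule.matches(pkt):
                return rule.tool_port
        return self.default_tool

fmap = FlowMap([
    MapRule(tool_port="ids-1g", proto="tcp", dst_port=80),  # web traffic to the IDS
    MapRule(tool_port="voip-mon", vlan=200),                # voice VLAN to the VoIP probe
])
print(fmap.steer({"vlan": 200, "proto": "udp", "dst_port": 5060}))  # -> voip-mon
print(fmap.steer({"vlan": 10, "proto": "tcp", "dst_port": 22}))     # -> None (not monitored)
```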
Pervasive visibility
A consistent monitoring policy across all network traffic requires pervasive visibility across the physical switch and the virtual and cloud environments, and decapsulation of overlay and virtual network traffic.
When an application undergoes physical-to-virtual (P2V) migration, it can simply drop off the monitoring tools’ radar. The CIO’s challenge is to maintain pervasive visibility both before and after the P2V migration of an app.
Existing virtual fabric nodes act as a TAP for the vSphere Distributed Switch and the Cisco 1000v virtual switch, directing copies of real-time virtual network traffic to the tools commonly used to monitor and analyse physical data centre elements. The nodes also decapsulate MPLS and VXLAN traffic, filtering and tunnelling captured traffic to the centralised tool environment.
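The decapsulation step can be illustrated with the VXLAN header layout defined in RFC 7348: an 8-byte header carrying a 24-bit VXLAN Network Identifier (VNI), followed by the inner Ethernet frame. The sketch below is only a software illustration of that layout; a fabric node does this at line rate in hardware, and MPLS handling is analogous.

```
VXLAN_VNI_FLAG = 0x08  # "I" bit: the VNI field is valid

def vxlan_decap(udp_payload: bytes):
    """Return (vni, inner_ethernet_frame) from a VXLAN-encapsulated payload."""
    if len(udp_payload) < 8:
        raise ValueError("too short to be VXLAN")
    flags = udp_payload[0]
    if not flags & VXLAN_VNI_FLAG:
        raise ValueError("VNI flag not set")
    # The VNI occupies bytes 4-6 of the header; byte 7 is reserved.
    vni = int.from_bytes(udp_payload[4:7], "big")
    return vni, udp_payload[8:]

# Example: VNI 5000 wrapping a dummy, truncated inner frame.
header = bytes([VXLAN_VNI_FLAG, 0, 0, 0]) + (5000).to_bytes(3, "big") + b"\x00"
vni, inner = vxlan_decap(header + b"\xaa\xbb\xcc\xdd\xee\xff")
print(vni)  # 5000 - lets the fabric map overlay traffic back to its tenant or segment
```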
Optimised monitoring environment
For these appliances to perform efficiently and monitoring solutions to run optimally, tools should be centralised on a common management platform. Simplified management enables network administrators to configure visibility into the virtual switch without disrupting the server administration team’s workflows, resulting in faster turnaround times for change requests.
Declining budgets continue to drive the move to virtualised solutions - servers, networking, storage and applications. If a dozen applications, each running on a dedicated server, are virtualised and migrated to a single host, that host will need roughly 12 times the network bandwidth of a single application server.
With VMware vSphere server integration, visibility rules defined and mapped to specific VM network ports follow the VM and remain in effect even after a vMotion event. Applying maps to the data maximises each tool’s effective throughput by optimising the data load per connected tool.
Flow mapping allows every network port to receive traffic at 100% of line rate while each tool port handles only relevant traffic at its full capacity, regardless of the number of network ports or the available tool port filters.
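A simple way to see why such rules survive a vMotion event is that they are keyed to the VM’s identity rather than to a physical host or port, as in the hypothetical sketch below (the identifiers and the inventory lookup are illustrative stand-ins for the vSphere integration).

```
visibility_rules = {
    # VM identifier -> tool that should receive a copy of its traffic
    "vm-payments-01": "apm-probe",
    "vm-db-cluster-02": "db-analyser",
}

inventory = {"vm-payments-01": "esxi-host-a"}  # current placement, reported by the hypervisor

def tool_for(vm_id: str):
    # The rule is resolved by VM identity, so the answer is the same
    # whichever host the VM is currently running on.
    return visibility_rules.get(vm_id)

print(tool_for("vm-payments-01"))            # apm-probe
inventory["vm-payments-01"] = "esxi-host-b"  # vMotion: the placement changes...
print(tool_for("vm-payments-01"))            # ...but the visibility rule still applies: apm-probe
```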
Purpose-built solutions
The reliability and performance of a monitoring environment rely on a robust, purpose-built visibility solution.
This solution should be resilient, with hot-swappable modules and dual-redundant, hot-swappable power supplies and fans, for instance. As business needs evolve, it should be easy to configure and change: scalable from single- to multi-node deployments, and modular enough to mix and match standards-based solutions.
Further, flow mapping allows different user groups to decide which traffic should be forwarded, where it should be sent and how it should be handled once it arrives. Role-based access controls determine each group’s visibility of a traffic flow.
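A minimal sketch of that access model, with purely illustrative group and flow names, might look like this:

```
ROLE_GRANTS = {
    "security-ops": {"ids-mirror", "firewall-logs"},
    "app-support":  {"web-tier", "db-tier"},
    "voice-team":   {"voip"},
}

def can_view(group: str, flow: str) -> bool:
    # A group may only see the traffic flows it has been granted.
    return flow in ROLE_GRANTS.get(group, set())

print(can_view("app-support", "web-tier"))  # True
print(can_view("voice-team", "db-tier"))    # False - outside this group's grant
```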
Benefits that matter
The benefits offered by a scalable, intelligent, pervasive, optimised and purpose-built visibility solution for any modern data centre should not be underestimated. A visibility fabric that scales from just a few connections to thousands, and monitors and secures traffic from a centralised network tool farm, can reduce both capex and opex through:
- Reduced time-to-resolution for troubleshooting and security issues via accurate analysis and measurement of network traffic traversing both physical and virtual environments.
- Minimal disruption to the production network as tools are changed, upgraded and taken down.
- Better tool utilisation from optimised data streams delivered to the tools, which reduces load on the tools, extends the life of the tools and results in fewer tools and probes.
- Continued management of data streams to existing fabric-connected tools, even during a network infrastructure upgrade.
Clearly these benefits reduce the total cost of ownership for monitoring and managing next-generation data centres and also have a deep impact on business. Failure to analyse, monitor and secure the network will result in downtime that can quickly cost millions of dollars in lost revenue.