Top 8 Reasons for Poor Network Performance

Fast, secure and reliable connections continue to underpin the success of the modern business.

IT downtime costs North American businesses nearly $700 billion a year [1]. Because today’s businesses depend on communication and the sharing of data, any interruption in the network compounds the cost of IT downtime by disrupting user productivity and customer experience. Building a fast, secure, and reliable network is a core tenet of ensuring the success of your business. Below are some of the key areas to focus on with your network performance monitoring solution to keep your business thriving.

Utilizing Hardware or Software that is Outdated and Due for a Refresh

The pace of change in business and technology continues to accelerate. New business and strategic IT initiatives will continue to mandate new network topologies and deployment scenarios. The titanic shifts driven by cloud, Wi-Fi, security, SD-WAN, and many others can represent significant cost savings or opportunities for the modern business, but the network designs put in place 3-5 years ago may not be suitable for the job today.

Plan for the Future, Protect Your Investment, Avoid Network Monitoring Silos

Not everything is doom and gloom. Vendors continue to offer innovative ways to maintain, upgrade, and adapt their hardware and software as the need arises. When deciding which hardware and software to buy, look for best-in-class solutions and providers, reference industry analysts like Gartner to see how the landscape is changing, and seek out solutions that have historically adapted well over time. Siloed tools are useful for monitoring one segment, domain, or topology of the network, but seeing across the entire network, regardless of shifting topology, can simplify network monitoring workflows and significantly improve team productivity.

Malicious Traffic

Attacks on your network in the form of phishing, Distributed Denial of Service (DDoS), Advanced Persistent Threats (APT), and internal bad actors can all change network performance. Network security tools focus on identifying, blocking, and mitigating these events to the best of their abilities.

Situational Awareness Provides Context of Malicious Traffic

When it comes to performing forensic analysis of a malicious attack, security and network teams must collaborate and collate their data to understand not only the nature of the attack but also its context: what was occurring at the time of the attack, and which users, applications, and network segments were active. Accurate historical playback also becomes critical during this process. This situational awareness can help administrators understand the source and impact of the attack.
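
To make the idea concrete, here is a minimal sketch of that kind of time-window correlation, assuming flow records have already been collected. The record fields and values are purely illustrative, not any particular vendor's schema.

```python
from datetime import datetime, timedelta

# Hypothetical flow records; field names and values are illustrative only.
flows = [
    {"start": datetime(2024, 5, 1, 14, 2), "src": "10.0.0.8",
     "app": "ssh", "segment": "finance", "bytes": 48_000},
    {"start": datetime(2024, 5, 1, 14, 5), "src": "10.0.0.9",
     "app": "https", "segment": "guest-wifi", "bytes": 2_400_000},
]

def flows_around(flows, attack_time, window_minutes=15):
    """Return flows that started within +/- window_minutes of the attack."""
    lo = attack_time - timedelta(minutes=window_minutes)
    hi = attack_time + timedelta(minutes=window_minutes)
    return [f for f in flows if lo <= f["start"] <= hi]

# Replay the window around a suspected attack to see who was active.
attack = datetime(2024, 5, 1, 14, 4)
for f in flows_around(flows, attack):
    print(f["start"], f["src"], f["app"], f["segment"], f["bytes"])
```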

Shadow IT, BYOD, and Everything But the Kitchen Sink

With the ease of deploying SaaS-based applications and the desire for organizations to be nimble, some IT decisions are being made at the line of business (LoB) level rather than at the IT level. The refrain of “you can’t manage what you can’t measure” is being replaced with “what the heck is suddenly generating all that traffic?”

Communication, Visibility and Monitoring Shines Light on Shadow IT

No one wants to return to the days when IT acted as the gatekeeper for every IT decision. However, the organization does need to recognize that the network is not an infinite resource, and act accordingly. Communication is usually the first line of defense to ensure that IT is included in discussions about new deployments. The second line of defense is a network monitoring solution that can peer into every nook and cranny of the network and quickly present the status of the entire network. This holistic, or end-to-end, visibility should alert network administrators to any changes in network traffic, and to the source of those changes.
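
As a rough illustration of how that alerting can work, the sketch below compares current per-application traffic against a stored baseline and flags anything new or sharply grown. The application names, byte counts, and threshold are hypothetical.

```python
# Bytes observed per application in the last hour (hypothetical values).
current = {"salesforce": 1.2e9, "office365": 3.4e9, "unknown-saas": 9.8e9}

# Baseline built from prior observation; apps not listed here are "new".
baseline = {"salesforce": 1.0e9, "office365": 3.0e9}

GROWTH_THRESHOLD = 2.0  # alert if traffic more than doubles

for app, observed in current.items():
    expected = baseline.get(app)
    if expected is None:
        print(f"NEW application on the network: {app} ({observed:.2e} bytes)")
    elif observed / expected > GROWTH_THRESHOLD:
        print(f"Unusual growth for {app}: {observed / expected:.1f}x baseline")
```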

Network Configuration Error

Here at LiveAction, network engineers, operators, admins, and architects are our heroes. But our heroes are also human, and to err is human. A mistyped keystroke, a forgotten device, or a mis-scheduled backup and disaster recovery process can all create unforeseen and hard-to-track network performance issues.

Use the Right Network Performance Management Solution to Avoid Human Error

Misconfiguration stands out as one of the major factors affecting the throughput of a network path, and problems that stem from a single misconfigured device can affect an entire network. A poorly configured network can result from data usage restrictions on a network port, speed and duplex mismatches, or errors that force retransmissions along a network path. These issues greatly slow application and network performance and limit usability.
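
One common way monitoring tools surface these problems is by watching how interface error counters grow between polls. Here is a minimal sketch of that idea, assuming the counters have already been collected; the counter names are loosely modeled on SNMP's ifInErrors and ifOutDiscards and are illustrative only.

```python
# Two successive polls of per-interface counters (illustrative names,
# loosely modeled on SNMP ifInErrors / ifOutDiscards).
poll_1 = {"Gi0/1": {"in_errors": 10, "out_discards": 2},
          "Gi0/2": {"in_errors": 0, "out_discards": 0}}
poll_2 = {"Gi0/1": {"in_errors": 910, "out_discards": 130},
          "Gi0/2": {"in_errors": 0, "out_discards": 1}}

ERROR_DELTA_THRESHOLD = 100  # flag fast-growing counters between polls

for ifname, now in poll_2.items():
    before = poll_1.get(ifname, {})
    for counter, value in now.items():
        delta = value - before.get(counter, 0)
        if delta > ERROR_DELTA_THRESHOLD:
            # Rapid error growth often points at duplex mismatches,
            # bad cabling, or a misconfigured port.
            print(f"{ifname}: {counter} grew by {delta} since last poll")
```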

Incorrect, Dropped, or Misconfigured Traffic-shaping Policies

When business-critical or latency-sensitive application traffic, such as video conferencing, VoIP, ERP, and transaction data, is not prioritized over less latency-sensitive traffic, like network backups, email, and recreational video, the user’s productivity and perception of network performance will suffer, and ultimately, the business will suffer.

Improve User Experience with Quality of Service (QoS)

Traffic shaping and prioritization policies instruct the network to give bandwidth and routing preference to the applications and traffic that matter to your business.

Configuring, deploying, and maintaining shaping policies that align and change with the business has historically been an arduous, time-consuming, and error-prone process, especially when done via CLI.

Network performance monitoring enables constant monitoring and adjustment to ensure the policies being set meet the needs of the business. For the foreseeable future, traffic shaping will continue to play a significant role in how well your network performs.
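
As a sketch of what policy validation in software can look like, the example below compares DSCP markings observed in flow data against an intended policy. The DSCP values are the standard ones (EF = 46, AF41 = 34, and so on), but the application names and the policy itself are hypothetical.

```python
# Intended policy: application -> DSCP marking (standard DSCP values;
# the application names and the policy itself are hypothetical).
POLICY = {"voip": 46,          # EF  - expedited forwarding
          "video-conf": 34,    # AF41
          "erp": 26,           # AF31
          "backup": 8,         # CS1 - scavenger/low priority
          "email": 0}          # best effort

def audit_marking(app, observed_dscp):
    """Compare an observed DSCP value against the intended policy."""
    intended = POLICY.get(app, 0)
    if observed_dscp != intended:
        return f"{app}: marked {observed_dscp}, policy says {intended}"
    return None

# Observed markings pulled from flow data (hypothetical values).
for app, dscp in [("voip", 0), ("backup", 46), ("email", 0)]:
    issue = audit_marking(app, dscp)
    if issue:
        print(issue)
```

Expressing the intended policy as data, rather than hand-typed CLI, makes it possible to audit every device against one source of truth and catch drift before users notice.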

Poor Capacity Planning Results in Insufficient Bandwidth

The purpose of bandwidth and capacity planning is to ensure that the network has sufficient capacity to carry the traffic the organization creates and consumes. The capacity planning process often resembles a mystic art: network administrators have to tread a fine line between managing costs and budgets, accounting for the lead time required to provision new bandwidth, and anticipating the demands of new IT initiatives, all while ensuring that the organization doesn’t grind to a halt.

Achieve Better Capacity Planning With Better Data

Getting a handle on this process usually starts with reports: lots and lots of reports from across the network. Bandwidth issues can crop up anywhere within the network, and network administrators need to know where the dangers lie in order to plan accordingly. Additionally, these reports must be run at regular intervals in order to track historical trends. Being able to view a historical playback of bandwidth usage can also help in understanding unusual spikes or dips in traffic.
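
A simple version of that reporting might compute average and peak utilization per link from periodic throughput samples and flag links approaching saturation. The sketch below uses hypothetical link names, capacities, samples, and threshold.

```python
# Five-minute average throughput samples per link, in Mbps (hypothetical).
samples = {"wan-primary": [420, 510, 615, 700, 688, 730],
           "wan-backup": [40, 35, 60, 45, 50, 48]}
capacity_mbps = {"wan-primary": 1000, "wan-backup": 1000}

UTILIZATION_WARNING = 0.70  # plan upgrades well before links saturate

for link, series in samples.items():
    peak = max(series) / capacity_mbps[link]
    avg = sum(series) / len(series) / capacity_mbps[link]
    print(f"{link}: avg {avg:.0%}, peak {peak:.0%}")
    if peak > UTILIZATION_WARNING:
        print(f"  -> {link} peaks above {UTILIZATION_WARNING:.0%}; "
              "start the provisioning conversation now")
```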

Finally, communication throughout the organization helps with understanding where the business and its IT are headed. This makes IT a key stakeholder in the future and strategic success of the organization.

Bandwidth and Capacity Planning Didn’t Consider Peak Traffic Loads

Capacity planning is intended to forecast and account for normal usage of the network. But what is ‘normal’? The best planning in the world cannot predict the latest craze (Pokemon GO) or the hot trending topic or video (cat videos). These events can create unexpected spikes in network traffic that no one could have anticipated. Sometimes, however, the spike is predictable, and the failure is simply that no one anticipated it.

Plan for Peak Traffic With Network Performance Monitoring

Don’t fall into the trap of averaging traffic usage across all days of the week or month, or even across all 24 hours in a day. Approach your reporting and projections with an understanding of the nuances of the business. The rise and fall of traffic usage over different time periods is unique to every business and requires 100% fidelity in your data, versus data that has been aggregated over 5- to 30-minute intervals. Statistical forecasting concepts such as 90% and 95% confidence intervals can also help you focus on what is normal instead of folding in outlying, one-time traffic spikes.
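
As one way to put that into practice, the sketch below buckets hypothetical throughput samples by hour of day and computes a 95th percentile per bucket, a percentile-based cousin of the confidence-interval approach described above. A flat 24-hour average would blur the busy 9:00 hour and the quiet 2:00 hour together.

```python
from collections import defaultdict
from statistics import quantiles

# (hour_of_day, Mbps) samples collected at full fidelity (hypothetical).
samples = [(9, 420), (9, 455), (9, 980), (14, 610), (14, 640),
           (14, 655), (2, 40), (2, 55), (2, 38), (9, 460)]

by_hour = defaultdict(list)
for hour, mbps in samples:
    by_hour[hour].append(mbps)

for hour in sorted(by_hour):
    series = sorted(by_hour[hour])
    # The 95th percentile trims one-off spikes that a plain average
    # would either hide or exaggerate.
    p95 = quantiles(series, n=100)[94] if len(series) >= 2 else series[0]
    print(f"{hour:02d}:00  n={len(series)}  p95={p95:.0f} Mbps")
```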

Networks Are Like Ogres and Onions: They Have Layers

The OSI seven-layer model of the network provides a mental framework for what is going on in a particular network system. Network performance issues can occur at different layers. By narrowing down the location of an issue, IT teams can determine who is responsible for fixing it. In some cases, a delay in an application can be traced to the application itself, the network, or even a connected data store.

Collect and Consolidate As Much Data As Possible in One Representative Tool

Getting past the finger-pointing between IT teams is half the battle. If teams can reference a single source of truth about what is occurring on the network, they can get back to solving the actual problem. Correlating and unifying data from different sources within the network can identify performance issues such as application delay, server delay, jitter, latency, dropped packets, speed and duplex mismatches, spanning tree errors, and much more.
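
As a small example of the kind of metric that correlation produces, here is a sketch that estimates inter-arrival jitter from packet timestamps using the RFC 3550 smoothing formula, assuming the sender transmits at a constant interval; the capture values and send interval are hypothetical.

```python
# Packet arrival times in seconds for a voice stream sent at a
# constant 20 ms interval (hypothetical capture).
arrivals = [0.000, 0.021, 0.040, 0.068, 0.081, 0.100]
SEND_INTERVAL = 0.020

# RFC 3550-style smoothed jitter: J = J + (|D| - J) / 16, where D is
# each packet's deviation from the expected inter-arrival time.
jitter = 0.0
for prev, cur in zip(arrivals, arrivals[1:]):
    deviation = abs((cur - prev) - SEND_INTERVAL)
    jitter += (deviation - jitter) / 16
print(f"estimated jitter: {jitter * 1000:.2f} ms")
```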

References

  1. IHS Markit, “Businesses Losing $700 Billion a Year to IT Downtime, Says IHS”: https://technology.ihs.com/572369/businesses-losing-700-billion-a-year-to-it-downtime-says-ihs