Managing DMVPN QoS with LiveNX
DMVPN QoS Overview
Cisco’s Dynamic Multipoint Virtual Private Network (DMVPN) is a VPN deployment strategy for building a secure and scalable connectivity platform for large wide area networks (WANs). DMVPN technologies are most often used for enabling any-to-any, full-meshed communication for users over the public internet. For example, if two branch locations require a direct communication path for a VoIP call between the two offices, but don’t require a permanent VPN connection between sites, they can benefit from DMVPN. It is a zero-touch deployment of dynamic IPsec VPNs and improves network performance by reducing latency and jitter while optimizing bandwidth utilization.
Many organizations are migrating part or all of their WAN environments to Internet-based VPN strategies to save costs. With the increase in this deployment model, DMVPN has become an attractive solution because of these benefits:
- Lowers the cost for implementing large redundant WAN network environments
- Simplifies branch-to-branch connectivity for business applications like Voice and VoIP
- Reduces deployment complexity in VPNs with zero-touch configuration
Quality-of-service (QoS) is key to successfully deploying VoIP, video, and virtual desktop infrastructure (VDI) applications. Traditional WAN technologies like MPLS provide any-to-any dedicated bandwidth with the additional benefit of QoS-aware service providers that protect business-critical applications. Internet VPN connectivity solutions such as DMVPN do not have this QoS protection because the Internet is not QoS aware. Despite this, the cost savings from using Internet-based VPNs are still attracting many to transition to these types of networks. LiveAction can provide QoS protection in such VPN environments. The following diagram depicts a typical DMVPN configuration.
This document will outline the design framework for successful QoS deployment and validation in DMVPN environments using LiveAction software.
Network QoS Configuration
How Does QoS Work?
VoIP, video, and other critical data applications rely on the network infrastructure to honor and queue their traffic for call quality and application performance protection. QoS is a requirement in WAN environments, regardless of the underlying transport mechanism (DMVPN vs. traditional WANs), if business-critical applications are to receive their requested level of service.
Quality of Service (QoS) manages bandwidth usage across different applications and technologies. Its most common use is to protect real-time data applications like video or VoIP traffic. All network infrastructure devices have limits on the amount of traffic flowing through them. In packet and frame switched networks, data is delayed or dropped during times of congestion. Quality of Service (QoS) management is the collection of mechanisms that control how traffic is prioritized and handled during these times.
The two most common QoS tools that handle traffic are classification and queuing. Classification identifies and marks traffic to ensure network devices prioritize data as it crosses the network. Queues are buffers in devices that hold data to be processed. Queues provide bandwidth reservation and prioritization of traffic as it enters or leaves a network device. If the queues are not emptied (due to higher priority traffic going first), they can overflow and drop traffic.
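To make this concrete, here is a minimal Cisco IOS classification and marking sketch; the ACL, class, policy, and interface names are hypothetical examples, not taken from this document:

! Hypothetical example: classify VoIP RTP traffic with an ACL and mark it DSCP EF
ip access-list extended VOIP_RTP
 permit udp any any range 16384 32767
class-map match-any VOIP
 match access-group name VOIP_RTP
policy-map LAN_INGRESS_MARKING
 class VOIP
  set dscp ef
interface GigabitEthernet0/1
 description LAN-FACING INTERFACE
 service-policy input LAN_INGRESS_MARKING

Once traffic is marked, downstream queuing policies can match on the DSCP value instead of re-inspecting every packet.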
Policing and shaping are also commonly used QoS technologies that limit the bandwidth used by predefined traffic types. Policing enforces bandwidth to a specified limit. If applications try to use more bandwidth than allocated, their traffic will be dropped. Shaping defines a software-set limit on the transmission rate of a data class. If more traffic needs to be sent than the shaped limit allows, the excess will be buffered. This buffer can then utilize queuing to prioritize data as it leaves the buffer.
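As a hedged sketch of the difference, the policing example below hard-limits a hypothetical BULK_DATA class, while the shaping example buffers the excess and drains it through the QUEUING child policy used later in this document:

! Policing: traffic above 1 Mbps is dropped immediately
class-map match-any BULK_DATA
 match dscp af11
policy-map POLICING_EXAMPLE
 class BULK_DATA
  police 1000000 conform-action transmit exceed-action drop
! Shaping: bursts above 2 Mbps are buffered and sent at the shaped rate
policy-map SHAPING_EXAMPLE
 class class-default
  shape average 2000000
  service-policy QUEUING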
Weighted Random Early Detection (WRED) provides a congestion avoidance mechanism that drops lower-priority TCP data to protect higher-priority data from being dropped.
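WRED is enabled per class inside a queuing policy. A minimal sketch, assuming a hypothetical CRITICAL_DATA class marked with AF2x values:

class-map match-any CRITICAL_DATA
 match dscp af21 af22
policy-map QUEUING_WRED
 class CRITICAL_DATA
  bandwidth percent 10
  ! Drop higher drop-precedence markings (e.g., AF22) first as the queue fills
  random-detect dscp-based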
Link-specific fragmentation and compression tools are used on lower bandwidth WANs to ensure real-time applications do not suffer from high jitter and delay.
Table 1: Packet flow through a typical QoS policy
DMVPN QoS Design
Three main components of a DMVPN QoS design must be considered to successfully protect VoIP, video, and high-priority data. These are:
- Protecting priority data within DMVPN tunnels
- Protecting tunnels from casual Internet traffic
- Understanding end-to-end logical throughput
Protecting Priority Data Within DMVPN Tunnels
When designing QoS policies to protect priority data within a DMVPN tunnel, the concepts outlined in the diagrams below should be considered:
- DMVPN provides a full-mesh, any-to-any communication path, so for this part of the design the underlying Internet transport can be ignored.
- Viewed this way, the design is similar to a high-bandwidth private WAN in which the service provider does not provide QoS on its backbone. Considered in this manner, the DMVPN QoS design for protecting priority data in the tunnel becomes much simpler to understand.
Here is an example of a typical branch office QoS configuration when no service provider QoS is available. This would also be used for a DMVPN remote branch/spoke. It is characterized by a hierarchical QoS policy that contains a parent shaping policy and a child queuing policy.
Traditional Branch Office QoS Configuration When Service Provider Does Not Have QoS
policy-map QUEUING
 class VoIP
  bandwidth percent 20
 class VIDEO
  bandwidth percent 30
 class MGMT_DATA
  bandwidth percent 5
 class CALL_SIGNALING
  bandwidth percent 5
 class CRITICAL_DATA
  bandwidth percent 10
 class class-default
  fair-queue
policy-map SHAPING_2Mb
 class class-default
  shape average 2000000 20000 0
  service-policy QUEUING
interface GigabitEthernet0/0
 description WAN_INTERFACE
 service-policy output SHAPING_2Mb
This is an example of a typical data center QoS configuration when no service provider QoS is available; a similar configuration is used in a traditional DMVPN data center design. It is characterized by a multiclass hierarchical QoS policy that contains a parent shaping class for each remote site, with each site's parent class containing a child queuing policy. This ensures the data center router never sends more traffic than any remote site can handle, and if congestion does occur, priority applications (VoIP, video, etc.) are scheduled first.
Traditional Data Center QoS Configuration When Service Provider Does Not Have QoS
policy-map QUEUING
 class VoIP
  bandwidth percent 20
 class VIDEO
  bandwidth percent 30
 class MGMT_DATA
  bandwidth percent 5
 class CALL_SIGNALING
  bandwidth percent 5
 class CRITICAL_DATA
  bandwidth percent 10
 class class-default
  fair-queue
policy-map DC_SHAPING
 class REMOTE_SITE_A
  shape average 2000000 20000 0
  service-policy QUEUING
 class REMOTE_SITE_B
  shape average 2000000 20000 0
  service-policy QUEUING
interface GigabitEthernet0/0
 description WAN_INTERFACE
 service-policy output DC_SHAPING
Protecting Tunnels from Casual Internet Traffic
Once priority data is protected inside a DMVPN tunnel, the tunnels themselves need to be protected from casual Internet traffic. We've covered how an engineer can control egress traffic for both Internet and tunnel destinations via QoS. This section will focus on controlling ingress Internet traffic.
There are three design options for controlling ingress internet traffic to protect DMVPN tunnels. The options can be used alone or combined.
- Service Provider Last Mile QoS
- Ingress Policing
- Remote Ingress Shaping (RIS)
Let’s look closer at these design options:
Service Provider Last Mile QoS
Some Internet service providers now provide QoS on the last-mile connection to the customer edge (CE) device, which is usually the DMVPN terminating router. This service provider egress queuing policy can be used to protect tunnel traffic from casual Internet traffic. This service is not yet widely available and will most likely incur an additional cost.
Ingress Policing
An ingress policing QoS policy may be applied to the Internet interface of the DMVPN router. This protects DMVPN tunnel traffic by limiting the volume of casual ingress Internet traffic. This configuration is not very flexible, as it does not allow casual Internet traffic to use any unallocated bandwidth when the tunnel is not fully utilized.
Remote Ingress Shaping (RIS)
Remote Ingress Shaping (RIS) is an egress QoS policy applied to the LAN interface of a DMVPN router. This policy's configuration is very similar to the hierarchical configuration used at a remote branch office's WAN interface. The key to the RIS policy configuration is that the parent class's shaper is set to only 95% of the target bandwidth rate. By creating this artificial congestion point, ingress traffic (VoIP, video, critical data, and casual Internet) can be prioritized as it is delivered to the LAN. Since most casual Internet traffic is TCP, and TCP adapts to the underlying network's capacity, the RIS policy effectively controls casual TCP-based Internet traffic while protecting priority tunnel traffic.
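A minimal RIS sketch for a 2Mb CIR circuit, assuming the QUEUING child policy shown earlier in this document and a hypothetical LAN interface numbering:

! Shape LAN egress to 95% of the 2Mb CIR to create the artificial congestion point
policy-map RIS_Shaping
 class class-default
  shape average 1900000
  service-policy QUEUING
interface GigabitEthernet0/1
 description LAN
 service-policy output RIS_Shaping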
Understanding End-to-End Logical Throughput
Another key DMVPN QoS concept that must be understood is that of determining the end-to-end bi-directional logical bandwidth of the Internet connectivity between the data center and each DMVPN remote site. Consider the following diagram:
The data center device physically has a 1Gb connection to the service provider, but the provider has implemented a 300Mb CIR. This means the service provider will drop any data sent or received over 300Mb on the data center circuit. The remote site has a 100Mb connection, but the CIR is 40Mb. This means the service provider will drop any data sent or received over 40Mb. When considered end-to-end, the DMVPN QoS policies for this example need to be configured for 40Mb, the lowest end-to-end throughput. Below is a second example:
The data center device physically has a 1Gb connection to the service provider, but the provider implemented a 300Mb CIR. This means the service provider will drop any data sent or received over 300Mb on the data center circuit.
The remote site is connected to the service provider using a 100Mb connection but is using an asymmetric service in which the download rate is 20Mb, but the upload rate is 3Mb. Any data sent/received over these rates will be dropped. This DMVPN QoS policy must be configured for 20Mb from the data center to the remote site and 3Mb from the remote site to the data center.
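In this asymmetric example, each direction is shaped to its own logical limit. A hedged sketch (policy names are hypothetical; QUEUING is the child queuing policy used throughout this document):

! Data center router: shape traffic toward this remote site to the 20Mb download CIR
policy-map Shaping_20Mb
 class class-default
  shape average 20000000
  service-policy QUEUING
! Remote site router: shape egress Internet traffic to the 3Mb upload CIR
policy-map Internet_Shaping_3Mb
 class class-default
  shape average 3000000
  service-policy QUEUING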
DMVPN QoS Configuration
QoS configurations for DMVPN networks follow many of the same methodologies as traditional WAN technologies, but there are exceptions that should be understood.
Exceptions to DMVPN QoS Configurations

- QoS policies cannot be applied directly to multipoint GRE tunnel interfaces:

!
interface Tunnel0
 description MULTI-POINT GRE TUNNEL
 tunnel mode gre multipoint
!

- QoS policies can be applied to point-to-point GRE tunnel interfaces, but these are seldom used in DMVPN configurations:

!
interface Tunnel0
 description POINT-TO-POINT GRE TUNNEL
 tunnel mode gre ip
 service-policy output My-QoS-Policy
!

- QoS policies can be applied to the physical interface that sources a multipoint GRE tunnel interface. This is the most common configuration:

!
interface Tunnel0
 description MULTI-POINT GRE TUNNEL
 tunnel mode gre multipoint
 tunnel source GigabitEthernet0/0
!
interface GigabitEthernet0/0
 description INTERNET CONNECTION
 service-policy output My-QoS-Policy
!
QoS Pre-Classify
QoS Pre-Classify is a configuration option that is applied to tunnel interfaces. It gives an engineer more classification options for the matching of traffic to the WAN egress QoS policy. The following table highlights this configuration in more detail:
When QoS pre-classify is not used on a DMVPN tunnel interface, a packet placed inside the tunnel and destined for the Internet interface is encapsulated inside GRE and then IPsec. The egress QoS policy on the Internet interface only sees the IPsec packet and has no visibility into the original IP header information, with one exception: the original DSCP value is copied to the outer header during encapsulation. This means the egress QoS policy can use the priority marking of the original packet for traffic classification.
When the QoS pre-classify command is applied to a tunnel interface, a shadow copy of the original packet is created. This copy is available to the egress QoS policy for reference. When used with DMVPN, this allows the QoS policy on the Internet interface to use the DSCP markings as well as the original source/destination IP addresses and Layer 4 TCP/UDP port numbers for classification.
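Enabling this behavior is a single command on the tunnel interface, as in the following sketch (the tunnel and interface numbering are hypothetical):

interface Tunnel0
 description MULTI-POINT GRE TUNNEL
 tunnel mode gre multipoint
 tunnel source GigabitEthernet0/0
 ! Keep a shadow copy of the inner packet for egress classification
 qos pre-classify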
Is QoS pre-classify a requirement?
In many organizations, QoS classification is performed at the LAN level. As a packet enters the WAN router, its DSCP will already have been marked, so there is no need for QoS pre-classify. Likewise, if a LAN ingress QoS policy is used for classification, QoS pre-classify is not a requirement, as any updated DSCP values are kept during the encapsulation process.
Per-Tunnel QoS
The traditional QoS configuration of a DMVPN data center router can become complex. Since the data center uses a multi-class hierarchical policy, with each remote site getting its own class and child queuing policy, a DMVPN network of hundreds of remote sites can grow the QoS configuration to thousands of lines. To eliminate this potential management nightmare, Cisco has implemented a solution named Per-Tunnel QoS. This allows engineers to configure and manage a simple, consolidated QoS configuration while gaining the benefit of dynamically created QoS policies for each DMVPN tunnel.
There are two steps for creating a Per-Tunnel QoS configuration.
Step 1 – Configure all remote branch offices with an NHRP group on their tunnel interfaces: ip nhrp group
Step 2 – Configure the data center router's tunnel interface with one or more NHRP group-to-QoS-policy mappings: ip nhrp map group ... service-policy output ...
Note: NHRP group names are typically selected by using some form of bandwidth or QoS characteristic common with multiple remote devices. The fewer group names that are used in a network, the smaller the QoS configuration on the data center device.
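A minimal sketch of the two steps, with hypothetical group and policy names:

! Step 1: spoke (remote branch) tunnel interface
interface Tunnel0
 ip nhrp group BRANCH_2MB
! Step 2: hub (data center) tunnel interface
interface Tunnel0
 ip nhrp map group BRANCH_2MB service-policy output Internet_Shaping_2Mb

When a spoke registers with the hub, NHRP carries the spoke's group name, and the hub automatically instantiates the mapped hierarchical policy on that spoke's tunnel.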
LiveAction Overview
LiveAction is an application-aware network performance management tool that will graphically display how networks and applications are performing using SNMP and the latest advanced NetFlow capabilities embedded in Cisco devices.
LiveAction provides the ability to view and control application performance through its graphical QoS management interface. QoS can be configured to manage and control DMVPN and Internet traffic using LiveAction. We will describe our steps to confirm the application performance of VoIP and video in DMVPN networks using the Medianet technology in Cisco devices with LiveAction solutions.
The image below is a view of the LiveNX console. It shows a network diagram consisting of three network devices. The three larger green circles represent routers and switches managed by LiveAction. The smaller green circles inside the devices represent their interfaces. These devices are interconnected by both the Internet and a DMVPN.
Since LiveNX is also a NetFlow collector, we can visualize the traffic that is flowing over the network. In the diagram below, the multi-colored arrows visualize the traffic traversing the network by DSCP value. In this example, the color legend shows that red arrows represent EF traffic, green arrows represent AF21, light blue is Best-Effort traffic, etc. This is what you could expect to see if QoS matching and marking policies are configured for VoIP, video, or other high-priority traffic. LiveNX gives you visual confirmation that a traffic type's DSCP value is honored from end to end.
Double-clicking on any of the larger circles (routers/switches) in the LiveAction network diagram will show the real-time NetFlow data of traffic flowing through the device. In the example below, multiple DSCP values are validated. This again confirms that VoIP and video (RTP) DSCP values are configured and being honored correctly.
LiveAction DMVPN QoS Configuration
LiveAction provides engineers a graphical interface for simplifying the creation, modification, and deployment of QoS policies. Let's see how QoS can be configured, deployed, and validated for DMVPN using LiveNX.
Remote Site DMVPN QoS Configuration
To configure the hierarchical shaping policy on the Internet interface for the prioritization of traffic inside the tunnel, perform the following:
- Select the QoS tab. Then right-click on the REMOTE_SITE_A device object in the device list to the left of the LiveAction network diagram. In the pop-up menu, select QoS > Manage QoS.
- The Manage QoS dialog window appears. Click the + sign icon on the top left of the screen to add a policy.
- Name the policy and click OK. In this example, we will name the policy Internet_Shaping_2Mb.
- Expand the new policy by clicking on it and click its class-default to highlight it.
- Click the Shaping tab in the middle of the window and add a shaper. In this example, we will create a 2Mb shaper with the following parameters:
- Rate: 2000 Kbps
- Committed burst: 20,000 bits
- Excess burst: 0 bits
- Click the Add Policy icon to the top left of the screen again, give the new policy a name, and click OK.
- In this example, the name of the second policy is QUEUING. This will become a child policy for the Internet_Shaping_2Mb policy.
- Right-click on the QUEUING policy and select Add Class to Policy.
- Select Create new class and enter a name. In this example, the first class created will be named VoIP.
- Click to highlight the VoIP class. Select the Classify tab and click the Edit button.
- This brings up the Classes tab. Select the Match type dropdown and select the appropriate match criteria for the VoIP class.
- In this example, the DSCP value of 46 (EF) is selected.
- Once the appropriate option is selected, click Add Match Statement.
- The match type will appear in the list at the top right of the window.
- Click the Policies tab and notice how the match criteria are now visible in the Classify tab for the class.
- Click the Queuing tab and select the queuing type of Priority.
- With queuing type Priority, apply the appropriate bandwidth. In this example, 20% is given to this queue.
- Repeat these same steps to create any additional queues required for the protection of VoIP, video, and high-priority data. In this example, the full QUEUING policy uses the following data:
Queue Name | Match Type | Queuing
VOIP | DSCP = EF | Priority = 20%
VIDEO | DSCP = AF41 | Priority = 30%, Burst = 128,000
MGMT_DATA | ACL = MGMT_ACL | Class-based = 5%
CALL_SIGNALING | DSCP = CS3 | Class-based = 5%
CITRIX | DSCP = AF21 | Class-based = 5%
Class-default | | Fair-queue

- When finished creating all classes of the QUEUING policy, click on the policy named QUEUING. The summary section of the policy should look similar to the following.
- Click, drag, and drop the QUEUING policy onto the Internet_Shaping_2Mb policy's class-default. The QUEUING policy will act as a child policy for the Internet_Shaping_2Mb policy.
- Click the Interfaces tab
- Right-click on the Output of the applicable Internet interface and select Apply Policy to Interface. Select the Internet_Shaping_2Mb policy and click OK.
- The Internet_Shaping_2Mb policy will be applied to the Internet interface
- Click Save to Device
Remote Site DMVPN QoS Validation
Validate the hierarchical shaping policy on the Internet interface:
- Ensure the QoS tab is highlighted and click on the Home icon to the top left of the window
- Double-click on the remote site's Internet interface where the Internet_Shaping_2Mb policy was applied
- This will show the real-time statistics of this interface.
- Ensure that the drop-down menus at the top of this page show Application/Class and Output. This will ensure the graph is focusing on the interface's output statistics. In the example below, notice the bottom graph. This is a graphical view of the CBQoS MIB, the real-time QoS stats of the Internet_Shaping_2Mb policy. Note that the bandwidth graph is showing data in the VoIP and VIDEO queues.
- To view historical statistics for this interface, select the 15m, 1 hr, 1d, 1w icons at the top.
- The time range of the historical statistics can be further customized by utilizing the time range options on the report.
Data Center DMVPN Per-Tunnel QoS Configuration
Configuring Per-Tunnel QoS at the data center requires all remote DMVPN devices to use the ip nhrp group command on their tunnel interfaces. The following example assumes that each remote site has this prerequisite command in place.
Each NHRP group will get its own unique hierarchical QoS policy on the data center router. The steps to configure these hierarchical QoS policies are identical to those of the remote site DMVPN QoS configuration outlined in this document. This section will only focus on concepts unique to Per-Tunnel QoS. To implement Per-Tunnel QoS, perform the following:
- Select the QoS tab. Then right-click on the data center object in the device list. In the pop-up menu, select QoS > Manage QoS.
- The Manage QoS dialog window appears. In the example below, a QUEUING policy and two hierarchical policies are already configured on the data center DMVPN device. These were created using the steps outlined in the remote site DMVPN QoS configuration section of this document.
- Select the Interfaces tab. Notice how the data center's DMVPN tunnel interface shows NHRP Group information. This is because LiveAction discovered the remote sites' group memberships on the data center router.
- Right-click on one of the NHRP Groups and select Apply Policy to Interface.
- Select the appropriate hierarchical QoS policy for the NHRP Group.
- Repeat this process for all NHRP Groups.
- Click Save to Device
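The dynamically created per-tunnel policies can also be verified from the data center router's CLI. A short sketch using standard IOS verification commands (the tunnel number is hypothetical):

! Show the NHRP group-to-policy mappings configured on the hub
show ip nhrp group-map
! Show the dynamically instantiated QoS policy for each spoke tunnel
show policy-map multipoint Tunnel0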
Data Center DMVPN QoS Validation
LiveAction can use its robust NetFlow reporting to provide QoS statistics. To monitor Per-Tunnel QoS statistics, use LiveAction's NetFlow reports. Perform the following tasks:
- Click the Home icon to view the network diagram
- Click the QoS tab
- Double-click on the data center’s DMVPN tunnel interface
- This will show the real-time interface statistics of the tunnel interface. Notice how there are now QoS statistics for this Per-Tunnel configuration
- Right-click in the empty QoS bandwidth graph and select Show Flows
- This will run the corresponding NetFlow Applications > Application report for the data center’s DMVPN tunnel interface
- From the report list, select the Address > Destination Site Traffic report and click the Execute Report button
- This report will show the data center's tunnel interface bandwidth utilization for each remote site. Right-click on a remote site's name in the table, select Drill Down on that site, and select the DSCP Report
- This will filter the previous report so only the bandwidth of the selected site will be visible. This bandwidth will be categorized by DSCP value. Since the Per-Tunnel QoS QUEUING policy is referencing DSCP values for class selection, this report is the equivalent of a QoS bandwidth graph.
- For easy access to this report again, save the report. It can be viewed on-demand at any time or scheduled at daily, weekly, or monthly intervals.
Remote Ingress Shaping QoS
Remote Ingress Shaping (RIS) is a technique for protecting inbound tunnel traffic from casual internet traffic. It can protect VoIP, video, and priority tunnel data from internet-based TCP traffic. RIS is nothing more than a hierarchical QoS policy applied to a DMVPN router’s LAN interface in the outbound direction. A key configuration component of RIS is that the parent shaping policy will only use 95% of the internet circuit’s logical bandwidth. Usually, in DMVPN designs, internet service providers will use an Ethernet broadband connection with a CIR or cap on the throughput. The RIS QoS policy will shape to 95% of the CIR.
The configuration of a RIS hierarchical QoS policy is identical to other policies outlined in this document. The only differences will be highlighted.
- Click the Home icon to view the network diagram
- Click the QoS tab
- In the device list, right-click on a remote site device and select QoS > Manage QoS
- In the Manage QoS dialog window example below, a hierarchical policy named RIS_Shaping has already been created following the steps outlined in this document. In this example, the parent class's shaper is set to 95% of the logical bandwidth of the Internet circuit.
- Once the RIS policy has been created, select the Interfaces tab and apply the RIS policy to the Output of the LAN interface.
Validating DMVPN QoS with LiveAction
LiveAction can be used to validate DMVPN QoS performance in many ways, including:
- Reports
- Alerts
- Medianet (Cisco’s Medianet Performance Monitor)
Reports
LiveAction's real-time and historical reporting provides a granular view of how DMVPN QoS policies perform. LiveAction stores QoS data in raw format, with no averaging. This gives engineers the ability to drill down into QoS events and view them as if they were occurring in real time. LiveAction can also show QoS statistics from three angles: Before QoS, After QoS, and Drops. Let's look at examples of LiveNX real-time QoS screenshots.
This screenshot is an After QoS screenshot of VoIP and video traffic on a DMVPN internet interface.
This is a second After QoS screenshot. Notice how the VIDEO queue is amber. This indicates drops are actively occurring in this queue. QoS Alerts are being generated and logged when a queue is amber.
This is a screenshot of the QoS Drops view. The purple in the graph shows the drop rate of traffic in the VIDEO queue. The VIDEO queue is not amber, indicating that the drops are not actively occurring.
Alerts
LiveAction generates an alert whenever a configurable QoS threshold is crossed. These alerts are saved and can be reviewed with a historical search. LiveAction alerts can also be sent to an external Syslog or Email server. LiveAction’s QoS alerts include:
- Interface Drop Rate
- QoS Class Drop Rate
- QoS Class Default Rate
- Interface Bandwidth Utilization
- QoS Bandwidth Utilization
Medianet
LiveNX is a NetFlow collector and collects the Cisco Medianet Performance Monitor flow type. LiveNX can also generate the appropriate Medianet NetFlow configuration for many device types. This allows for easy enablement in a point-and-click fashion. Let's consider how Medianet flows can be used to validate the application performance of VoIP and video in a DMVPN network environment.
Below is a LiveNX topology diagram of a DMVPN network. The screenshot shows Medianet flows (VoIP and video) across the DMVPN network. Notice that the flow type pull-down is set to Medianet. In this example, all three monitored devices are exporting the Medianet flow type.
Real-time Medianet flows can be viewed in detail by double-clicking on a network device and selecting Medianet from the Flow Type pull-down. Both packet loss and jitter measurements of the RTP calls passing through this device are shown. Below is a screenshot of real-time Medianet flows.
Below is a second real-time screenshot of Medianet flows. Notice the cells in red in the Packet Loss Percentage column. This indicates a threshold was exceeded in LiveNX, and an alert is triggered for the poor performance of the call.
Medianet Flow Path Analysis
LiveNX presents an end-to-end visual of how VoIP/video calls perform across a network. It also correlates device, interface, and QoS statistics with the performance of the call. To see this end-to-end flow performance, follow these steps.
- Click on the Home icon, Flow tab, and click the Table icon
- The System Flow Table window will appear. Select the Medianet tab. This will show a list of VoIP/video (RTP) calls collected in the environment. The calls in red highlight problems.
- Right-click on a problem flow and select Show MediaNet Flow Path Analysis.
- This will execute a Medianet Flow Path Analysis. It will show the performance of each router or switch that the call passed through, the input and output interfaces at each hop, the input and output QoS policies at each hop, and the call's Medianet statistics. The example below shows packet loss on the 2nd device in the call's path. Click the Show Path button to visually see the performance of the call end-to-end.
- Below is an end-to-end visual view of the Medianet Flow Path Analysis. The device on the right is shown in red, visually indicating problems with the VoIP or video call on the DMVPN network.
Appendix A: DMVPN - Putting It All Together
Let’s tell the complete DMVPN QoS story for a sample network. Consider the following network diagram.
- This shows a small DMVPN network comprised of three sites: DATA_CENTER, REMOTE_SITE_B, and REMOTE_SITE_A.
- Each site is connected to the Internet with a 100 Mb hand-off
- REMOTE_SITE_B’s Internet circuit has a 1 Mb CIR
- REMOTE_SITE_A’s internet circuit has a 2 Mb CIR
Below is a second diagram of this network, but in this example, end-to-end flow visualization (based on NetFlow data) is shown across the network diagram representing VoIP and video applications on the DMVPN tunnel between the DATA_CENTER and REMOTE_SITE_A. Assume that no QoS is configured at any site, and only this one VoIP/video call is traversing the network (no other data, VoIP, video, or internet traffic.)
In this scenario, a Medianet Flow Path Analysis was executed in LiveNX on a VoIP call from the DATA_CENTER to REMOTE_SITE_A. The network shows the following hop-by-hop view for a VoIP or video call.
VoIP and video compete with data applications and Internet traffic for bandwidth. In this scenario, all applications compete for the end-to-end logical throughput: the 2Mb CIR enforced by the Internet provider at REMOTE_SITE_A. Consider the screenshot below. This diagram shows VoIP, video, and internal data passing between the DATA_CENTER and REMOTE_SITE_A inside the DMVPN tunnel. There is still no Internet traffic shown in this example. Assume that the data application is TCP-based and is being downloaded from the DATA_CENTER by REMOTE_SITE_A. The TCP data application is consuming so much bandwidth in the DMVPN that the 2Mb CIR of the service provider becomes congested and the provider starts dropping data.
If QoS is not configured, Medianet Flow Path Analysis will show the following hop-by-hop analysis with packet loss marked in red in the table and topology.
But, if the appropriate Per-Tunnel QoS policy was configured on the DATA_CENTER, this issue could be fixed. This example is a hierarchical QoS policy with a 2Mb shaping parent policy and a child queuing policy. The traffic sent in the DMVPN tunnel to REMOTE_SITE_A adheres to the remote device’s service provider CIR, and VoIP and video is protected inside the DMVPN tunnel.
Medianet Flow Path Analysis visually confirms the Per-Tunnel QoS policy is working, and that VoIP and video calls are performing well again.
The next screenshot shows what is more likely in a real network: VoIP, video, and data going through the DMVPN tunnel and casual Internet traffic, all competing for the same 2Mb CIR Internet circuit at REMOTE_SITE_A. Assume the casual Internet traffic is TCP and is consuming as much bandwidth as possible, just like the DMVPN data application.
Again, Medianet Flow Path Analysis visually shows the VoIP call performance issues at REMOTE_SITE_A with the red color below.
But, if the appropriate RIS QoS policy was configured on the REMOTE_SITE_A, inbound TCP-based Internet traffic could be throttled. This example would be a hierarchical QoS policy with a 1.9Mb shaping parent policy (95% of 2Mb CIR) and a child queuing policy. The traffic sent in the DMVPN tunnel to REMOTE_SITE_A would still be protected as it entered the tunnel, and the tunnel would be protected from the casual inbound TCP Internet traffic.
Medianet Flow Path Analysis visually confirms the RIS QoS policy applied to the egress of the LAN interface is working, and that VoIP and Video calls are performing well again.
What about WAN QoS for REMOTE_SITE_A? It will need a hierarchical QoS policy. Its parent policy will shape at 2Mb, and its child policy will queue VoIP, video, and high-priority DMVPN data.
What about a RIS policy at the DATA_CENTER? Depending on the traffic patterns at the DATA_CENTER device, a RIS policy may also be needed. If required, the DATA_CENTER's RIS policy would have a parent policy shaper configured to 95Mb (95% of the 100Mb hand-off) and a child queuing policy.
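A minimal sketch of such a DATA_CENTER RIS policy, assuming the same QUEUING child policy and a hypothetical LAN interface:

policy-map RIS_95Mb
 class class-default
  ! 95% of the 100Mb Internet hand-off
  shape average 95000000
  service-policy QUEUING
interface GigabitEthernet0/1
 description LAN
 service-policy output RIS_95Mb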
Appendix B: Sample DMVPN Branch Office QoS Configuration with RIS
Branch Office Per-Tunnel QoS DMVPN Configuration
policy-map QUEUING
 class VOIP
  priority percent 20
 class VIDEO
  bandwidth percent 30
 class MGMT_DATA
  bandwidth percent 5
 class CALL_SIGNALING
  bandwidth percent 5
 class CRITICAL_DATA
  bandwidth percent 10
 class class-default
  fair-queue
policy-map Internet_Shaping_2Mb
 class class-default
  shape average 2000000 20000 0
  service-policy QUEUING
policy-map RIS_2Mb
 class class-default
  shape average 1900000 19000 0
  service-policy QUEUING
interface Tunnel0
 description MULTI-POINT GRE TUNNEL
 ip nhrp group MY_REMOTE_GROUP_A
 tunnel mode gre multipoint
 tunnel source GigabitEthernet0/0
interface GigabitEthernet0/0
 description INTERNET
 service-policy output Internet_Shaping_2Mb
interface GigabitEthernet0/1
 description LAN
 service-policy output RIS_2Mb
Appendix C: Sample DMVPN Data Center Per-Tunnel QoS Configuration
Data Center Per-Tunnel QoS DMVPN Configuration
policy-map QUEUING
 class VOIP
  priority percent 20
 class VIDEO
  bandwidth percent 30
 class MGMT_DATA
  bandwidth percent 5
 class CALL_SIGNALING
  bandwidth percent 5
 class CRITICAL_DATA
  bandwidth percent 10
 class class-default
  fair-queue
policy-map Internet_Shaping_2Mb
 class class-default
  shape average 2000000 20000 0
  service-policy QUEUING
policy-map Internet_Shaping_1Mb
 class class-default
  shape average 1000000 10000 0
  service-policy QUEUING
interface Tunnel0
 description MULTI-POINT GRE TUNNEL
 ip nhrp map group GroupA service-policy output Internet_Shaping_2Mb
 ip nhrp map group GroupB service-policy output Internet_Shaping_1Mb
Managing DMVPN QoS with LiveNX
Minimize the impact of network congestion by setting QoS rules that reflect what is critical to your business environment.
Some networks are more complicated than others. This guide outlines the design framework for successful QoS deployment in Dynamic Multipoint VPN (DMVPN) environments.
Topics include:
- DMVPN QoS
- Network QoS Configurations
- DMVPN QoS Design
- DMVPN QoS Configuration
- QoS Pre-Classify
- Per-Tunnel QoS
- LiveNX Overview
- LiveNX DMVPN QoS Configuration
- Validating DMVPN QoS with LiveNX
LiveAction is an application-aware network performance management platform incorporating QoS, NetFlow, Packet analysis, threat detection, and more. LiveAction enhances understanding and control of the network by combining rich visualizations with application-level analysis.