The open source GNS3 platform is one of the absolute must-have tools for any networking professional. Definitely take a look at the GNS3 website, and especially this great video by Keith Barker of CBT Nuggets, to learn more about GNS3 and how it works. There are tons of other great blog posts and YouTube videos explaining how GNS3 works, so I won’t go into those details.
GNS3 is an amazing tool not only for learning networking in general; it’s also an indispensable tool for testing new topologies, proofs of concept, and more.
You can build new topologies, or find existing ones on the GNS3.com forums or on websites like GNS3 Vault. In this blog post, I am sharing one such topology, which I built for testing purposes; details are provided below.

Testing Dual MPLS WAN with QoS and more

Download the above GNS3 Lab Topology here (the topology kinda ended up looking like a sideways rocket – cool! :p )

Some enterprises make the mistake of relying on only one MPLS Service Provider (SP), which can expose them to outages. How do you introduce a second MPLS SP for better redundancy?

How do you make sure both SPs prioritize (via QoS) your different applications appropriately? How about utilizing one SP for some applications or application groups, and the other SP for another set of applications?
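One common way to steer a particular application group toward a preferred SP (while the other SP remains available as a backup) is policy-based routing on the CE router. The sketch below is purely illustrative — the ACL name, port range, interface, and next-hop address are hypothetical placeholders, not taken from the lab files:

```
! Hypothetical PBR sketch: send RTP-style voice traffic toward SP1's PE
ip access-list extended VOICE-TRAFFIC
 permit udp any any range 16384 32767
!
route-map PREFER-SP1 permit 10
 match ip address VOICE-TRAFFIC
 set ip next-hop 10.1.1.1
!
! Apply to the LAN-facing interface so incoming voice is policy-routed
interface FastEthernet0/0
 ip policy route-map PREFER-SP1
```

If the next-hop toward SP1 becomes unreachable, traffic falls back to the normal routing table (here, the path learned from the other SP).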

In this lab, I was tasked with testing (and later implementing) how dual WAN failover and QoS would work over two different MPLS Service Providers (assuming they support the same QoS Prec/DSCP values), while at the same time addressing the above challenges. In this scenario, the customer is running EIGRP and peering with both ISPs. The ISPs run MPLS in the core and redistribute the customer routes via MPLS VPN to the customer’s other sites.
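For readers new to MPLS VPN, the PE side of such a setup generally boils down to a VRF, an EIGRP VRF address family toward the customer, and mutual redistribution with MP-BGP. The fragment below is a rough sketch only — the VRF name, ASNs, RD/RT values, and metrics are hypothetical, not the lab’s actual configuration:

```
! Hypothetical PE router sketch (names, ASNs, and values are placeholders)
ip vrf CUST-A
 rd 65001:100
 route-target both 65001:100
!
router eigrp 1
 address-family ipv4 vrf CUST-A autonomous-system 100
  network 192.168.12.0 0.0.0.255
  ! carry remote-site routes from MP-BGP back into the customer's EIGRP
  redistribute bgp 65001 metric 10000 100 255 1 1500
 exit-address-family
!
router bgp 65001
 address-family ipv4 vrf CUST-A
  ! carry the customer's EIGRP routes across the MPLS VPN
  redistribute eigrp 100
 exit-address-family
```

The same pattern repeats on each PE, in each ISP, for every customer site.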
You can test failover between the ISPs by shutting down links at appropriate locations. Feel free to download the GNS3 lab topology, play with different failure scenarios, and modify it to better fit your particular scenario. If you are learning, try changing the customer sites to use OSPF, or try making changes in the MPLS portion of the network so that the ISPs’ private 10.0.0.0/8 addresses are hidden when the customer runs a traceroute, etc.
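As a hint for that last exercise: a typical approach is to disable IP TTL propagation into the MPLS label stack on the provider routers, so the label-switched core hops never answer the customer’s traceroute probes. A sketch (global config on the ISP routers; behavior may vary by IOS release):

```
! Stop copying the IP TTL into the MPLS label for forwarded (customer) traffic,
! so the provider core appears as a single hop in customer traceroutes
no mpls ip propagate-ttl forwarded
```

With this in place, a customer traceroute should jump from the ingress PE straight to the egress PE, hiding the 10.0.0.0/8 core addressing.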
Note: You need a PC with at least 6 GB of memory for this topology, as it consists of 13 routers. If you are on Linux, you can probably get away with 4 GB. This lab uses the following IOS image for all routers: c3725-adventerprisek9-mz.124-15.T14.bin .

You can also test the QoS and check to ensure that the policies are taking effect as demonstrated below.

(before sending traffic over ‘Voice’ network)

A#show policy-map interface loopback10
 Loopback10

  Service-policy output: MARK-VOICE

    Class-map: MARK-VOICE (match-all)
      1633 packets, 97980 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: any
      QoS Set
        dscp ef
          Packets marked 1633

    Class-map: class-default (match-any)
      0 packets, 0 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: any

(sending ICMP packets over the ‘Voice’ network)

A#ping 172.1.243.1 source 172.5.243.1

Type escape sequence to abort.
 Sending 5, 100-byte ICMP Echos to 172.1.243.1, timeout is 2 seconds:
 Packet sent with a source address of 172.5.243.1
 !!!!!
 Success rate is 100 percent (5/5), round-trip min/avg/max = 52/97/148 ms

(checking the ‘MARK-VOICE’ policy again to confirm the packets were indeed marked per the configured policy — note the counters incremented by the 5 echoes)

A#show policy-map interface loopback10
 Loopback10

  Service-policy output: MARK-VOICE

    Class-map: MARK-VOICE (match-all)
      1638 packets, 98040 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: any
      QoS Set
        dscp ef
          Packets marked 1638

    Class-map: class-default (match-any)
      0 packets, 0 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: any
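For reference, a marking policy consistent with the output above could be built along these lines using the MQC (this is a sketch inferred from the `show` output — the actual match criteria in the lab topology may differ):

```
! Sketch of a marking policy matching the output shown above
class-map match-all MARK-VOICE
 match any
!
policy-map MARK-VOICE
 class MARK-VOICE
  set dscp ef
!
interface Loopback10
 service-policy output MARK-VOICE
```

In a production design you would match real voice traffic (e.g. by ACL or NBAP/port) instead of `match any`; the loopback attachment here is just a convenient way to generate and observe marked test traffic in the lab.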