Thursday, April 14, 2016

My first IWAN deployment

First off, let me get some of my resources out in the open.  These were INVALUABLE, and I'd have failed if they had not been made available:

http://www.cisco.com/c/dam/en/us/td/docs/solutions/CVD/Feb2016/CVD-IWANDesignGuide-FEB16.pdf

http://www.cisco.com/c/dam/en/us/td/docs/solutions/CVD/Feb2016/CVD-IWANConfigurationFilesGuide-FEB16.pdf

http://www.cisco.com/c/dam/en/us/solutions/collateral/enterprise-networks/intelligent-wan/cvd-iwan-diadesignguide-mar15.pdf

http://docwiki.cisco.com/wiki/PfR3:Solutions:IWAN

Next, this is the personal blog of a friend of mine, who convinced me to take on this project---a GOOD READ on all things IWAN: http://spanport.net  -- He is the one who told me about the IOS-XE 3.13 code having MANY bugs...seriously, upgrade to 3.16.02.S.  We couldn't get MFR or site-to-site PfR working on the 3.13 version.

Lastly, I attended Cisco's IWAN "Design & Deploy for Impact" training.  Anything indicated in red (like this) I've included POST training, as it contradicts my original understanding.  I want you to know what I THOUGHT and what I was told (just in case you find yourself thinking the same thing).  Much thanks to Denise Fishburne, David Prall, Tom Kunath, & Mani Ganesan for putting me straight!

Now that we've got that out of the way, the statement of work essentially lists the following for our proof of concept:
  • 6 remote sites & 2 hub routers (1 MPLS & 1 INET).
  • Master controller for PfR resides on the MPLS HUB.  This goes against Cisco best practices, but will work for the purpose of our POC.
  • We're utilizing ISR 4331s for the remote routers as well as the hub routers.  Should this go prod we'll upgrade the hubs to something beefier.
  • We're using pre-shared keys for IPSEC auth...this will get upgraded to PKI once this goes prod.
  • We're using Cisco Prime for "Management of the IWAN infrastructure & spoke deployment."
That being said, you may be asking, "What is IWAN?"  IWAN is Cisco's flavor of software-defined WAN (SD-WAN).  Cisco's Intelligent WAN, or IWAN, is made up of the following pillars:

  • Transport Independence
You may have heard of the term "transport agnostic."  IWAN allows you to run an overlay network on top of any given provider, regardless of the underlying connectivity.  For example, you can connect sites with 4G internet, commercial internet, MPLS, or a combination of these.  Unlike MPLS, where we're heavily tied to the provider and have negotiated strict service-level agreements (SLAs), IWAN allows us to essentially hedge our bets; using multiple paths with different providers to ensure application performance.
  • Intelligent Path Control
PfRv3 is the magic behind IWAN.  We essentially use smart probing in addition to active data flows to test for delay, loss, and jitter.  Should a path fall outside of pre-determined metrics, IWAN will know preemptively whether there is a better path for a given service.  We can create policies for different application profiles based on differing amounts of delay and/or jitter, with different actions to perform should our application fall out of the acceptable bounds.
  • Application Optimization 
AVC, or application visibility and control...essentially NBAR2, allows the network administrator to identify >1400 applications.  Based on NBAR classification, we can use differentiated services code point (DSCP) markings to ensure our applications fall under the PfR policies that are created on our master controller (MC--read further).  Should NBAR not match on your home-grown application..you can still use the ISR's modular QoS to identify traffic.  It's VERY flexible!
  • Secure connectivity

Secure connectivity is established with the use of zone-based firewalls and front-door VRFs (FVRFs).


Our reason/goal of implementing IWAN is to provide an alternate solution for remote site connectivity.  Tired of spending $1000 on remote site T1 connections?  Well, with IWAN, we can theoretically have 2 commercial internet connections and utilize smart probing to determine the BEST path for our chosen applications. 

"But..but what about my SLA..."  SCREW your SLA.  If/when commercial internet #1 is determined to be lossy, for example sake, PfR's probing will say "HEY, VOICE, START USING THE OTHER PATH!"--overriding what exists in the routing table!  The beauty of this product is that the policies are centrally managed so that we can have an environment with hundreds of spokes..without having to statically control the routing!  If spoke A has a bad internet connection..the rest of the network will know to avoid that connection until it is determined to be resolved!

In addition to "getting rid of the T1s," we also want to get some control/visibility of our WAN!  We've all become soft and comfortable with the thought of simply handing our traffic to the ISP!  With IWAN (and other SD-WAN technologies, for that matter), our goal will be to 1. Get a better understanding of what we're running on our network and 2. Gain visibility of these applications so that we can better manage & troubleshoot.

Ok, off my soap box.   

But seriously, stop paying for those expensive ass T1s!!!!!

Now..if you review the configuration guide for IWAN there are a TON of configs on there..and if you aren't already comfortable with DMVPN, QoS, and basic EIGRP routing..you might want to go ahead and review those topics.  My goal is to talk IWAN (and honestly..PfR..as I'd never even messed with this before).  Furthermore, we'll discuss QoS..as there is quite a bit of QoS that is required to allow PfR to do its magic.      


Let's talk about our hubs

As I said in our overview, we'll have two hubs that we need to squeeze into an existing network: One for MPLS connectivity and one for INET connectivity.


Let's talk placement...

In an ideal world, both the routers would live at the edge of the network.  HAH, good luck with that.  The biggest thing I'm looking for in the placement is to provide physical redundancy.  In other words, I don't want an SFP, cable, and/or switch failure to cause loss of connectivity.  In this particular customer's network, we decided to hang the devices off a pair of Nexus 5Ks (and their respective FEXs).  ***WARNING*** this goes against Cisco best practice..but for our proof of concept purposes..it'll do!

Tom mentioned that while it may seem ideal to "clean up" the WAN edge at the hub once we've migrated all our sites to IWAN...keeping the IWAN routers "sitting" behind the CE allows for flexibility in the future.  The biggest benefit one may gain, as instructed by Tom, is that since we can't use nested QoS policies on the physical interface (as it breaks per-tunnel QoS)...having a separate CE router allows us the flexibility of using a hierarchical QoS policy that we would otherwise not be able to have!

Basic L2/L3 connectivity..

Now, here is where stuff gets exciting!  IWAN uses a concept called front-door VRF.  Essentially, this is a security mechanism that places the public-facing interface (yes, we'll do this for the MPLS one too..) in a separate VRF.  Logically, the "outside" and the "inside" legs into the network are completely separate...but PHYSICALLY, they are identical!  I accomplished this by using port-channel sub-interfaces.  Since our connection to the Nexus DC switches is purely L2, I created two sub-interfaces on our MPLS hub (po20.951 & po20.953), for example.  To create the inside & outside L3 legs, I created two SVIs on the core L3 switch, 951 and 953.  After assigning the SVIs IP addresses on the core, I put po20.951 into the "OUTSIDE" vrf.  Lastly, we assigned IP addresses on the hub routers in the corresponding subnets.  If you've completed everything correctly at L2 (create the VLAN instances), you SHOULD be able to ping from the hub router to the core router on both the global routing table (v953) as well as the OUTSIDE VRF (v951).
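For illustration, the hub-side sub-interface config looks something like this (VLAN numbers from above; the IP addressing is made up):

vrf definition OUTSIDE
 address-family ipv4
 exit-address-family
!
interface Port-channel20.951
 description OUTSIDE leg (front-door VRF)
 encapsulation dot1Q 951
 vrf forwarding OUTSIDE
 ip address 10.255.95.2 255.255.255.252
!
interface Port-channel20.953
 description INSIDE leg (global routing table)
 encapsulation dot1Q 953
 ip address 10.255.97.2 255.255.255.252

One gotcha: "vrf forwarding" wipes any IP address already on the interface, so apply it before the ip address statement.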

David Prall laughed when I told him that we have this setup!  "Do NOT use port-channels on the WAN edge of your IWAN hub."  While we can port-channel the INSIDE interfaces...we cannot port-channel on the WAN side, as it negates our per-tunnel QoS!  Well...shit!  What do we do to provide redundancy?  While we cannot use port-channels, we CAN have a separate physical path..we just can't channel them together!  This can be accomplished by using "tunnel source loopback#" and using separate paths to get to this loopback (notice the higher AD on one of the paths?)  I didn't include it...but one might want to use a track statement on the static routes so that we aren't just relying on the physical interface going up/down!
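Here's a rough sketch of that loopback-sourced redundancy (interface numbers, IPs, and the VRF name are placeholders); the 250 on the second static is the higher AD, making it a floating backup:

interface Loopback1
 vrf forwarding OUTSIDE
 ip address 192.0.2.1 255.255.255.255
!
interface GigabitEthernet0/0/0
 vrf forwarding OUTSIDE
 ip address 192.0.2.5 255.255.255.252
!
interface GigabitEthernet0/0/1
 vrf forwarding OUTSIDE
 ip address 192.0.2.9 255.255.255.252
!
ip route vrf OUTSIDE 0.0.0.0 0.0.0.0 192.0.2.6
ip route vrf OUTSIDE 0.0.0.0 0.0.0.0 192.0.2.10 250
!
interface Tunnel10
 tunnel source Loopback1
 tunnel vrf OUTSIDE

The upstream devices likewise need a route to the loopback over both links (again, with a higher AD on one).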




The "last" ..I'll say that a million times I'm sure.. thing you have to do is to connect the hub router to whatever internal routing protocol is being used internally.  In this scenario the customer is using OSPF as the IGP..so I'll need to get OSPF adjacency for global routing table; I'll use static routing for my OUTSIDE VRF connectivity.  In this scenario we'll be using EIGRP as your routing protocol for DMVPN.  The hub routers will be our point of redistribution...we'll come back to this as it requires some delicate handling.

Why EIGRP, though?  The "main" reason seems to be that if you're using OSPF...the customer probably doesn't want EIGRP on their network (trying to stay off proprietary protocols?).  An alternative is BGP!  Aside from the reason I listed, one might argue that BGP offers benefits that EIGRP cannot--primarily being the granular nature in one's ability to control the routing.

MPLS

This is REALLY going to depend on your MPLS environment.  For this customer, they're using BGP for PE-CE communication while advertising a default route to all spokes.  My goal from a hub perspective is simply to make sure I have a route OUT to the core.  I simply slapped a default route on my OUTSIDE vrf with a next-hop of the L3 core...and since my core has a route to all my spoke sites' MPLS addresses...that'll do it!
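That default route is a one-liner (next-hop IP is made up):

ip route vrf OUTSIDE 0.0.0.0 0.0.0.0 10.255.95.1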

INET

So far we've ONLY discussed our MPLS router...as this is the easiest.  Heck, the INET hub placement is IDENTICAL, with the only difference being that NAT is involved.  What?!  NAT?!  Yea, we're statically NATing the "source" of our DMVPN connectivity & allowing the basic GRE/IPsec "stuff" on the firewall.  I used another static default on the OUTSIDE VRF, but with a next-hop of our firewall's inside interface.  In addition to the firewall..we have a basic ACL applied on the outside interface allowing the same GRE/IPsec "stuff."
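The GRE/IPsec "stuff" boils down to ESP, ISAKMP (UDP 500), NAT-T (UDP 4500), and--until tunnel protection is applied--plain GRE.  A sketch of the router-side pieces (addresses and names are made up):

ip route vrf OUTSIDE 0.0.0.0 0.0.0.0 10.255.99.1
!
ip access-list extended ACL-INET-PUBLIC
 permit esp any host 10.255.99.2
 permit udp any host 10.255.99.2 eq isakmp
 permit udp any host 10.255.99.2 eq non500-isakmp
 permit gre any host 10.255.99.2
 permit icmp any host 10.255.99.2 echo
 permit icmp any host 10.255.99.2 echo-reply
!
interface GigabitEthernet0/0/1
 ip access-group ACL-INET-PUBLIC in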

Once my hub routers are on the network, the first thing I do is configure my tunnel interfaces without any crypto or routing configured.  My goal is to get this working in phases; first get DMVPN connectivity, then apply the crypto, then get my routing configured, and THEN worry about QoS/PfR.  If you try to apply ALL the configurations out the gate..good luck troubleshooting anything, should/WHEN it doesn't work.

The hubs are in place...

Assuming we did everything correctly on the hub...let's set up a spoke!  This customer site had a test environment that proved to be invaluable!  In our test environment we have both an MPLS connection (T1) and a commercial internet connection.

With the device connected to both providers, we set up both "OUTSIDE" interfaces the same as we did on the hub; IWAN-TRANSPORT-1 for the MPLS interface and IWAN-TRANSPORT-2 for the INET interface.  Because we are now using a VRF for MPLS connectivity, we have to modify BGP to use the address-family associated with the MPLS VRF instance!

router bgp 12345
 no bgp default ipv4-unicast
 !
 address-family ipv4 vrf IWAN-TRANSPORT-1
  neighbor 1.2.3.4 remote-as 54321
  neighbor 1.2.3.4 description TO_MPLS_PROVIDER
  neighbor 1.2.3.4 password 7 2304982034820384
  neighbor 1.2.3.4 version 4
  neighbor 1.2.3.4 activate
 exit-address-family

Now....assuming we can ping 1.2.3.4 if we source our pings from VRF IWAN-TRANSPORT-1 and our password/remote-as is correct..BGP SHOULD come up.

We can verify our BGP adjacency by performing a "show bgp vpnv4 unicast all summary".  As I said earlier, we are only advertising a default route into BGP at the hub site...so we should expect to see "1" PfxRcd.  

On the internet side...you should simply need to slap an IP address on the interface and verify you can ping out to the internet sourced from that VRF: ping vrf IWAN-TRANSPORT-2 8.8.8.8.

One thing that I'll note is that how you configure the internet-facing interface "depends."  If you're going to have central internet connectivity, or sending all internet through the hub, then you'll only need an ACL like we used on the INET hub.  BUT...if we decide to go with direct internet access (we'll come back to this), then we'll use a zone-based firewall on that outside interface!

The first phase...DMVPN

Now that our hub and our spokes should have basic connectivity, time to put on the next layer: DMVPN.  Honestly, by the time you're done implementing everything..your tunnel interfaces are going to look ridiculous.  You'll have per-tunnel QoS, multicast, and IPsec....but as I said, let's start without all the gobbledygook.  Should you have any issues getting DMVPN connectivity, the first thing you'll want to check is whether you have "tunnel vrf <VRF>" applied.  This little command is what tells DMVPN "Hey, use this VRF to form the underlay!"  If DMVPN is not forming, verify you have reachability by using good ole' ICMP.  Can you ping the hub's "tunnel source" from the spoke's "tunnel source?"  Lastly, verify that you have the NHRP nhs and nbma in the correct order!
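For reference, a bare-bones spoke tunnel before crypto/routing looks something like this (addresses, key, and IDs are made up); note the "tunnel vrf" statement and the nhs/nbma ordering:

interface Tunnel10
 ip address 10.6.34.11 255.255.254.0
 ip mtu 1400
 ip tcp adjust-mss 1360
 ip nhrp authentication cisco123
 ip nhrp network-id 101
 ip nhrp nhs 10.6.34.1 nbma 192.168.6.1 multicast
 tunnel source GigabitEthernet0/0/0
 tunnel mode gre multipoint
 tunnel key 101
 tunnel vrf IWAN-TRANSPORT-1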

The second phase...IPsec

Now that we have DMVPN connectivity, let's put on our IPsec layer!  As I said earlier, we're using pre-shared keys currently, and I'll update this once we get cert-based auth working!  But honestly..there isn't a whole lot to say here..as Cisco's configuration guide has made this EASY.  The only thing I'll say is that if you are doing this to a remote site...do the remote site first!  Once you apply the tunnel protection profile...if both sides aren't IPsec ready..you'll lose connectivity!  If you have ANY issues, verify your phase 1 and phase 2 configurations--you should have mirroring transform-sets!  Be sure to verify that your traffic is being encrypted!--show crypto ipsec sa and verify the numbers are incrementing!
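For the curious, a minimal IKEv2 pre-shared-key sketch in the spirit of the CVD (names and key are placeholders):

crypto ikev2 keyring DMVPN-KEYRING-1
 peer ANY
  address 0.0.0.0 0.0.0.0
  pre-shared-key MySharedSecret
!
crypto ikev2 profile DMVPN-IKEV2-PROFILE-1
 match fvrf IWAN-TRANSPORT-1
 match identity remote address 0.0.0.0
 authentication remote pre-share
 authentication local pre-share
 keyring local DMVPN-KEYRING-1
!
crypto ipsec transform-set AES256-SHA esp-aes 256 esp-sha-hmac
 mode transport
!
crypto ipsec profile DMVPN-IPSEC-PROFILE-1
 set transform-set AES256-SHA
 set ikev2-profile DMVPN-IKEV2-PROFILE-1
!
interface Tunnel10
 tunnel protection ipsec profile DMVPN-IPSEC-PROFILE-1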

The third phase...Routing

This is where stuff can get squirrely: Routing.  I do NOT want to introduce any routing loops into my network...so there will be NO redistribution until I have all my routes tagged appropriately!  


Here are my goals with route-tagging:
  • Do NOT let my spokes advertise out anything they've learned from the hub.
    • Configure the spokes as EIGRP stubs.
    • Block anything with tags 101, 102, 103, or 104 outbound.
  • Do NOT let my spokes advertise a default route.
    • Block the prefix 0.0.0.0/0 outbound.
  • Do NOT let my hub routers learn anything advertised from the OTHER hub routers.
    • Block 101 & 102 inbound on the tunnel interfaces.
  • Do NOT let my hub routers redistribute anything BACK into OSPF that was learned via OSPF.
    • Tag & block on the hub routers (tag 10 & block 20 on one hub, the inverse on the other).



As you can see..some of this is a bit redundant.  For instance, I'm not allowing either IWAN hub to learn anything from the other IWAN hub...even though my spokes are blocking learned routes from being advertised.  The point of this is to have MULTIPLE layers of blocking, should a spoke be added that doesn't mirror other spoke configurations.

Again, follow the CVD for the EIGRP configuration--you can't go wrong!  I'm likely a bit too obsessed with routing-loops...so I put in a couple more things to avoid it!
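To give a flavor of the tag-and-block approach, here's a spoke-side sketch in named-mode EIGRP (names and AS number are made up; tag values per the list above):

ip prefix-list PL-DEFAULT-ROUTE seq 10 permit 0.0.0.0/0
!
route-map RM-SPOKE-OUT deny 10
 match tag 101 102 103 104
route-map RM-SPOKE-OUT deny 20
 match ip address prefix-list PL-DEFAULT-ROUTE
route-map RM-SPOKE-OUT permit 100
!
router eigrp IWAN-EIGRP
 address-family ipv4 unicast autonomous-system 400
  topology base
   distribute-list route-map RM-SPOKE-OUT out Tunnel10
   distribute-list route-map RM-SPOKE-OUT out Tunnel11
  exit-af-topology
  eigrp stub connected summary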

David/Denise--you got me again!  While my design DOES stop routing loops...it also could potentially break routing in general between spokes!  OK--here is our scenario:  

Under normal conditions there are no problems--spoke A can talk to spoke B & each spoke can talk to the hub JUST fine.  Now, what if spoke A's MPLS path is down & spoke B's INET path is down?  Well....shit.  Instead of blocking ALL routes from being learned...we should simply poison them (delay 25000 on the upstream link) or advertise a summary to the BR (remember--longest prefix wins).  That way, should there be a path-down scenario like we discussed..spoke A can still talk to spoke B (in a hub-and-spoke fashion) by transiting through the other BRs!  To summarize, my original route-tagging/blocking would stop each hub BR from learning about the paths via the other hub.  We WANT them to learn the path...just, under normal conditions, NEVER use it.
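The poisoning itself is just an interface-level delay (the value is in tens of microseconds; the interface number is hypothetical):

interface Tunnel11
 description Keep these routes learnable, just unattractive
 delay 25000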

You might notice in the CVD that we're summarizing on the hub to the spokes.  We'll get more into this once we get to the PfR configuration.  This is possible because we're using phase 3 DMVPN.  Remember, if we had been using phase 2, then each spoke would need a more specific route to allow for spoke-to-spoke communication!  If we weren't using DIA..I'd have simply advertised a default to all my spokes!

At this point we should have EIGRP adjacency to our spokes.  If we followed the CVD, we should see that our MPLS path (tunnel 10) is what is installed in the RIB, as the CVD has us configure a higher delay on the INET path's tunnel interface.  This is important, as while PfR will allow us to override the routing table, we want to ensure we don't have asymmetric routing if our destination is not yet PfR controlled (i.e. you haven't cut all of your remote sites over to IWAN!).

***Per Cisco, it is on the roadmap for EIGRP to include EIGRP stub-site & stub-site wan-interface configurations...this will do what we're doing with route tagging!!***

The fourth phase...QoS/PfR

QoS and PfR are the magic of IWAN.  Seriously, you can get DMVPN/EIGRP set up in a day..but fine tuning your PfR policies can be a non-stop process.

One thing worth noting is that you may see that I have a nested child policy on the spoke..but not on the WAN hubs.  The reason for this has to do with per-tunnel QoS.  Per Cisco, we cannot have a nested child policy on the WAN hubs, as this "breaks per-tunnel QoS."

Furthermore, I used port-channels on my WAN edge at the hub...this is a no-no for the same reason as using hierarchical QoS--it breaks per-tunnel QoS.  When this goes production, I'll be sure to have a separate physical path for the inside & outside!


This is the gist of what we're going to try and accomplish.  The first thing I want to talk about is QoS tagging.  Cisco recommends an end-to-end QoS policy..where we're marking/classifying as close to the source as possible.  Unfortunately...I'm not going to re-do this customer's QoS policy...that is just wwwwwwwwwwwaaaaay out of scope.  To get around the fact that they lack a true QoS design, see "DSCP-MARKUP."  While the ISR 4331s support NBAR2, or "Next Generation NBAR," we aren't going down that road for the POC.

The main applications this customer has running across their network are Exchange, Citrix, Voice, Video, and McAfee.  I simply went with using an ACL to classify/mark the inbound traffic on the devices.  That being said, Cisco Prime has some REALLY good templates that you can "borrow" that get into some really neat classification using NBAR!  The point of this markup is ENTIRELY for PfR purposes, as we'll go ahead and discuss now.

PfR configuration is scarily easy.  Seriously, follow the CVD.  The only points worth mentioning are loopback reachability, prefix-list application, and policy creation.

First off, just make sure all your BRs have reachability to the MC's loopback that you're using for PfR.  That's it!
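The PfR plumbing itself is only a few lines; a rough sketch of the hub (MC + BR on the same box, per our POC) and a spoke (loopbacks and the domain name are placeholders):

! Hub (MC + BR on the MPLS hub)
domain IWAN
 vrf default
  master hub
   source-interface Loopback0
  border
   source-interface Loopback0
   master local
!
! Spoke (each branch is BR + branch MC)
domain IWAN
 vrf default
  border
   source-interface Loopback0
   master local
  master branch
   source-interface Loopback0
   hub 10.6.32.251
!
! And each tunnel gets tied to a path name:
interface Tunnel10
 domain IWAN path MPLS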

For the longest time I could not for the life of me figure out what the prefix-lists were for...here is my attempt at explaining that:  

The site-prefixes are the prefixes that your hub is advertising to the spokes.  These prefixes are what is used for smart probing.  You have different ways of approaching this: create a summary route in EIGRP for 10.0.0.0/8 and include a prefix-list that only includes 10.0.0.0/8, OR have a HUGE prefix-list that includes every prefix that the hub advertises to the spokes.
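Expressed as prefix-lists, the two options look something like this (subnets and names are made up):

! Option 1: one summary (also summarized in EIGRP toward the spokes)
ip prefix-list DC-PREFIXES seq 10 permit 10.0.0.0/8
!
! Option 2: enumerate everything the hub advertises
ip prefix-list DC-PREFIXES seq 10 permit 10.4.0.0/16
ip prefix-list DC-PREFIXES seq 20 permit 10.5.0.0/16
ip prefix-list DC-PREFIXES seq 30 permit 10.6.0.0/20
!
! Either way, it's applied under the hub MC:
domain IWAN
 vrf default
  master hub
   site-prefixes prefix-list DC-PREFIXES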

Why would you need this?  Well, the documentation on this is fuzzy at best..but my interpretation (and one that my peers seem to agree with) is that while the spokes learn about other spoke prefixes dynamically, the site-prefix covers the hub, or data-center-learned, networks.  This part is NOT dynamic.

So if we talk about the first option (summarized 10.0.0.0/8), we'll be sending probes for this prefix and the respective traffic-classes.  For example, if we have DSCP markings for EF, AF41, AF31, and 0....we'd have probing for the 10.0.0.0/8 network for the 4 traffic-classes.  Alternatively, if we included all the subnets in the prefix list (second option), we'd have probing for each traffic-class of each prefix.  But what does that mean?  My interpretation is that this prefix-list is a balancing act; create too small a prefix-list and your probing isn't sufficient.  Create too large a prefix-list and you'll kill your router's CPU with probes.

That being said, your prefix-list MUST match what is in the RIB.  For example, if you aren't summarizing 10.0.0.0/8 but include 10.0.0.0/8 in the prefix-list..then the only thing PfR will be probing for will be the EXACT 10.0.0.0/8 prefix, nothing with a longer prefix!!!!

David Prall said I'm incorrect on this!  The only thing you need to do is ensure that you have a parent route for any site-prefixes learned from the hub!  While "overloading" the hub is unrealistic, having too small a site-prefix IS an issue.  For example, if we summarized 10.0.0.0/8 from the hub & used a site-prefix of 10.0.0.0/8....should ANY source-dest traffic for the particular marking from the hub fall out of policy, EVERYTHING is moving over to the alternate path.  To make this perfectly clear....you have a summary for 10.0.0.0/8 & your site-prefix is 10.0.0.0/8, but you have voice traffic going to 10.0.0.27/24 & 10.100.5.9/24....if there is voice latency going to the 10.100.5.9/24 destination....it's going to swing this voice traffic AND 10.0.0.27/24 over to the alternate path (assuming this path is better).  Alternatively, if we were to have a site-prefix list including 10.0.0.0/24 & 10.100.5.0/24 and experienced latency to 10.100.5.9...we'd only swing 10.100.5.9/24 to the alternate path, leaving 10.0.0.0/24 where it is!

Furthermore, the site-prefix does not have to have a 1-to-1 match in the RIB--you simply need a parent route!


Please see the PfR wiki for more information on the probing!

Now let's discuss the enterprise-prefix.  The enterprise-prefix list is, in my understanding, mainly used to differentiate enterprise from internet traffic.  If a prefix is a destination that falls OUTSIDE of this prefix-list, then the traffic will show as "INTERNET" and will be load-balanced.  If your prefix is within the range and not learned via a site-prefix, then it will not be included in PfR's probing/control, but will simply fall back on the routing table to avoid asymmetric routing.  Ultimately, this won't matter if all of your spokes are IWAN/PfR controlled..but is a stopgap until you have all of your spokes converted.
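Configuring it is just another prefix-list applied under the hub MC (the 10/8 range here is an example):

ip prefix-list ENTERPRISE-PREFIXES seq 10 permit 10.0.0.0/8
!
domain IWAN
 vrf default
  master hub
   enterprise-prefix prefix-list ENTERPRISE-PREFIXES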

Thanks Mani on this one!  By default, if the traffic is matched by your enterprise prefix-list...it by DEFAULT falls back to the routing table (as I said).  BUT, you can configure "load-balancing" under the PfR policy to load balance this traffic.  The only traffic that is load-balanced is the non-"performance" traffic (aka the traffic that is tracking on delay, jitter, loss).  Because we cannot track on delay, loss, jitter...we're only tracking on reachability.  

Tom brought up an interesting scenario on this topic!  Salesforce.com resolves to a public IP address (outside of our enterprise prefix-list range).  What if we ONLY want salesforce.com traffic on our INET path, never on MPLS?  We can add an entry in our PfR site-prefix list for the specific prefix!  Once added, we can add an entry in our PfR policy with path preference, as this is now technically "PfR controlled!"
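A sketch of Tom's trick (the salesforce.com prefix and class/sequence values are made up):

! "Control" the public range by putting it in the hub site-prefix list
ip prefix-list DC-PREFIXES seq 50 permit 198.51.100.0/24
!
! Then a class with path preference pins it to the INET path
domain IWAN
 vrf default
  master hub
   class BEST-EFFORT sequence 40
    match dscp default policy best-effort
    path-preference INET fallback MPLS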

Lastly, let's discuss the PfR MC policy!  I'll first say that I have not and will not modify the default policies (voice, video, low-latency-data, and/or bulk-data).  The most I've done is modify the classes to include the DSCP markings that are included in my "MARKUP" policy.  For example, this customer made it clear that they ONLY wanted voice/video on the MPLS..and the rest to take the internet path.  Well, that's easy enough--I simply made sure my path-preference was MPLS fallback INET for my voice/video classes and the inverse for the rest.
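In my case the tweaks look roughly like this (class names and sequence numbers are arbitrary; the policy templates are the built-ins):

domain IWAN
 vrf default
  master hub
   load-balance
   class VOICE sequence 10
    match dscp ef policy voice
    path-preference MPLS fallback INET
   class VIDEO sequence 20
    match dscp af41 policy real-time-video
    path-preference MPLS fallback INET
   class CRITICAL-DATA sequence 30
    match dscp af31 policy low-latency-data
    path-preference INET fallback MPLS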

Now, what if your prefix is PfR controlled BUT there is nothing in the PfR policy?  For instance, 10.1.0.0/16 is in our site-prefix list, but there is nothing for DSCP 0?  By default, it will use the routing table to determine the path to use.  What if we don't want it to ONLY use the path in the routing table?  Cisco's recommendation is to use "load-balancing" within the PfR policy!  By doing so, PfR load-balances this traffic across BOTH paths and tries to use a variance of 20% between the paths.  THIS CAN CAUSE ASYMMETRIC ROUTING....just an FYI..but yea, get over it!

EVEN if we have load-balancing configured---it will ONLY load-balance our non-performance "stuff."  "Stuff" being things within our policy that do not have priority1, priority2, etc...like voice, for example.  IF you have path preference, though, it will not load-balance (obviously).



Furthermore, one thing we can look into is using "INET fallback routing" so that we don't have to rely on probing across our MPLS path!





Once you have PfR connectivity, a few things worth checking:

show domain IWAN master policy
>This will verify that the spokes have learned the policy that has been configured on the MC.

show domain IWAN master traffic-classes summary
>We expect to see a correlation between the DSCP values to the exit, matching the policy on the MC. 

show domain IWAN master traffic-classes dscp <dscp value>
>We expect to see information about that exact DSCP value.  This will tell us more regarding the history; should we have issues, we'll see the changing exits.

show domain IWAN master traffic-classes route-change <reason>
>This will give us a higher-level view of the PfR domain.  If you see multiple traffic-classes with changes due to issue X..then it will give you a starting point in troubleshooting.

show domain IWAN master site-prefix
>This gives a great view of prefixes learned either dynamically or via the MC's site-prefix list.  One thing worth noting is the "*10.0.0.0/8" entry with a site-id of 255.255.255.255.  This is from the MC's enterprise prefix-list!

The last piece I'm going to discuss...per-tunnel QoS "stuff"

Ok, like I said, one can get LOST in the web of QoS that is involved in IWAN.  The first thing we'll discuss is the per-tunnel QoS.  What is the purpose?  Well, imagine a remote site with a T1 (1.5Mbps) that has DMVPN connectivity to a hub site with a 100 Mbps MPLS connection.  Is it possible that the hub could send traffic faster than the remote site's T1 can handle?  Absolutely.

Per-tunnel QoS is simply a method of allowing the spoke to communicate with the hub to say "Hey, send traffic to me at rate X."  In our scenario, we created multiple per-tunnel QoS policies, given the varying bandwidth allowances for the different POC locations.  For example, a site with 50Mbps down/10Mbps up would subscribe to the 50 Mbps policy, as it could potentially receive 50Mbps from the hub.  

When creating these policies on the hub, we do two things: Allocate bandwidth percentages & set dscp tunnel values.  The first portion is so that we can guarantee bandwidth to the important classes (voice and video) while allowing a remaining percentage to our mission critical/bulk data.  Secondly, we assign dscp tunnel values so that if/when the traffic gets to someone who CARES about DSCP markings (i.e. our MPLS provider)..that they treat the traffic according to the contracted SLA!
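A sketch of one such policy pair on the hub, plus the spoke side that subscribes to it (class names, group names, and percentages are made up):

class-map match-any VOICE
 match dscp ef
class-map match-any INTERACTIVE-VIDEO
 match dscp af41
class-map match-any CRITICAL-DATA
 match dscp af31 af21
!
policy-map RS-GROUP-50MBPS-CHILD
 class VOICE
  priority percent 10
  set dscp tunnel ef
 class INTERACTIVE-VIDEO
  priority percent 20
  set dscp tunnel af41
 class CRITICAL-DATA
  bandwidth percent 40
  set dscp tunnel af21
 class class-default
  bandwidth percent 25
  set dscp tunnel default
!
policy-map RS-GROUP-50MBPS-POLICY
 class class-default
  shape average 50000000
  service-policy RS-GROUP-50MBPS-CHILD
!
! Hub tunnel maps the NHRP group to the policy...
interface Tunnel10
 ip nhrp map group RS-GROUP-50MBPS service-policy output RS-GROUP-50MBPS-POLICY
!
! ...and the spoke simply subscribes to the group:
interface Tunnel10
 ip nhrp group RS-GROUP-50MBPS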

Aside from our per-tunnel QoS, we're also doing some shaping on the physical interfaces!  The purpose of this is to avoid policing at the ISP edge.
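That shaper is a simple shape-only policy on the physical interface, set slightly under the contracted rate (shaping to 95 Mbps on a hypothetical 100 Mbps circuit here):

policy-map WAN-INTERFACE-SHAPE-100M
 class class-default
  shape average 95000000
!
interface GigabitEthernet0/0/0
 service-policy output WAN-INTERFACE-SHAPE-100M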

Things to avoid...

no ip unreachables
I'm guilty of this myself...it's a habit, I get it.  DO NOT configure this on your physical WAN interfaces...IT BREAKS PMTUD.  Look into "ip icmp rate-limit unreachable"
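That alternative keeps unreachables enabled (so PMTUD still works) while throttling them; for example, one per second:

ip icmp rate-limit unreachable 1000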

no next-hop-self
This is a phase 2 DMVPN command.  Phase 2 DMVPN honestly has no place in modern DMVPN implementations, as it lacks support for summarization.  Mainly, phase 2 DMVPN traffic is process-switched until the NBMA next-hop is determined...why put that CPU overhead in the mix?

Miscellaneous notes...

  • If using multicast, set the spoke pim dr-priority to 0.  Hell, set it to 0 just in case.
  • NHRP no-unique....allows branch to overwrite itself.
  • START with zone based firewall; may want DIA one day...
  • Per Cisco, max of 10 PfRv3 interfaces
  • Tunnel key used to differentiate AFTER encryption
  • If we don't disable NHRP route-watch, we CANNOT use spoke-to-spoke communication.  By disabling it, we're telling NHRP to ignore the check to see if there is a parent route.  Furthermore, we tell PfR to TAKE control to validate the path with smart probes. 
  • "Future is that all devices are BRs AND MCs at branches---dedicated MC at hub for sake of sparing the CPU"
  • CANNOT USE PORT-CHANNEL WITH ECMP----WEIGHT ONE LINK WITH separate link to another router with a separate L3 interface with a floating static route & source tunnel on loopback.
  • Configure "BW ingress" on an interface so the numbers are correct in the show interface.
  • QoS with port-channels......UGH.."port-channel load-balancing vlan-manual" global config.
  • path-pref MPLS1 MPLS2 fallback INET -- MPLS1 MPLS2 = OR; if site has links to one or the other it will choose the one it has.  If it has BOTH..it will try and load share across both.
  • If you have a site with a data VLAN, you may find that it is not "load-balancing" traffic across both paths--WHY!?  Because with the single site VLAN, DSCP 0 traffic, for example, gives us only one traffic-class--we don't have enough granularity!  The only traffic-class available is pinned to the one path!  If we want more granular load-sharing..break up that site /24 into 2x/28s...now we can load balance across both paths (should it be required for balancing purposes).
  • How can we "trick" IWAN into controlling a public IP address (example Sales Force)?  Well, by default, SalesForce's public IP address is..well, public!  As a result PfR will say "LOAD BALANCE THAT BAD BOY!!!!!!!!!!!!!!!"  If we don't want to load balance, we'll need to 1) "Trick" IWAN into controlling this by adding the public IP address into the MC site-prefix list.  2) We can create a policy that matches DSCP value of 0 and says PATHA fallback PATHB.  
  • Probing....
    • Probing is to fill empty time between active traffic and ageout timer (5 minute default) -- We can modify this timer...but do we want constant traffic?
    • While there is data traffic..probing is sent 1 packet every 1/3 monitor interval (default is 30 seconds)...we can lower this value for more critical applications---this is called "quick monitor."
    • We can configure only ONE quick-monitor interval...so at 4 seconds, for example, we can assign multiple DSCP values to it..but no other intervals (not 1 second, not 2 seconds, etc).  Just know that this increases the traffic to the MC, as the monitor interval is how often information is collected and sent to the MC to make decisions!