Our DMVPN spokes are Cisco 881 routers connecting back to a Cisco ASR 1006 hub. We run the standard MTU settings on our DMVPN tunnels: an IP MTU of 1400 and a TCP maximum segment size (MSS) of 1360.
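For reference, the spoke-side tunnel settings look something like this (a minimal sketch; the interface names and tunnel mode are placeholders, not our actual config):

interface Tunnel0
 ! Account for GRE/IPsec overhead so tunneled packets are not fragmented
 ip mtu 1400
 ! Clamp TCP MSS so TCP sessions through the tunnel fit inside the 1400-byte MTU
 ip tcp adjust-mss 1360
 tunnel source FastEthernet4
 tunnel mode gre multipoint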
Everything is working fine and dandy until the night of an F5 upgrade. Post-migration all users are working...except one user, who happens to be one of our F5 admins! He calls me up to verify our DMVPN connectivity, since his connection going down would seem to point to all of DMVPN being down.
Weird...my DMVPN connection is just fine, and so are the other 90+ spoke connections! The first thing we see after I get him access to the device is that it appears to be failing phase 1 negotiation: the ISAKMP SA is cycling between "MM_NO_STATE" and "MM_KEY_EXCHANGE."
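(For anyone following along, those states come from the spoke itself; something like the following is what you'd check, with the debug as an optional extra:)

! Show the ISAKMP (phase 1) security associations and their current state
show crypto isakmp sa
! Optionally, watch the phase 1 negotiation in real time
debug crypto isakmp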
Hm...did his device lose its certificate somehow (we don't use pre-shared keys)? Nope: the cert is there and appears to be active. We try renewing the certificate and deleting the old one--no dice! After exhausting all the troubleshooting I feel comfortable doing--since I'm walking him through the CLI over the phone--we call it a night.
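(For the curious, the cert checks above boil down to something like this on IOS; the trustpoint name is just a placeholder:)

! Verify the router certificate and CA certificate are present and valid
show crypto pki certificates
! Re-enroll against the trustpoint to request a fresh certificate
crypto pki enroll MY-TRUSTPOINT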
The next day we get a packet capture and see the following (sorry for blotting out the head-end IP):
Hm....that doesn't look good.
Here is a look at the first fragment:
As you can see, the total length is 1492 bytes (including the 20-byte IP header).
This fragment is even smaller!
If you exclude the header, it's only 8 bytes!
Based on the above line in the packet capture, the head-end did not receive all of the fragments before its reassembly timer expired, so it sent an ICMP time-exceeded (fragment reassembly time exceeded) message back to the source, saying "HEY! I didn't get all the fragments--try again!" The capture bears this out: the negotiation continuously restarts and never completes.
Let's compare this capture against one from a working spoke (my 881, for example!):
Based on the output, there still appears to be fragmentation. Bummer. Let's take a look at these fragments!
Similar to the first fragment from the problem 881...no big deal.
Interesting...this fragment (2 of 3) is much larger than the 8-byte fragment we received from the problem child.
Why is this stuff being fragmented at all? If we lower the MTU on the physical interface of the 881 (tested on my 881), all that happens is the total length of the initial fragments goes down--the fragments never go away. Most traffic on the internet is either TCP or small-packet UDP, but the IKE protocol has the rare distinction of producing large UDP packets. Those packets just happen to be packets #5 and #6 in IKEv1 main mode (or the IKE_AUTH packets in IKEv2), especially when using certificate authentication, since the certificates themselves are carried in those exchanges.
Ok, I can sleep at night knowing this is a known issue when running IKE with certificates (versus pre-shared keys). But why this user in particular? Remember that weird 8-byte fragment? This user happens to be running good ole' DSL! His DSL connection uses PPPoE, which adds 2 bytes of PPP header and 6 bytes of PPPoE framing to every packet. Is it possible that his DSL connection is adding just enough overhead to cause even further fragmentation?
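A quick bit of back-of-the-napkin math supports the theory (assuming the 881 fragments the big IKE packet at its default 1500-byte MTU, and the PPPoE path can only carry 1492):

- The first 1500-byte fragment = 20-byte IP header + 1480 bytes of data.
- PPPoE effective MTU = 1500 - 2 (PPP) - 6 (PPPoE) = 1492 bytes.
- Re-fragmenting that 1500-byte fragment for a 1492-byte path: first piece = 20-byte header + 1472 bytes of data = 1492 bytes; second piece = 20-byte header + 8 bytes of data = 28 bytes.

That leftover 8 bytes of data is exactly the runt fragment we saw in the capture.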
Last night I called him and asked him to try something for me: modify the physical interface to have an ip mtu value of 1400, versus the default of 1500.
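Something like this on the outside interface (the interface name is just an example; a DSL setup terminating PPPoE on a Dialer interface would put it there instead):

interface FastEthernet4
 ! Leave headroom for the PPPoE overhead so the large IKE fragments fit on the DSL link
 ip mtu 1400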
"SWEET JESUS THE CONNECTION IS BACK!" Almost immediately after modifying the MTU to 1400...the VPN connection was successfully built.
Another stupid MTU finding!
In this scenario we have a customer running OTV. The L3 link between the devices is through an MPLS provider. We have connectivity across the link, but none of the web GUIs are working (storage array, HP blade chassis management, etc).
My immediate thought is MTU. OTV adds an additional 42 bytes of overhead, so just like with PPPoE, IPsec, or any other encapsulating protocol, MTU must be taken into consideration.
The problem we have is that the MPLS provider is using the default MTU of 1500. When packets are sent from the OTV router toward the MPLS provider at the default MTU of 1500, the receiving router gets the encapsulated packet and says, "NOPE, too big, fragmenting!"
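The arithmetic is straightforward: a full-size 1500-byte packet from the LAN side plus 42 bytes of OTV encapsulation = 1542 bytes, which will not fit through a 1500-byte path without being fragmented.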
The problem is that OTV doesn't allow fragmentation by default! So what can we do? We could increase the MTU of the entire path (jumbo MTU, possibly) so that the upstream links never have to fragment the datagrams. The problem with this is that it isn't really feasible: that's a lot of links and a lot of headache.
Alternatively, we can apply the following statement on the ASR: "otv fragmentation join-interface <join-interface>." The join-interface is the one peering with the upstream router; the interface with multicast configured (not the one with the bridge IDs).
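On the ASR, that's a single global configuration line, something like this (the join interface name is just a placeholder):

! Allow OTV-encapsulated packets to be fragmented toward the provider
otv fragmentation join-interface GigabitEthernet0/0/0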
Immediately after we applied the fragmentation command globally (essentially telling OTV, "HEY dude, allow fragmentation!"), HTTP/HTTPS traffic started working.
Is fragmentation good? Ideally, no, but we have to work around it in some scenarios.