without encryption, which runs over any IP network (for example, the Internet)
• Used for scaling IPsec VPNs
• Instead of a separate tunnel for each router pair, there is one tunnel interface on each router
• All VPN members are in the same subnet
• When adding a new device to the VPN, most of the configuration can be copied from another device
• Uses Multipoint GRE (mGRE), Next Hop Resolution Protocol (NHRP) and IPsec
overlay (logical) and underlay (NBMA)
• Dynamic spoke discovery
• Multicast replication on the hub
• Dynamic spoke-to-spoke tunnels
• NAT-Traversal (NAT-T) support
• Front Door VRF support
• IPv6 over IPv4 underlay support
block of DMVPN
• Removes the need to specify a tunnel destination
• Relies on Next Hop Resolution Protocol (NHRP) with static or dynamic mappings
• A tunnel key is required (and must match on both sides) to identify the tunnel interface if several tunnels with the same tunnel source are used
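For illustration, a minimal mGRE tunnel could look like this (interface name and addresses are placeholders, not from the original):
!
interface Tunnel0
 ip address 10.0.0.1 255.255.255.0
 tunnel source GigabitEthernet0/0
 ! multipoint GRE: no tunnel destination, NHRP supplies NBMA addresses
 tunnel mode gre multipoint
 tunnel key 50
!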
DMVPN magic
• Encapsulated into GRE, protocol number 0x2001
• Client-server protocol (the hub is the Next Hop Server, NHS; spokes are Next Hop Clients, NHC)
• The NHS maintains an NHRP database with NHRP mappings (logical IP to NBMA)
• Each NHC sends an NHRP Registration Request to every NHS
• The NHS checks the uniqueness of the claimed logical IP and replies with an NHRP Registration Reply
number, used to distinguish different NHRP clouds
• NHRP cleartext authentication may be used to prevent configuration mistakes
• NHRP hold-time controls how long an NHRP mapping stays valid; the NHC re-sends a Registration Request every one third of this value. The default hold-time is 7200 seconds
• An NHC should have a static mapping for the NHS and for multicast (if needed)
• A static mapping can be configured not only for the hub but also for other spokes; that is why the NHS must be specified explicitly in addition to the mapping
• The NHS needs to know where to replicate multicast traffic (if needed)
NHS
• Implicit – mapping was learned implicitly from an NHRP Resolution Request or Reply
• Negative – the requested NBMA mapping could not be obtained. Used to prevent triggering more NHRP Resolution Requests while one is already in progress
• Unique – the NHRP mapping cannot be overwritten by a mapping with the same logical IP but a different NBMA address
• Registered – created from receiving an NHRP Registration Request
• Router – the mapping refers to the remote router itself; networks accessed through it carry this flag
• Local – list of peers to which a Resolution Request was sent
• Nat – the remote node supports the NHRP NAT extension for dynamic spoke-to-spoke tunnels behind a NAT router
! the first IP is logical, the second is NBMA
ip nhrp map 10.0.0.5 209.165.200.241
! on the NHC, specify the NHS
ip nhrp nhs 10.0.0.5
! on the NHC, specify where to send multicast
ip nhrp map multicast 209.165.200.241
! on the NHS, specify where to replicate multicast
ip nhrp map multicast dynamic
ip nhrp network-id 50
ip nhrp holdtime 600
ip nhrp authentication cisco
!
and data plane
• NHRP triggers IPsec before installing new mappings
• IPsec notifies NHRP when encryption is ready
• NHRP installs the mappings and sends a registration if needed
• GRE over IPsec is used
• Transport mode should be used in most cases
• Remember to add a PSK for the spokes if spoke-to-spoke tunnels are going to be built
• If there are several mGRE interfaces with the same tunnel source, the same IPsec profile must be used with the keyword shared
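A hedged sketch of the IPsec side (policy number, key and profile names are examples, not from the original):
!
crypto isakmp policy 10
 encryption aes
 authentication pre-share
 group 14
! wildcard PSK so any spoke can authenticate any other spoke directly
crypto isakmp key MySecretKey address 0.0.0.0 0.0.0.0
!
crypto ipsec transform-set TS esp-aes esp-sha-hmac
 mode transport
!
crypto ipsec profile DMVPN-PROF
 set transform-set TS
!
interface Tunnel0
 ! add the shared keyword only when several mGRE tunnels use the same source
 tunnel protection ipsec profile DMVPN-PROF shared
!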
some NAT combinations
• All DMVPN spokes must have unique IP addresses after they have been NAT-translated
• Spokes can use Port Address Translation (PAT); the hub is allowed to use only static NAT
• UDP port 4500 (NAT-T encapsulation) must be open along the path
• In show ip nhrp, the claimed NBMA address is the IP address before NAT, while the NBMA address is the IP address after NAT
• Enabled by default; to disable, use: (config)# no crypto ipsec nat-transparency udp-encapsulation
Spoke-to-Spoke tunnels
• All traffic between the spokes always goes through the hub, regardless of whether the next hop is the hub or a spoke
• Summarization on the hub is allowed
• Tunnel mode on the NHC is gre ip instead of gre multipoint
• tunnel destination must be specified as well
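A Phase 1 spoke sketch, reusing the addresses from the NHRP example above (interface name and spoke address are placeholders):
!
interface Tunnel0
 ip address 10.0.0.2 255.255.255.0
 tunnel source GigabitEthernet0/0
 ! point-to-point GRE towards the hub, not gre multipoint
 tunnel mode gre ip
 tunnel destination 209.165.200.241
 ip nhrp network-id 50
 ip nhrp map 10.0.0.5 209.165.200.241
 ip nhrp map multicast 209.165.200.241
 ip nhrp nhs 10.0.0.5
!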
network types can be used for automatic neighbor discovery:
• Point-to-multipoint (most common)
• Broadcast
• If broadcast is used, make sure to exclude spokes from participating in DR/BDR election (set their DR priority to 0)
• Remember that the default OSPF network type on a tunnel interface is point-to-point, which can result in flapping OSPF adjacencies on the hub
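A minimal sketch of both options on the tunnel interface (values are examples):
!
! option 1: point-to-multipoint on hub and spokes
interface Tunnel0
 ip ospf network point-to-multipoint
!
! option 2: broadcast; on the spokes, also remove DR/BDR candidacy
interface Tunnel0
 ip ospf network broadcast
 ip ospf priority 0
!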
iBGP can be used
• eBGP needs an individual AS number per branch (you can use 4-byte AS numbers), or allowas-in must be used on the spokes (careful!)
• For iBGP, each hub should be a Route-Reflector (RR)
• The options above are not needed if summarization on the hub is used
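An iBGP route-reflector hub could be sketched like this (AS number and peer-group name are examples):
!
router bgp 65000
 neighbor SPOKES peer-group
 neighbor SPOKES remote-as 65000
 neighbor SPOKES route-reflector-client
!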
based on the Next Hop received from routing protocols
• Routing protocols must support the third-party next hop feature
• Summarization of the spokes' prefixes on the hub is not allowed: summarization changes the next hop to the hub
• Tunnel mode must be gre multipoint on the spokes
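The same spoke from the Phase 1 sketch, converted to Phase 2 (values remain placeholders):
!
interface Tunnel0
 ip address 10.0.0.2 255.255.255.0
 tunnel source GigabitEthernet0/0
 ! mGRE instead of point-to-point GRE; no tunnel destination
 tunnel mode gre multipoint
 tunnel key 50
 ip nhrp network-id 50
 ip nhrp map 10.0.0.5 209.165.200.241
 ip nhrp map multicast 209.165.200.241
 ip nhrp nhs 10.0.0.5
!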
supposed to go out of the tunnel interface, the spoke looks into its NHRP cache to find the NBMA address for the next hop
• If it is not found, the packet is sent to the hub
• An NHRP Resolution Request is also sent to the hub, which should forward it to the remote spoke
• The remote spoke replies with an NHRP Resolution Reply containing its own NBMA address directly to the source spoke. If IPsec is used, the remote spoke initiates the IPsec tunnel first
• The source spoke adds the mapping to its NHRP cache, and subsequent packets go via the spoke-to-spoke tunnel
split horizon on the hub
• To use third-party next hop, configure the following command on the hub's tunnel interface:
(config-if)# no ip next-hop-self eigrp 100
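Taken together, a Phase 2 EIGRP hub tunnel typically carries both commands (AS number 100 as in the example above):
!
interface Tunnel0
 no ip split-horizon eigrp 100
 no ip next-hop-self eigrp 100
!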
They are triggered by the hub using NHRP, not by routing protocols
• All routes on the spokes now point to the hub
• Summarization is now allowed on the hub
subnet points to the hub
• The packet is sent to the hub
• The hub forwards the packet out of the same tunnel interface towards the remote spoke, recognizes this hairpin, and sends an NHRP Redirect message back to the source spoke
• The NHRP Redirect contains the original packet
• The local spoke now sends a Resolution Request querying the destination IP towards the hub
• The hub forwards it to the remote spoke
to the local spoke with the NBMA address for the whole subnet taken from its RIB
• If NHRP shortcut is enabled, a route for the received subnet is installed in the routing table as NHRP (AD 250); or, if the same route already exists via the hub, a next-hop override (NHO) is installed into the CEF table and is shown with % in the routing table
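The Phase 3 behaviour described above maps to two NHRP knobs, sketched here:
!
! hub tunnel interface: send NHRP Redirects on hairpinned traffic
interface Tunnel0
 ip nhrp redirect
!
! spoke tunnel interface: accept shortcuts and install NHRP/NHO routes
interface Tunnel0
 ip nhrp shortcut
!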
aggregation on the hub for both eBGP and iBGP; otherwise, you need to change the next hop to the hub
• For eBGP, use the following configuration on the hub:
(config-router)# neighbor 155.1.4.4 next-hop-self
• For an iBGP RR, use the following configuration on the hub:
(config-router)# neighbor 155.1.4.4 next-hop-self all
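If you go the aggregation route instead, a hedged sketch on the hub might be (AS number and prefix are examples):
!
router bgp 65000
 aggregate-address 10.1.0.0 255.255.0.0 summary-only
!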
for DMVPN:
• Summarization anywhere
• The next hop can be easily changed
• Filtering can be performed anywhere
• Traffic engineering can be achieved using filtering/offset-lists
• The stub feature limits unnecessary queries to the spokes
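For the last point, a minimal spoke-side sketch (AS 100 as before):
!
router eigrp 100
 ! advertise only connected and summary routes; the hub stops querying this spoke
 eigrp stub connected summary
!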
the following design:
• One area 0 in the whole network: the DMVPN cloud, behind the spokes, and behind the hub
• Area 0 behind the hub; the DMVPN cloud plus the networks behind the spokes in another area
• Area 0 for the DMVPN cloud and the networks behind the spokes, while the networks behind the hub are in another area
• Area 0 only in the DMVPN cloud; all networks behind the spokes and the hub are in their own areas
issues with DMVPN:
• The whole DMVPN cloud should be in one area because it is one subnet: every link flap on a spoke is propagated through the whole area
• You can summarize only the networks behind the hub, and only if it is an ABR
• Routing updates (LSAs) cannot be filtered within an area
• It is hard to enforce traffic engineering in DMVPN + OSPF (one trick is to use the network type point-to-multipoint non-broadcast with an individual cost per neighbor)
• For Phase 2, only the broadcast and non-broadcast network types can be used (which introduces the risk of a misconfigured DR priority on a spoke bringing the whole network down)
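The traffic-engineering trick mentioned above could be sketched as follows (process ID, neighbor addresses and costs are examples):
!
interface Tunnel0
 ip ospf network point-to-multipoint non-broadcast
!
router ospf 1
 ! static neighbors are required for non-broadcast; per-neighbor cost steers traffic
 neighbor 10.0.0.2 cost 10
 neighbor 10.0.0.3 cost 100
!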
choice:
• Increased routing policy granularity
• As a trade-off, increased administrative burden
• iBGP vs eBGP:
• The iBGP RR concept perfectly matches the hub-and-spoke model
• eBGP requires an individual AS number per branch, which may be problematic
• It depends on the requirements
• The BGP dynamic neighbors feature can be used on the hub:
(config-router)# bgp listen range <network/length> peer-group <peer-group-name>
(config-router)# bgp listen limit <max-number>
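A filled-in sketch of dynamic neighbors on the hub (AS number, range and names are examples):
!
router bgp 65000
 bgp listen limit 200
 bgp listen range 10.0.0.0/24 peer-group SPOKES
 neighbor SPOKES peer-group
 neighbor SPOKES remote-as 65000
!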
towards the NBMA address is in a VRF RIB?
• The Front Door VRF DMVPN feature comes to the rescue
• Instead of crypto isakmp key, configure a VRF-aware keyring and let the tunnel interface know in which VRF to perform the lookup for the NBMA address:
crypto keyring <keyring-name> vrf <vrf-name>
 pre-shared-key address <subnet> <mask> key <password>
!
interface tunnel <num>
 tunnel vrf <vrf-name>
!
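A filled-in sketch under assumed names (the INET VRF and addresses are examples, not from the original):
!
vrf definition INET
 address-family ipv4
!
crypto keyring INET-KEYS vrf INET
 pre-shared-key address 0.0.0.0 0.0.0.0 key MySecretKey
!
interface GigabitEthernet0/0
 ! underlay interface lives in the front-door VRF
 vrf forwarding INET
 ip address 209.165.200.242 255.255.255.248
!
interface Tunnel0
 tunnel source GigabitEthernet0/0
 tunnel vrf INET
!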
defined on the hub
• A spoke requests a specific QoS policy using a group string in the Registration Request
• The hub applies the requested QoS policy on a per-spoke basis
• Configuration:
• on the spoke: (config-if)# ip nhrp group <group-name>
• on the hub: (config-if)# ip nhrp map group <group-name> service-policy output <qos-policy>
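A filled-in per-tunnel QoS sketch (group and policy names and the shaping rate are examples):
!
! hub: shaping policy applied per spoke that registers with this group
policy-map SHAPE-10M
 class class-default
  shape average 10000000
!
interface Tunnel0
 ip nhrp map group BRANCH-10M service-policy output SHAPE-10M
!
! spoke: advertises its group in the NHRP Registration Request
interface Tunnel0
 ip nhrp group BRANCH-10M
!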