Just got a new pair of NCS5501-SEs, and per my article here, they have been upgraded to what is currently IOS XR 6.2.25.
Next, if you’re coming from Arista, you will probably come out of this less than happy. Cisco’s multi-chassis LAG option on this platform is their “MC-LAG”, which is NOT an active/active technology: the dynamic LAG puts the links going to the secondary chassis in an inactive state until the primary fails, so under normal operating conditions you’re throwing half your bandwidth away. That stings even more on a platform like the NCS, where you have to pay for licenses to activate your ports just to have them sit idle. Arista’s MLAG, by contrast, is super easy to set up and is active/active through both chassis. The NCS does not support pseudo mLACP for active/active, nor does it support vPC.
Moving on (can you tell I’m not happy?), let’s set this up regardless. If you’re coming from classic IOS, or from an IOS XR platform that doesn’t use the MEF architecture, things may seem a little weird at first. It’s easy to get the hang of, though, so don’t worry. First, pick the ports you want bundled (a port channel, in old IOS terms) and pick a bundle ID for them (think channel-group number from IOS). Let’s throw that on first:
interface TenGigE0/0/0/2
 bundle id 1 mode active
!
interface TenGigE0/0/0/3
 bundle id 1 mode active
!
interface TenGigE0/0/0/4
 bundle id 1 mode active
!
interface TenGigE0/0/0/5
 bundle id 1 mode active
!
Next up, we’ll create the bundle interface, and in my case, 9000 byte MTU because it’s going to be carrying some jumbo frames:
interface Bundle-Ether1
 mtu 9000
 lacp switchover suppress-flaps 300
Assuming the config is the same on the remote end, your bundle interface should now be in an up state.
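If you want to double-check the member states rather than trust the interface line protocol, IOS XR has dedicated show commands for this (standard commands, though the exact output format varies by release):

```
show bundle Bundle-Ether1
show lacp Bundle-Ether1
```

The first gives the per-member state within the bundle; the second shows the LACP partner details, which is handy when the remote end’s config doesn’t quite match.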
Next task: I want this bundle to carry traffic for a bridge domain (think VLAN from the IOS days) numbered 123, so I should expect to receive packets over my new bundle interface with 802.1q tags indicating 123. To accomplish this, I’m going to add a layer 2 transport sub-interface to the bundle interface; this is referred to as an Ethernet Flow Point (EFP) and will remind you of a VLAN sub-interface from IOS:
interface Bundle-Ether1.123 l2transport
 encapsulation dot1q 123
 rewrite ingress tag pop 1 symmetric
The rewrite statement above removes and re-adds the dot1q tag as packets come and go through this virtual interface. When packets are received, the tag is popped and the packet exists in the bridge domain we define, exiting via the appropriate member of the bridge domain, where the tag may be pushed back on by a similar statement, or perhaps handled by a layer 3 interface if that’s where it was destined.
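For example, the matching EFP on my second bundle (which shows up in the bridge domain config below) looks identical; the symmetric keyword is what makes the same rewrite push tag 123 back on as frames leave:

```
interface Bundle-Ether2.123 l2transport
 encapsulation dot1q 123
 rewrite ingress tag pop 1 symmetric
```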
Okay, so the next step is the bridge domain and bridge group. A group is a container for multiple bridge domains (think VLANs) that share similar config. In our case, I’m going to create a bridge domain to hold what I’m treating as VLAN 123:
l2vpn
 bridge group INTERNAL
  bridge-domain vlan123
   interface Bundle-Ether1.123
   !
   interface Bundle-Ether2.123
   !
   interface TenGigE0/0/0/21
   !
   routed interface BVI123
   !
  !
 !
Okay, so in the above you’ll see my new bridge domain “vlan123”, which is just the descriptive name I gave it. I then added three interfaces to it: the new EFP defined on Bundle-Ether1 (Bundle-Ether1.123), a second one on another bundle I have, and finally a physical interface operating in layer 2 mode, connected to some other device sending in data intended for my bridge domain vlan123. Additionally, I’ve defined a layer 3 virtual interface called a BVI (bridge virtual interface) with ID 123; this will appear in the bridge domain so it can interact with the other devices present. We’ll define that next:
interface BVI123
 ipv4 address 192.0.2.100 255.255.255.0
There you go: you now have a layer 3 interface with IP 192.0.2.100 talking on bridge domain vlan123. The other ports in vlan123 act like traditional switch ports, and the BVI can talk to the devices behind them.
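At this point you can confirm the domain is up and see its attachment circuits and the routed BVI port with the standard l2vpn show command (the bd-name filter keys on the descriptive name we picked):

```
show l2vpn bridge-domain bd-name vlan123
show ipv4 interface brief
```

The second command is just to verify BVI123 came up with its address; a ping from a device on the TenGigE0/0/0/21 side to 192.0.2.100 is the real smoke test.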
There is a MASSIVE caveat with this setup. The NCS platform can’t talk VRRP on BVI interfaces (or HSRP/GLBP platform-wide), so if you were thinking you’d do the above and set up a redundant first hop for your downstream devices, well, you can’t.
Anyway, you may have noticed I had a second BE (Bundle-Ether) interface in the bridge domain above. That happens to be an MC-LAG, so let’s see how it’s set up. I have two NCS5501s terminating one DHD (dual-homed device) downstream; from its perspective, it just has a port channel to one remote device, never realizing it’s really two chassis for redundancy. Of course, just like the VRRP-on-BVI caveat above, we have yet another Cisco caveat: MC-LAG puts half your links in standby, so the DHD on the other end can only use half the total bandwidth at any time. In contrast, Arista’s MLAG, which looks like the same exact thing from the downstream DHD, is active/active across both chassis, so the DHD gets the total bandwidth when both upstream attachment points are online. Heck, I have Brocade devices that have been doing active/active multi-chassis trunks for 6+ years now; how could this be a feature that got left out of a 2017 device?!
On the physical port side, this looks exactly like creating the bundle interface above: you simply add a ‘bundle id 2 mode active’ to each physical interface on each chassis. The bundle ID numbers must match across chassis, so plan ahead.
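Using the two member ports that show up in the output further down, that means this on both chassis (your port numbers will obviously vary; only the bundle ID has to match):

```
interface TenGigE0/0/0/36
 bundle id 2 mode active
!
interface TenGigE0/0/0/37
 bundle id 2 mode active
!
```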
Next up, we need both LDP and ICCP set up. LDP first:
mpls ldp
 log
  neighbor
 !
 router-id 192.0.2.1
 interface Bundle-Ether1.124
 !
!
You’ll notice in the above that I have defined a router ID and an interface to use for the LDP communications. It is IMPERATIVE that you ensure you have sufficient redundancy between your two nodes to keep this channel up, which is why I used a port channel (oops, I mean bundle interface). If LDP goes down, so do your MC-LAGs, so one screwed-up bundle can take all your bundles down on both sides, isolating your DHDs.
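A quick sanity check that the LDP session between the two chassis actually came up (both are standard IOS XR show commands):

```
show mpls ldp discovery
show mpls ldp neighbor
```

If discovery shows hellos but no neighbor forms, check that the interface you listed is routable between the two nodes and that the router IDs are reachable.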
Next up: the redundancy / ICCP config. It just wants a neighbor IP, which can be the same one you’re using for your LDP neighbor; the difference is that the LDP config uses your own router ID, while this config specifies the neighbor’s IP address to talk to. You also specify a virtual MAC and priority; read up on those and set them to your environment’s needs:
redundancy
 iccp
  group 100
   mlacp node 1
   mlacp system mac 0001.0001.0001
   mlacp system priority 1
   member
    neighbor 192.0.2.2
   !
  !
 !
!
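Once both sides are configured, I’d confirm the ICCP session before touching the bundle. As I recall the command is simply the following, so double-check against your release’s command reference:

```
show iccp group 100
```

It should show the member neighbor in a connected state; if not, go back and verify LDP first, since ICCP rides on it.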
Okay, your ports are ready and LDP & ICCP are ready, so let’s move on to creating Bundle-Ether2:
interface Bundle-Ether2
 lacp switchover suppress-flaps 300
 mlacp iccp-group 100
 mlacp port-priority 10
 bundle wait-while 100
!
Your multi-chassis LAG should be online now. A “show int be2” will show the local details, with both ports online; it makes no mention of the other half on the other chassis. Run the same show on the other chassis, however, and you’ll see the bundle and line protocol as up but both member links in Standby state:
RP/0/RP0/CPU0:rtr1#show int be2
Mon Dec 18 17:12:11.208 UTC
Bundle-Ether2 is up, line protocol is up
  Interface state transitions: 1
  Hardware is Aggregated Ethernet interface(s), address is 0001.0001.60db
  Internet address is Unknown
  MTU 1514 bytes, BW 2000000 Kbit (Max: 2000000 Kbit)
     reliability 255/255, txload 0/255, rxload 0/255
  Encapsulation ARPA, Full-duplex, 20000Mb/s
  loopback not set, Last link flapped 1d16h
  No. of members in this bundle: 2
    TenGigE0/0/0/36      Full-duplex  10000Mb/s  Active
    TenGigE0/0/0/37      Full-duplex  10000Mb/s  Active

RP/0/RP0/CPU0:rtr2#show int be2
Mon Dec 18 17:14:16.741 UTC
Bundle-Ether2 is up, line protocol is up
  Interface state transitions: 1
  Hardware is Aggregated Ethernet interface(s), address is 0001.0001.60db
  Internet address is Unknown
  MTU 1514 bytes, BW 0 Kbit
     reliability 255/255, txload Unknown, rxload Unknown
  Encapsulation ARPA, Full-duplex, 0Kb/s
  loopback not set, Last link flapped 1d16h
  No. of members in this bundle: 2
    TenGigE0/0/0/36      Full-duplex  10000Mb/s  Standby
    TenGigE0/0/0/37      Full-duplex  10000Mb/s  Standby
If you had an Arista device, it would look like this instead. :-)
rtr#show int po16
Port-Channel16 is up, line protocol is up (connected)
  Hardware is Port-Channel, address is 0001.0001.0ffe
  Ethernet MTU 9214 bytes, BW 40000000 kbit
  Full-duplex, 40Gb/s
  Active members in this channel: 4
  ... Ethernet1, Full-duplex, 10Gb/s
  ... Ethernet31, Full-duplex, 10Gb/s
  ... PeerEthernet1, Full-duplex, 10Gb/s
  ... PeerEthernet31, Full-duplex, 10Gb/s
Have you looked at EVPN with IRB?
Very interesting; will dig into it this week. Thanks!