Cisco 4100 Clustering. Part 3: FMC Configuration

In Part 3 we add the FTD cluster to the Firepower Management Center (FMC). Before adding devices to FMC, make sure the cluster is formed; otherwise FMC cannot distinguish between the Master and Slave units. You can refresh on this in Part 1 and Part 2.

Also, since FTD relies on Cisco Smart Licensing, make sure to enable it on FMC in advance. Some helpful information was discussed here. FTD licenses are a bit tricky: you will never receive a PAK or any other kind of code to register. All of that is done on the back end, and licenses are assigned by Cisco to whoever sold you the solution. The seller needs to perform the necessary steps and transfer the licenses to your Smart account before they show up on your license portal. Once licenses are assigned, you can proceed with adding the FTD sensors under Devices > Add Device.

The Registration Key was assigned in Part 2 during FTD configuration.

Once the devices are individually added, proceed with adding the cluster device under Devices > Device Management > Add > Add Cluster. Assign the Master and Slave devices as noted, then Save and Deploy the configuration.
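Both before and after adding the devices, the FTD CLI command show cluster info is a quick way to confirm cluster state: a healthy two-unit cluster shows one unit in MASTER state and the other in SLAVE. The output below is an illustrative, trimmed sketch; your cluster and unit names will differ:

```
> show cluster info
Cluster ftd-cluster: On
    This is "unit-1-1" in state MASTER
    Other members in the cluster:
        Unit "unit-2-1" in state SLAVE
```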

Before proceeding any further, one other important configuration change is to adjust the MTU setting on the CCL interface. Edit interface Port-channel (PO) 48 and set its MTU 100 bytes higher than the Data PO. In my case the Data PO was 1500 (not using Jumbo frames), so the CCL MTU was set to 1600.

However, you are not done yet. What you will see is that your cluster looks completely healthy and stable, but once you make any minor change and push the config from FMC it falls apart: the Slave disjoins the cluster and goes offline. You may see messages similar to the one below on the FTD console.

firepower# cluster_file_receiver_with_ack_thread: -1 bytes received, but should receive 23321857 bytes
The snort configuration file copy has failed, because disk write is not properly finished.

The only way to bring it back up is to reboot the Slave.

And the cause of this chaos is the MTU setting on the network side. In our setup the Nexus switches have a default port MTU of 1500 and DO NOT fragment packets, so when the Master tries to sync config over the CCL with its MTU set to 1600, communication breaks and the Slave leaves the cluster. The fix is to increase the MTU on the Nexus side of the CCL to at least 1600, so it can carry the cluster's larger frames. However, when you try 1600 you will most likely get an error message similar to the one below.

Nexus(config)# interface port-channel48
Nexus(config-if)# mtu 1600
MTU can only be default or system jumbo MTU

So now you need to find out whether you have jumbo frames enabled and, if not, what your system jumbo MTU is set to. Check this link for more information.

Nexus# sh run all | i mtu
system jumbomtu 9216

In my case the Nexus had default settings, so I had to use the 9216 value on the CCL PO.

 interface port-channel48
  description To 4100 CCL
  switchport mode trunk
  spanning-tree port type edge trunk
  speed 10000
  mtu 9216
  vpc 48
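After the change, it is worth verifying that the new MTU actually took effect on the Nexus port-channel. A quick sanity check (the exact output line varies by NX-OS release):

```
Nexus# show interface port-channel48 | include MTU
  MTU 9216 bytes, BW 20000000 Kbit, DLY 10 usec
```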

Now we are ready to start building the transparent data path. First, we need to create sub-interfaces matching the data Vlans under Device Management > Cluster Device > Interfaces > Add Interfaces. If, for example, Vlan 201 is our data Vlan, you would create a vlan201 sub-interface, where Security Zone designates the ingress IPS zone, Interface is the physical Port-channel (PO) created for data traffic, Sub-Interface ID can be anything, and VLAN ID is the vlan tag 201.

Next, let’s say Vlan 403 is the egress interface in our transparent deployment. We need to create another sub-interface with similar settings, but for Security Zone create/designate a new egress zone.

The last step is to bridge them all together with a BVI. Create a Bridge Group Interface under Device Management > Cluster Device > Interfaces > Add Interfaces. Assign a unique Bridge Group ID (not related to the Vlan tag), and add the newly created sub-interfaces.

Pay attention to the sub-interfaces you add. Do not add main Port-channel by mistake.

Under the IPv4 tab, assign an available IP address from the Vlan 201 IP range. This step is very important: in transparent mode the BVI address is used as the source for traffic the device itself originates within the bridged subnet, so it must belong to that subnet.

Now click OK, Save and Deploy.
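For reference, the configuration deployed to the FTD ends up looking roughly like the ASA-style sketch below. The Port-channel number, nameif values, and the BVI address 192.168.201.10 are assumptions for this example:

```
interface Port-channel10.201
 vlan 201
 nameif inside_v201
 bridge-group 1
!
interface Port-channel10.403
 vlan 403
 nameif outside_v403
 bridge-group 1
!
interface BVI1
 ip address 192.168.201.10 255.255.255.128
```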

Proceed with adding the remaining Vlan pairs if you need to inspect additional Vlans.

On FMC, configure at least a default allow any/any access control policy and assign it to the cluster device.

Once all the settings are in place and your cluster is healthy, you can proceed with cutting traffic over to the FTD cluster. The step to take is migrating the Layer 3 gateway from the original ingress Vlan to the new egress Vlan. In reference to our example above, we need to delete the Vlan 201 SVI and create a Vlan 403 SVI with the same IP address.

Before migration

interface Vlan201
ip address 192.168.201.1 255.255.255.128

After migration

no interface Vlan201
!
interface Vlan403
 ip address 192.168.201.1 255.255.255.128

When migration is complete, do not forget to perform failover testing. I tested shutting down the Data PO and its member interfaces, the CCL PO and its interfaces, and reloading the Master unit. Everything tested successfully. The only time I lost a couple of pings was when I shut down the CCL PO on the Master.
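A simple way to measure loss during each failover test is to run a continuous ping through the cluster toward the migrated gateway (192.168.201.1 in our example), either from a host behind the cluster or from the Nexus itself (on NX-OS; Ctrl-C to stop):

```
Nexus# ping 192.168.201.1 count unlimited
```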

Another important point is to enable FMC alerts and add the FTD cluster to your monitoring solution. Since device management is out of band, checking the health state of the Data and CCL port-channels is critical.

 
