In some situations, you might encounter a subnet conflict issue with your Cisco Cloud Network Controller. This issue might occur when the following conditions are met:
- Your Cisco Cloud Network Controller is running on release 25.0(2).
- The infra subnet for your Cisco Cloud Network Controller is configured within the 172.17.0.0/16 CIDR (for example, if you entered 172.17.10.0/24 in the Infra Subnet field as part of the procedures in Deploying the Cisco Cloud Network Controller in Azure).
- Something else is configured that overlaps with the 172.17.0.0/16 CIDR that you are using for the infra subnet for your Cisco Cloud Network Controller (for example, if the Docker bridge IP subnet is configured with 172.17.0.0/16, which is the default subnet in Cisco Cloud Network Controller).
In this situation, your Cisco Cloud Network Controller might not be able to reach the CCR private IP address because of the subnet conflict, and the Cisco Cloud Network Controller will raise an SSH connectivity fault for the affected CCR. You can determine whether there is a possible conflict by logging in to the Cisco Cloud Network Controller as root and entering the route -n command:
[root@ACI-Cloud-Fabric-1 ~]# route -n
Assume that you see output similar to the following:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 172.17.0.17 0.0.0.0 UG 16 0 0 oobmgmt
169.254.169.0 0.0.0.0 255.255.255.0 U 0 0 0 bond0
169.254.254.0 0.0.0.0 255.255.255.0 U 0 0 0 lxcbr0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
172.17.0.12 0.0.0.0 255.255.255.252 U 0 0 0 bond0
172.17.0.16 0.0.0.0 255.255.255.240 U 0 0 0 oobmgmt
In this example output, the docker0 entry shows that a Docker bridge is configured with 172.17.0.0/16. Because this overlaps with the 172.17.0.0/16 CIDR that you used for the infra subnet for your Cisco Cloud Network Controller, you might lose connectivity to the CCR: you are not able to SSH into the CCR, and you see a Destination Host Unreachable message when you try to ping the CCR (as in the following example, where 172.17.0.84 is the private IP address of the CCR):
[root@ACI-Cloud-Fabric-1 ~]# ping 172.17.0.84
PING 172.17.0.84 (172.17.0.84) 56(84) bytes of data.
From 172.17.0.1 icmp_seq=1 Destination Host Unreachable
From 172.17.0.1 icmp_seq=2 Destination Host Unreachable
From 172.17.0.1 icmp_seq=3 Destination Host Unreachable
From 172.17.0.1 icmp_seq=5 Destination Host Unreachable
From 172.17.0.1 icmp_seq=6 Destination Host Unreachable
^C
--- 172.17.0.84 ping statistics ---
9 packets transmitted, 0 received, +5 errors, 100% packet loss, time 8225ms
pipe 4
[root@ACI-Cloud-Fabric-1 ~]#
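If you want to check for this kind of overlap programmatically before it causes a connectivity fault, the standard Python ipaddress module can compare the infra subnet against any other configured subnet. This is an illustrative sketch, not a Cisco tool; the subnet values are taken from the example scenario above:

```python
import ipaddress

# Infra subnet from the example and the default Docker bridge subnet
infra_subnet = ipaddress.ip_network("172.17.10.0/24")
docker_bridge = ipaddress.ip_network("172.17.0.0/16")

# overlaps() returns True when the two networks share any addresses
if infra_subnet.overlaps(docker_bridge):
    print(f"Conflict: {infra_subnet} overlaps {docker_bridge}")
else:
    print(f"No conflict between {infra_subnet} and {docker_bridge}")
```

Running this with the example values prints the conflict message, because 172.17.10.0/24 falls entirely inside 172.17.0.0/16.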
To resolve the conflict in this situation, send a REST API POST similar to the following to change the IP address of the conflicting configuration:
https://{{apic}}/api/plgnhandler/mo/.xml
<apPluginPolContr>
<apContainerPol containerBip="<new-IP-address>" />
</apPluginPolContr>
For example, to move the Docker bridge IP address out of the 172.17.0.0/16 CIDR, as shown in the example scenario above, you might send a REST API POST such as the following:
https://{{apic}}/api/plgnhandler/mo/.xml
<apPluginPolContr>
<apContainerPol containerBip="172.19.0.1/16" />
</apPluginPolContr>
where 172.19.0.1/16 is the new IP address and subnet for the Docker bridge. This moves the Docker bridge IP address under the 172.19.0.0/16 CIDR, which no longer conflicts with the infra subnet for your Cisco Cloud Network Controller that is configured within the 172.17.0.0/16 CIDR.
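As an illustration only, the POST above could be scripted as follows. The controller hostname and the authentication cookie are hypothetical placeholders, not values from this document; the actual authentication method depends on your deployment:

```python
import urllib.request

new_bip = "172.19.0.1/16"  # new Docker bridge IP address and subnet
payload = (
    "<apPluginPolContr>"
    f'<apContainerPol containerBip="{new_bip}" />'
    "</apPluginPolContr>"
)

# "cloud-network-controller.example.com" is a placeholder hostname;
# substitute your controller address and a valid auth cookie.
url = "https://cloud-network-controller.example.com/api/plgnhandler/mo/.xml"
req = urllib.request.Request(
    url,
    data=payload.encode("utf-8"),
    headers={"Content-Type": "application/xml",
             "Cookie": "APIC-cookie=<token>"},  # placeholder auth cookie
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to send the request
```

The request itself is left commented out, since it requires a reachable controller and valid credentials.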
You can use the same commands as before to verify that there is no longer a conflict:
[root@ACI-Cloud-Fabric-1 ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 172.17.0.17 0.0.0.0 UG 16 0 0 oobmgmt
169.254.169.0 0.0.0.0 255.255.255.0 U 0 0 0 bond0
169.254.254.0 0.0.0.0 255.255.255.0 U 0 0 0 lxcbr0
172.17.0.12 0.0.0.0 255.255.255.252 U 0 0 0 bond0
172.17.0.16 0.0.0.0 255.255.255.240 U 0 0 0 oobmgmt
172.19.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
In this example output, the docker0 entry shows that the Docker bridge is now configured in the 172.19.0.0/16 subnet. Because there is no longer an overlap with the 172.17.0.0/16 CIDR that you are using for the infra subnet for your Cisco Cloud Network Controller, there is no issue with connectivity to the CCR:
[root@ACI-Cloud-Fabric-1 ~]# ping 172.17.0.84
PING 172.17.0.84 (172.17.0.84) 56(84) bytes of data.
64 bytes from 172.17.0.84: icmp_seq=1 ttl=255 time=1.15 ms
64 bytes from 172.17.0.84: icmp_seq=2 ttl=255 time=1.01 ms
64 bytes from 172.17.0.84: icmp_seq=3 ttl=255 time=1.03 ms
64 bytes from 172.17.0.84: icmp_seq=4 ttl=255 time=1.03 ms
64 bytes from 172.17.0.84: icmp_seq=5 ttl=255 time=1.09 ms
64 bytes from 172.17.0.84: icmp_seq=6 ttl=255 time=1.06 ms
64 bytes from 172.17.0.84: icmp_seq=7 ttl=255 time=1.03 ms
64 bytes from 172.17.0.84: icmp_seq=8 ttl=255 time=1.05 ms
^C
--- 172.17.0.84 ping statistics ---
8 packets transmitted, 8 received, 0% packet loss, time 7005ms
rtt min/avg/max/mdev = 1.014/1.061/1.153/0.046 ms
[root@ACI-Cloud-Fabric-1 ~]#