Time Synchronization and NTP
Within the Cisco Application Centric Infrastructure (ACI) fabric, time synchronization is a crucial capability upon which many of the monitoring, operational, and troubleshooting tasks depend. Clock synchronization is important for proper analysis of traffic flows as well as for correlating debug and fault time stamps across multiple fabric nodes.
An offset present on one or more devices can hamper the ability to properly diagnose and resolve many common operational issues. In addition, clock synchronization allows for full utilization of the atomic counter capability that is built into the ACI fabric, upon which the application health scores depend. Nonexistent or improper configuration of time synchronization does not necessarily trigger a fault or a low health score. You should configure time synchronization before deploying a full fabric or applications so as to enable proper usage of these features. The most widely adopted method for synchronizing a device clock is to use Network Time Protocol (NTP).
Prior to configuring NTP, consider what management IP address scheme is in place within the ACI fabric. There are two options for configuring management of all ACI nodes and Application Policy Infrastructure Controllers (APICs): in-band management and/or out-of-band management. The NTP configuration varies depending upon which management option is chosen for the fabric. Another consideration in deploying time synchronization is where the time source is located. The reliability of the source must be carefully considered when determining whether to use a private internal clock or an external public clock.
In-Band Management NTP
Note |
See the Adding Management Access section in this guide for information about in-band management access. |
-
In-Band Management NTP—When an ACI fabric is deployed with in-band management, consider the reachability of the NTP server from within the ACI in-band management network. In-band IP addressing used within the ACI fabric is not reachable from anywhere outside the fabric. To leverage an NTP server external to the fabric with in-band management, construct a policy to enable this communication.
NTP over IPv6
NTP over IPv6 addresses is supported in hostnames and peer addresses. The gai.conf file can also be set up to prefer the IPv6 address of a provider or a peer over its IPv4 address. You can provide a hostname that resolves to either an IPv4 or an IPv6 address, depending on the installation or preference.
Configuring NTP Using the GUI
Note |
There is a risk of hostname resolution failure for hostname based NTP servers if the DNS server used is configured to be reachable over in-band or out-of-band connectivity. If you use a hostname, ensure that the DNS service policy to connect with the DNS providers is configured. Also ensure that the appropriate DNS label is configured for the in-band or out-of-band VRF instances of the management EPG that you chose when you configured the DNS profile policy. |
Procedure
Step 1 |
On the menu bar, choose . |
Step 2 |
In the Navigation pane, choose . |
Step 3 |
In the Work pane, choose . |
Step 4 |
In the Create Date and Time Policy dialog box, perform the following actions: Repeat the steps for each provider that you want to create. |
Step 5 |
In the Navigation pane, choose . |
Step 6 |
In the Work pane, choose . |
Step 7 |
In the Create Pod Policy Group dialog box, perform the following actions: |
Step 8 |
In the Navigation pane, choose . |
Step 9 |
In the Work pane, double-click the desired pod selector name. |
Step 10 |
In the Properties area, from the Fabric Policy Group drop-down list, choose the pod policy group that you created. Click Submit. |
Configuring NTP Using the NX-OS Style CLI
When an ACI fabric is deployed with out-of-band management, each node of the fabric is managed from outside the ACI fabric. You can configure an out-of-band management NTP server so that each node can individually query the same NTP server as a consistent clock source.
Procedure
Step 1 |
configure Enters configuration mode. Example:
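(The step examples in this procedure reuse the names from the consolidated example at the end of this procedure: the NTP policy pol1, the pod group allPods, and the pod profile all.)
apic1# configure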
|
Step 2 |
template ntp-fabric ntp-fabric-template-name Specifies the NTP template (policy) for the fabric. Example:
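apic1(config)# template ntp-fabric pol1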
|
Step 3 |
[no] server dns-name-or-ipaddress [prefer] [use-vrf {inb-default | oob-default}] [key key-value] Configures an NTP server for the active NTP policy. To make this server the preferred server for the active NTP policy, include the prefer keyword. If NTP authentication is enabled, specify a reference key ID with the key keyword. To specify the in-band or out-of-band default management access EPG, include the use-vrf keyword with the inb-default or oob-default keyword. Example:
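apic1(config-template-ntp-fabric)# server 192.0.20.123 prefer use-vrf oob-default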
|
Step 4 |
[no] authenticate Enables (or disables) NTP authentication. Example:
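apic1(config-template-ntp-fabric)# no authenticate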
|
Step 5 |
[no] authentication-key key-value Configures an NTP authentication key. The key ID range is 1 to 65535. Example:
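apic1(config-template-ntp-fabric)# authentication-key 12345 md5 abcdef1235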
|
Step 6 |
[no] trusted-key key-value Configures a trusted NTP authentication key. The key ID range is 1 to 65535. Example:
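apic1(config-template-ntp-fabric)# trusted-key 12345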
|
Step 7 |
exit Returns to global configuration mode. Example:
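apic1(config-template-ntp-fabric)# exit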
|
Step 8 |
template pod-group pod-group-template-name Configures a pod-group template (policy). Example:
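apic1(config)# template pod-group allPods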
|
Step 9 |
inherit ntp-fabric ntp-fabric-template-name Configures the NTP fabric pod-group to use the previously configured NTP fabric template (policy). Example:
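apic1(config-pod-group)# inherit ntp-fabric pol1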
|
Step 10 |
exit Returns to global configuration mode. Example:
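apic1(config-pod-group)# exit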
|
Step 11 |
pod-profile pod-profile-name Configures a pod profile. Example:
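apic1(config)# pod-profile all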
|
Step 12 |
pods {pod-range-1-255 | all} Configures a set of pods. Example:
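apic1(config-pod-profile)# pods all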
|
Step 13 |
inherit pod-group pod-group-name Associates the pod-profile with the previously configured pod group. Example:
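apic1(config-pod-profile-pods)# inherit pod-group allPods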
|
Step 14 |
end Returns to EXEC mode. Example:
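apic1(config-pod-profile-pods)# end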
|
Examples
This example shows how to configure a preferred out-of-band NTP server and how to verify the configuration and deployment.
apic1# configure t
apic1(config)# template ntp-fabric pol1
apic1(config-template-ntp-fabric)# server 192.0.20.123 use-vrf oob-default
apic1(config-template-ntp-fabric)# no authenticate
apic1(config-template-ntp-fabric)# authentication-key 12345 md5 abcdef1235
apic1(config-template-ntp-fabric)# trusted-key 12345
apic1(config-template-ntp-fabric)# exit
apic1(config)# template pod-group allPods
apic1(config-pod-group)# inherit ntp-fabric pol1
apic1(config-pod-group)# exit
apic1(config)# pod-profile all
apic1(config-pod-profile)# pods all
apic1(config-pod-profile-pods)# inherit pod-group allPods
apic1(config-pod-profile-pods)# end
apic1#
apic1# show ntpq
nodeid remote refid st t when poll reach delay offset jitter
------ - ------------ ------ ---- -- ----- ----- ----- ------ ------ ------
1 * 192.0.20.123 .GPS. u 27 64 377 76.427 0.087 0.067
2 * 192.0.20.123 .GPS. u 3 64 377 75.932 0.001 0.021
3 * 192.0.20.123 .GPS. u 3 64 377 75.932 0.001 0.021
Configuring NTP Using the REST API
Note |
There is a risk of hostname resolution failure for hostname based NTP servers if the DNS server used is configured to be reachable over in-band or out-of-band connectivity. If you use a hostname, ensure that the DNS service policy to connect with the DNS providers is configured. Also ensure that the appropriate DNS label is configured for the in-band or out-of-band VRF instances of the management EPG that you chose when you configured the DNS profile policy. |
Procedure
Step 1 |
Configure NTP. Example:
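The following is a sketch of the kind of payload involved, using the datetimePol and datetimeNtpProv classes; the policy name CiscoNTPPol, the provider address 10.10.10.11, and the in-band management EPG reference are illustrative values, not fixed names.
POST https://<apic-ip>/api/node/mo/uni/fabric/time-CiscoNTPPol.xml

<datetimePol adminSt="enabled" authSt="disabled" name="CiscoNTPPol">
  <datetimeNtpProv keyId="0" maxPoll="6" minPoll="4" name="10.10.10.11" preferred="yes">
    <datetimeRsNtpProvToEpg tDn="uni/tn-mgmt/mgmtp-default/inb-default"/>
  </datetimeNtpProv>
</datetimePol>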
|
Step 2 |
Add the default Date Time Policy to the pod policy group. Example:
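For instance, assuming a pod policy group named calo1 (an illustrative name), the fabricRsTimePol relation attaches the default Date Time Policy to the group:
POST https://<apic-ip>/api/node/mo/uni/fabric/funcprof/podpgrp-calo1.xml

<fabricRsTimePol tnDatetimePolName="default"/>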
|
Step 3 |
Add the pod policy group to the default pod profile. Example:
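For instance, assuming the default pod profile with an ALL-type pod selector named default (the selector DN format shown is an assumption; verify it against your fabric), the fabricRsPodPGrp relation attaches the pod policy group:
POST https://<apic-ip>/api/node/mo/uni/fabric/podprof-default/pods-default-typ-ALL.xml

<fabricRsPodPGrp tDn="uni/fabric/funcprof/podpgrp-calo1"/>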
|
Verifying NTP Operation Using the GUI
Procedure
Step 1 |
On the menu bar, choose . |
Step 2 |
In the Navigation pane, choose . The ntp_policy is the previously created policy. An IPv6 address is supported in the Host Name/IP address field. If you enter a hostname and it has an IPv6 address set, you must ensure that the IPv6 address takes priority over the IPv4 address. |
Step 3 |
In the Work pane, verify the details of the server. |
Verifying NTP Policy Deployed to Each Node Using the NX-OS Style CLI
Procedure
Step 1 |
Log onto an APIC controller in the fabric using the SSH protocol. |
Step 2 |
Attach to a node and check the NTP peer status, shown as follows:
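For example, using the attach command from the APIC and the show ntp peer-status command on the switch (the node name leaf101 is illustrative):
apic1# attach leaf101
leaf101# show ntp peer-status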
|
Step 3 |
Repeat step 2 for different nodes in the fabric. |
NTP Server
The NTP server enables client switches to also act as NTP servers to provide NTP time information to downstream clients. When the NTP server is enabled, the NTP daemon on the switch responds with time information to all unicast (IPv4/IPv6) requests from NTP clients. The NTP server implementation is compliant with the NTPv3 RFC. As per the NTP RFC, the server does not maintain any state related to the clients.
-
The NTP server enables IP addresses in all tenant VRFs and the in-band/out-of-band management VRFs to serve NTP clients.
-
The NTP server responds to incoming NTP requests on both the management VRFs and tenant VRFs, and it responds using the same VRF on which the request arrived.
-
The NTP server supports both IPv4 and IPv6.
-
Switches can sync as an IPv4 client and serve as an IPv6 server, and vice versa.
-
Switches can sync as an NTP client using either the out-of-band management or in-band management VRF and serve NTP clients from either management VRF or tenant VRF.
-
No additional contracts or IP table configurations are required.
-
If the switch is synced to an upstream server, the server sends time information with a stratum number incremented from its system peer's stratum.
-
If the switch clock is undisciplined (not synced to an upstream server), the server sends time information with stratum 16, and clients are not able to sync to this server.
By default, NTP server functionality is disabled. It must be enabled explicitly through a configuration policy.
Note |
Clients can use the in-band or out-of-band IP address of the leaf switch as the NTP server IP address. Clients can also use the bridge domain SVI of the EPG of which they are part, and clients outside of the fabric can use any L3Out IP address, as the NTP server IP address. Fabric switches should not sync to other switches of the same fabric. The fabric switches should always sync to external NTP servers. |
Enabling the NTP Server Using the GUI
This section explains how to enable an NTP server when configuring NTP in the APIC GUI.
Procedure
Step 1 |
On the menu bar, choose . |
Step 2 |
In the Navigation pane, choose . The Date and Time option appears in the Navigation pane. |
Step 3 |
From the Navigation pane, right-click on Date and Time and choose Create Date and Time Policy. The Create Date and Time Policy dialog appears in the Work pane. |
Step 4 |
In the Create Date and Time Policy dialog box, perform the following actions: Repeat the steps for each provider that you want to create. |
Step 5 |
In the Navigation pane, choose Pod Policies then right-click on Policy Groups. The Create Pod Policy Group dialog appears. |
Step 6 |
In the Work pane, choose . |
Step 7 |
In the Create Pod Policy Group dialog box, perform the following actions: |
Step 8 |
In the Navigation pane, choose . |
Step 9 |
In the Work pane, double-click the desired pod selector name. |
Step 10 |
In the Properties area, from the Fabric Policy Group drop-down list, choose the pod policy group that you created. |
Step 11 |
Click Submit. |
Enabling the NTP Server Using the CLI
This section explains how to enable the NTP server feature using CLI commands.
Before you begin
Procedure
Step 1 |
Enter the global configuration mode. Example:
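apic1# configure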
|
Step 2 |
Configure an NTP server for the active NTP policy. Example:
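For example, assuming the default fabric NTP policy template (the template name default is an assumption; use the name of your active NTP policy):
apic1(config)# template ntp-fabric default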
|
Step 3 |
Specify the NTP server. Example:
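For example, reusing the server address from the earlier out-of-band example:
apic1(config-template-ntp-fabric)# server 192.0.20.123 prefer use-vrf oob-default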
|
Step 4 |
Enable the switches to act as NTP servers. Example:
|
Step 5 |
Enable the switches to act in NTP master mode with a stratum value of 10. Example:
|
Step 6 |
Return to global configuration mode. Example:
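apic1(config-template-ntp-fabric)# exit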
|
Enabling the NTP Server Using the REST API
This example demonstrates how to configure the NTP server using the REST API.
Procedure
Enable the NTP server. Example:
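A minimal sketch, assuming the datetimePol class exposes serverState, masterMode, and StratumValue attributes for this feature; verify these attribute names against your APIC version's object model:
POST https://<apic-ip>/api/node/mo/uni/fabric/time-default.xml

<datetimePol name="default" adminSt="enabled" serverState="enabled" masterMode="enabled" StratumValue="10"/>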
|