Nexus Dashboard Orchestrator Tenants and Tenant Policies Templates for ACI Fabrics, Release 4.3.x

Last updated: March 28, 2024

Tenants Overview

A tenant is a logical container for application policies that enable an administrator to exercise domain-based access control. A tenant represents a unit of isolation from a policy perspective, but it does not represent a private network. Tenants can represent a customer in a service provider setting, an organization or domain in an enterprise setting, or just a convenient grouping of policies.

To manage tenants, you must have either the Power User or the Site and Tenant Manager read-write role. Three default tenants are pre-configured for you:

  • common - A special tenant with the purpose of providing "common" services to other tenants in ACI fabrics. Global reuse is a core principle in the common tenant. Some examples of common services include shared L3Outs, DNS, DHCP, Active Directory, and shared private networks or bridge domains.

  • dcnm-default-tn - A special tenant with the purpose of providing configuration for Cisco NDFC fabrics.

    When using Nexus Dashboard Orchestrator to manage Cisco NDFC fabrics, you always use the default dcnm-default-tn tenant.

  • infra - The infrastructure tenant that is used for all internal fabric communications, such as tunnels and policy deployment. This includes switch-to-switch and switch-to-APIC communications. The infra tenant is not exposed to the user space (tenants) and it has its own private network space and bridge domains. Fabric discovery, image management, and DHCP for fabric functions are all handled within this tenant.

Note: Nexus Dashboard Orchestrator cannot manage the APIC's mgmt tenant, so importing that tenant from APIC or creating a new tenant named mgmt in NDO is not allowed.


Tenant Policies Templates

Release 4.0(1) adds Tenant Policies templates, which allow you to configure the following tenant-wide policies:

  • Route Map Policies for Multicast

  • Route Map Policies for Route Control

  • Custom QoS Policies

  • DHCP Relay Policies

  • DHCP Option Policies

  • IGMP Interface Policies

  • IGMP Snooping Policies

  • MLD Snooping Policies

For additional information, see Creating Tenant Policy Templates.

Creating New Tenants

Before you begin:

You must have a user with either Power User or Site Manager read/write role to create and manage tenants.

This section describes how to add a new tenant using the Cisco Nexus Dashboard Orchestrator GUI. If you want to import one or more existing tenants from your fabrics, follow the steps that are described in Importing Existing Tenants instead.

  1. Log in to your Cisco Nexus Dashboard and open the Cisco Nexus Dashboard Orchestrator service.

  2. Create a new tenant.

    1. From the left navigation pane, choose Operate > Tenants.

    2. In the top right of the main pane, click Create Tenant.

      The Create Tenant screen opens.

  3. Provide tenant details.

    1. Provide the Display Name and optional Description.

      The tenant’s Display Name is used throughout the Orchestrator’s GUI whenever the tenant is shown. However, due to object naming requirements on the APIC, any invalid characters are removed and the resulting Internal Name is used when pushing the tenant to sites. The Internal Name that will be used when creating the tenant is displayed below the Display Name text box.

      Note: You can change the Display Name of the tenant at any time, but the Internal Name cannot be changed after the tenant is created.


    2. In the Associated Sites section, check all the sites that you want to associate with this tenant.

      Only the selected sites are available for any templates using this tenant.

    3. (Optional) For each selected site, click the Edit button next to its name and choose one or more security domains.

      A restricted security domain allows a fabric administrator to prevent a group of users, such as Tenant A, from viewing or modifying any objects that are created by a group of users in a different security domain, such as Tenant B, when users in both groups have the same assigned privileges. For example, a tenant administrator in Tenant A’s restricted security domain will not be able to see policies, profiles, or users configured in Tenant B’s security domain. Unless Tenant B’s security domain is also restricted, Tenant B can see policies, profiles, or users configured in Tenant A.

      Note: A user will always have read-only visibility to system-created configurations for which the user has proper privileges. A user in a restricted security domain can be given a broad level of privileges within that domain without the concern that the user could inadvertently affect another tenant’s physical environment.


      Security domains are created using the APIC GUI and can be assigned to various APIC policies and user accounts to control their access. For more information, see the Cisco APIC Basic Configuration Guide.

    4. In the Associated Users section, select the Cisco Nexus Dashboard Orchestrator users that are allowed to access the tenant.

      Only the selected users are able to use this tenant when creating templates.

  4. Click Save to finish adding the tenant.
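As a rough illustration of the Internal Name behavior described in step 3, the following Python sketch strips characters from the Display Name that fall outside a conservative allowed set. The allowed set shown here (letters, digits, hyphen, underscore, and period) is an assumption for illustration only; the authoritative rules are the APIC object-naming requirements.

  import re

  # Assumed APIC-safe character set, for illustration only; the authoritative
  # rules are the APIC object-naming requirements, not this regex.
  _DISALLOWED = re.compile(r"[^A-Za-z0-9_.-]")

  def internal_name(display_name: str) -> str:
      """Illustrate how a Display Name could be reduced to an Internal Name."""
      return _DISALLOWED.sub("", display_name)

  print(internal_name("Prod Tenant #1"))  # -> "ProdTenant1"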

Importing Existing Tenants

Before you begin:

You must have a user with either Power User or Site Manager read/write role to create and manage tenants.

This section describes how to import one or more existing tenants. If you want to create a new tenant using Cisco Nexus Dashboard Orchestrator, follow the steps that are described in Creating New Tenants instead.

  1. Log in to your Cisco Nexus Dashboard and open the Cisco Nexus Dashboard Orchestrator service.

  2. In the left navigation menu, click Operate > Sites.

  3. Locate the site from which you want to import the tenants, click the actions (…) menu, and choose Import Tenants.

    You can import tenants from one site at a time.

  4. In the Import Tenants dialog, select one or more tenants to import and click Ok.

    The selected tenants are imported into the Cisco Nexus Dashboard Orchestrator and shown in the Operate > Tenants page.

  5. Repeat these steps to import tenants from any other sites.

Creating Tenant Policy Templates

This section describes how to create one or more tenant policy templates. Tenant policy templates allow you to create and configure the following policies:

  • Route Map Policies for Multicast

  • Route Map Policies for Route Control

  • Custom QoS Policies

  • DHCP Relay Policies

  • DHCP Option Policies

  • IGMP Interface Policies

  • MLD Snooping Policies

  • L3Out Node Routing Policies

  • L3Out Interface Routing Policies

  • BGP Peer Prefix Policies

  • IP SLA Monitoring Policies

  • IP SLA Track Lists

    1. Log in to your Cisco Nexus Dashboard and open the Cisco Nexus Dashboard Orchestrator service.

    2. Create a new Tenant Policy template.

      1. From the left navigation pane, choose Configure > Tenant Templates > Tenant Policies.

      2. On the Tenant Policy Templates page, click Create Tenant Policy Template.

      3. In the Tenant Policies page’s right properties sidebar, provide the Name for the template.

      4. From the Select a Tenant drop-down, choose the tenant with which you want to associate this template.

        All the policies that you create in this template as described in the following steps will be associated with the selected tenant and deployed to it when you push the template to a specific site.

        By default, the new template is empty, so you must add one or more tenant policies as described in the following steps. You don’t have to create every policy available in the template - you can define one or more policies of each type to deploy along with this template. If you don’t want to create a specific policy, simply skip the step that describes it.

    3. Assign the template to one or more sites.

      The process for assigning Tenant Policy templates to sites is identical to how you assign application templates to sites.

      1. In the Template Properties view, click Actions and choose Sites Association.

        The Associate Sites to <template-name> window opens.

      2. In the Associate Sites window, check the check box next to the sites where you want to deploy the template.

        Note that only the on-premises ACI sites support tenant policy templates and will be available for assignment.

      3. Click Ok to save.
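      Before moving on to the individual policies, it can help to picture what steps 2 and 3 have established so far. The Python sketch below is purely illustrative and is not the NDO API schema: a Tenant Policy template carries a name, exactly one associated tenant, the associated on-premises ACI sites, and any subset of the supported policy types (all names below are hypothetical).

        # Illustrative data shape only (not the NDO REST schema).
        tenant_policy_template = {
            "name": "tenant-policies-01",          # hypothetical template name
            "tenant": "customer-a",                # hypothetical tenant
            "sites": ["site-1", "site-2"],         # only on-premises ACI sites qualify
            "policies": {                          # any subset may be left empty
                "routeMapsForMulticast": [],
                "routeMapsForRouteControl": [],
                "customQoS": [],
                "dhcpRelay": [],
                "dhcpOption": [],
                "igmpInterface": [],
                "mldSnooping": [],
            },
        }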

    4. Create a Route Map Policy for Multicast.

      This policy is part of the overarching Layer 3 Multicast use case. You can use the information in this section as a reference, but we recommend following the full set of steps that are described in the Layer 3 Multicast chapter of the Features and Use Cases section of this document.

      1. From the +Create Object dropdown, select Route Map Policy for Multicast.

      2. In the right properties sidebar, provide the Name for the policy.

      3. (Optional) Click Add Description and provide a description for the policy.

      4. Click +Add Route Map for Multicast Entries and provide the route map information.

        For each route map, you must create one or more route map entries. Each entry is a rule that defines an action based on one or more matching criteria based on the following information:

        • Order - Order is used to determine the order in which the rules are evaluated.

        • Group IP, Src IP, and RP IP - You can use the same multicast route map policy UI for two different use cases: to configure a set of filters for multicast traffic, or to restrict a rendezvous point configuration to a specific set of multicast groups. Depending on which use case you’re configuring, you fill in only some of the fields on this screen:

          • For multicast filtering, you can use the Source IP and the Group IP fields to define the filter. You must provide at least one of these fields, but can choose to include both. If one of the fields is left blank, it matches all values.

            The Group IP range must be between 224.0.0.0 and 239.255.255.255 with a netmask between /4 and /32. You must provide the subnet mask.

            The RP IP (Rendezvous Point IP) is not used for multicast filtering route maps, so leave this field blank.

          • For Rendezvous Point configuration, you can use the Group IP field to define the multicast groups for the RP.

            The Group IP range must be between 224.0.0.0 and 239.255.255.255 with a netmask between /4 and /32. You must provide the subnet mask.

            For a Rendezvous Point configuration, the RP IP is configured as part of the RP configuration. If a route-map is used for group filtering it is not necessary to configure an RP IP address in the route-map. In this case, leave the RP IP and Source IP fields empty.

        • Action - Action defines the action to perform, either Permit or Deny the traffic, if a match is found.

      5. Click the check mark icon to save the entry.

      6. Repeat the previous substeps to create any additional route map entries for the same policy.

      7. Click Save to save the policy and return to the template page.

      8. Repeat this step to create any additional Route Map for Multicast policies.
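      As a quick sanity check of the Group IP constraints described in this step (a multicast group in the 224.0.0.0-239.255.255.255 range with a /4-/32 mask, and the RP IP left blank for filtering route maps), here is a minimal Python sketch using the standard ipaddress module; the function and field names are illustrative.

        import ipaddress

        MULTICAST_RANGE = ipaddress.ip_network("224.0.0.0/4")  # 224.0.0.0-239.255.255.255

        def check_filter_entry(group_ip="", source_ip="", rp_ip=""):
            """Sanity-check one multicast-filtering route map entry."""
            if not group_ip and not source_ip:
                raise ValueError("provide at least one of Group IP or Source IP")
            if rp_ip:
                raise ValueError("RP IP is not used for multicast filtering route maps")
            if group_ip:
                # The subnet mask is required, for example 239.10.0.0/16.
                net = ipaddress.ip_network(group_ip, strict=False)
                if not (4 <= net.prefixlen <= 32) or not net.subnet_of(MULTICAST_RANGE):
                    raise ValueError("Group IP must be within 224.0.0.0/4 with a /4-/32 mask")

        check_filter_entry(group_ip="239.10.0.0/16")  # passes silently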

    5. Create a Route Map Policy for Route Control.

      This policy is part of the overarching L3Out and SR-MPLS L3Out use cases. You can use the information in this section as a reference, but we recommend following the full set of steps that are described in the External Connectivity (L3Out) and Multi-Site and SR-MPLS L3Out Handoff chapters of the Features and Use Cases section of this document.

      1. From the +Create Object drop-down, select Route Map Policy for Route Control.

      2. In the right properties sidebar, provide the Name for the policy.

      3. (Optional) Click Add Description and provide a description for the policy.

      4. Click +Add Entry and provide the route map information.

        For each route map, you must create one or more context entries. Each entry is a rule that defines an action based on one or more matching criteria based on the following information:

        • Context Order - Context order is used to determine the order in which contexts are evaluated. The value must be in the 0-9 range.

        • Context Action - Context action defines the action to perform (permit or deny) if a match is found. If the same order value is used for multiple contexts, they are evaluated in the order in which they are defined.

        When the context order and action are defined, choose how you want to match the context:

        • Click +Create Attribute to specify the action that will be taken should the context match.

          You can choose one of the following actions:

          • Set Community

          • Set Route Tag

          • Set Dampening

          • Set Weight

          • Set Next Hop

          • Set Preference

          • Set Metric

          • Set Metric Type

          • Set AS Path

          • Set Additional Community

          After you have configured the attribute, click Save.

        • If you want to associate the action that you defined with an IP address or prefix, click Add IP Address.

          In the Prefix field, provide the IP address prefix. Both IPv4 and IPv6 prefixes are supported, for example, 2003:1:1a5:1a5::/64 or 205.205.0.0/16.

          If you want to aggregate IPs in a specific range, check the Aggregate check box and provide the range. For example, you can specify 0.0.0.0/0 prefix to match any IP or you can specify 10.0.0.0/8 prefix to match any 10.x.x.x addresses.

        • If you want to associate the action that you defined with community lists, click Add Community.

          In the Community field, provide the community string. For example, regular:as2-nn2:200:300.

          Then choose the Scope: Transitive means that the community will be propagated across eBGP peering (across autonomous systems) while Non-Transitive means the community will not be propagated.

        Note: You must specify an IP address or a Community string to match a specific prefix (even if you do not provide a Set attribute) because it defines the prefixes that must be announced out of the L3Out. These can be either BD subnets or transit routes learned from other L3Outs.


      5. Repeat the previous substeps to create any additional route map entries for the same policy.

      6. Click Save to save the policy and return to the template page.

      7. Repeat this step to create any additional Route Map for Route Control policies.
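      To make the Context Order and Context Action semantics above concrete, the Python sketch below evaluates contexts in ascending order (0-9) and returns the action and Set attributes of the first context whose prefix match succeeds. Community matching and the full APIC matching rules are omitted; all names are illustrative.

        import ipaddress
        from dataclasses import dataclass, field

        @dataclass
        class Context:
            order: int                                      # 0-9, evaluated in ascending order
            action: str                                     # "permit" or "deny"
            prefixes: list = field(default_factory=list)    # e.g. ["10.0.0.0/8"]
            aggregate: bool = False                         # match any route inside the prefix
            set_attrs: dict = field(default_factory=dict)   # e.g. {"weight": 100}

        def evaluate(route, contexts):
            """Return (action, set_attrs) of the first matching context, or None."""
            net = ipaddress.ip_network(route, strict=False)
            for ctx in sorted(contexts, key=lambda c: c.order):
                for pfx in ctx.prefixes:
                    p = ipaddress.ip_network(pfx)
                    if p.version != net.version:
                        continue
                    if net == p or (ctx.aggregate and net.subnet_of(p)):
                        return ctx.action, ctx.set_attrs
            return None

        # A 10.x.x.x route matches the aggregated 10.0.0.0/8 context and is permitted.
        print(evaluate("10.1.0.0/16", [Context(0, "permit", ["10.0.0.0/8"], True, {"weight": 100})]))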

    6. Create a Custom QoS Policy.

      You can create a custom QoS policy in Cisco APIC to classify ingressing traffic based on its DSCP or CoS values and associate it to a QoS priority level (QoS user class) to properly handle it inside the ACI fabric. Classification is supported only if the DSCP values are present in the IP header or the CoS values are present in the Ethernet header of ingressing traffic. Also, the custom QoS policy can be used to modify the DSCP or CoS values in the header of ingressing traffic.

      As an example, custom QoS policies allow you to classify traffic coming into the ACI fabric from devices that mark the traffic based only on the CoS value, such as Layer 2 packets, which do not have an IP header.

      For detailed information about QoS functionality in ACI fabrics, see Cisco APIC and QoS.

      1. From the +Create Object drop-down, select Custom QoS Policy.

      2. In the right properties sidebar, provide the Name for the policy.

      3. (Optional) Click Add Description and provide a description for the policy.

      4. Click +Add DSCP Mappings and provide the required information.

        The DSCP-mapping configuration allows you to associate ingressing traffic, whose DSCP value is within the range that is specified in the mapping, to the specified QoS priority level (class). It also allows you to set the DSCP or CoS values of the ingressing traffic, so that those values can be retained when the traffic egresses the fabric.

        Note: Retaining the target CoS value for egress traffic requires the configuration of the "Preserve CoS" policy, which is part of the NDO Fabric policies. If the "DSCP Target" or "Target CoS" values are set as part of both the DSCP Mapping and CoS Mapping, the values that are specified in the DSCP Mapping have precedence.


        For each mapping, you can specify the following fields:

        • DSCP From - The start of the DSCP range.

        • DSCP To - The end of the DSCP range.

        • DSCP Target - The DSCP value to set on ingressing traffic that will be retained for egressing traffic.

        • Target CoS - The CoS value to set on ingressing traffic that will be retained for egressing traffic when "Preserve CoS" is enabled.

        • Priority - The QoS priority class to which the traffic will be assigned.

        After you provide the mappings, click the check mark icon to save. Then you can click +Add DSCP Mappings to provide extra mappings within the same policy.

      5. Click Add to save the policy and return to the template page.

      6. Click +Add CoS Mappings and provide the required information.

        The CoS-mapping configuration allows you to associate ingressing traffic, whose CoS value is within the range that is specified in the mapping, to the specified QoS priority level (class). It also allows you to set the DSCP or CoS values of the ingressing traffic, so that those values can be retained when the traffic egresses the fabric.

        Note: Retaining the target CoS value for egress traffic requires the configuration of the "Preserve CoS" policy in the NDO Fabric policies. In addition, if the "DSCP Target" or "Target CoS" values are set as part of both the DSCP Mapping and CoS Mapping, the values that are specified in the DSCP Mapping have precedence.


        For each mapping, you can specify the following fields:

        • Dot1P From - The start of the CoS range.

        • Dot1P To - The end of the CoS range.

        • DSCP Target - The DSCP value to set on ingressing traffic that will be retained for egressing traffic.

        • Target CoS - The CoS value to set on ingressing traffic that will be retained for egressing traffic when "Preserve CoS" is enabled.

        • Priority - The QoS priority class to which the traffic will be assigned.

        After you provide the mappings, click the check mark icon to save. Then you can click +Add Cos Mappings to provide extra mappings within the same policy.

      7. Click Add to save the policy and return to the template page.

      8. Repeat this step to create any additional Custom QoS policies.
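      The DSCP-over-CoS precedence rule called out in the notes above can be summarized in a short Python sketch; the field names are illustrative and do not reflect the APIC object model.

        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class Mapping:
            low: int                            # "DSCP From" / "Dot1P From"
            high: int                           # "DSCP To" / "Dot1P To"
            priority: str                       # QoS priority class to assign
            dscp_target: Optional[int] = None   # retained for egressing traffic
            cos_target: Optional[int] = None    # retained when "Preserve CoS" is enabled

        def classify(dscp, cos, dscp_maps, cos_maps):
            """Return the matching mapping; a DSCP match takes precedence over a CoS match."""
            def match(value, maps):
                if value is None:
                    return None
                return next((m for m in maps if m.low <= value <= m.high), None)
            return match(dscp, dscp_maps) or match(cos, cos_maps)

        dscp_maps = [Mapping(10, 20, "level2", dscp_target=18, cos_target=3)]
        cos_maps = [Mapping(3, 5, "level5", cos_target=4)]
        print(classify(dscp=16, cos=4, dscp_maps=dscp_maps, cos_maps=cos_maps))  # DSCP mapping wins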

    7. Create a DHCP Relay Policy.

      This policy is part of the overarching DHCP Relay use case. You can use the information in this section as a reference, but we recommend following the full set of steps that are described in the DHCP Relay chapter of the Features and Use Cases section of this document.

      1. From the +Create Object drop-down, select DHCP Relay Policy.

      2. In the right properties sidebar, provide the Name for the policy.

      3. (Optional) Click Add Description and provide a description for the policy.

      4. Click Add Provider to configure the DHCP server to which you want to relay the DHCP requests originated by the endpoints.

      5. Select the provider type.

        When adding a relay policy, you can choose one of the following two types:

        • Application EPG - Specifies the application EPG that includes the DHCP server to which you want to relay the DHCP requests.

        • L3 External Network - Specifies the external EPG associated with the L3Out that is used to access the network external to the fabric where the DHCP server is connected.

        Note: You can select any EPG or external EPG that has been created in the Orchestrator and assigned to the tenant you specified, even if you have not yet deployed it to sites. If you select an EPG that hasn’t been deployed, you can still complete the DHCP relay configuration, but you need to deploy the EPG before the relay is available for use.


      6. Click Select an Application EPG or Select an External EPG (based on the provider type you selected) and choose the provider EPG.

      7. In the DHCP Server Address field, provide the IP address of the DHCP server.

      8. Enable the DHCP Server VRF Preference option if necessary.

        This feature was introduced in Cisco APIC release 5.2(4). For more information on the use cases where it is required see the Cisco APIC Basic Configuration Guide.

      9. Click OK to save the provider information.

      10. Repeat the previous substeps for any additional providers in the same DHCP Relay policy.

      11. Repeat this step to create any additional DHCP Relay policies.
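      As a recap of the fields collected in this step, the Python sketch below shows one DHCP Relay policy with an Application EPG provider and an L3 External Network provider. The structure and all names are illustrative only and do not represent the NDO schema.

        dhcp_relay_policy = {
            "name": "relay-to-core-dhcp",                       # hypothetical policy name
            "providers": [
                {
                    "type": "applicationEPG",                   # DHCP server behind an EPG
                    "epg": "tenant-a/ap-1/epg-dhcp",            # hypothetical EPG reference
                    "dhcpServerAddress": "10.10.10.10",
                    "dhcpServerVRFPreference": False,           # requires APIC 5.2(4) or later
                },
                {
                    "type": "l3ExternalNetwork",                # DHCP server reached via an L3Out
                    "externalEPG": "tenant-a/l3out-1/ext-epg",  # hypothetical external EPG reference
                    "dhcpServerAddress": "192.168.100.5",
                    "dhcpServerVRFPreference": False,
                },
            ],
        }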

    8. Create a DHCP Option Policy.

      This policy is part of the overarching DHCP Relay use case. You can use the information in this section as a reference, but we recommend following the full set of steps that are described in the DHCP Relay chapter of the Features and Use Cases section of this document.

      1. From the +Create Object drop-down, select DHCP Option Policy.

      2. In the right properties sidebar, provide the Name for the policy.

      3. (Optional) Click Add Description and provide a description for the policy.

      4. Click Add Option.

      5. Provide option details.

        For each DHCP option, provide the following:

        • Name - While not technically required, we recommend using the same name for the option as listed in RFC 2132.

          For example, Name Server.

        • Id - Provide the numeric code of the option as listed in RFC 2132.

          For example, 5 for the Name Server option.

        • Data - Provide the value if the option requires one.

          For example, a list of name servers available to the client for the Name Server option.

      6. Click OK to save.

      7. Repeat the previous substeps for any additional options in the same DHCP Option policy.

      8. Repeat this step to create any additional DHCP Option policies.
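      As an example of the Name, Id, and Data fields, the Name Server option is code 5 and the Domain Name option is code 15 in RFC 2132. The Python sketch below shows one DHCP Option policy built from those values; the policy name and data values are illustrative.

        dhcp_option_policy = {
            "name": "branch-dhcp-options",   # hypothetical policy name
            "options": [
                {"name": "Name Server", "id": 5, "data": "10.0.0.53 10.0.1.53"},
                {"name": "Domain Name", "id": 15, "data": "example.com"},
            ],
        }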

    9. Create an IGMP Interface Policy.

      IGMP snooping examines IP multicast traffic within a bridge domain to discover the ports where interested receivers reside. Using the port information, IGMP snooping can reduce bandwidth consumption in a multiaccess bridge domain environment to avoid flooding the entire bridge domain.

      For detailed information on IGMP snooping in ACI fabrics, see the "IGMP Snooping" chapter of the Cisco APIC Layer 3 Networking Configuration Guide for your release.

      1. From the +Create Object drop-down, select IGMP Interface Policy.

      2. In the right properties sidebar, provide the Name for the policy.

      3. (Optional) Click Add Description and provide a description for the policy.

      4. Provide policy details.

        • Allow Version 3 ASM - Allow accepting IGMP version 3 source-specific reports for multicast groups outside of the SSM range. When this feature is enabled, the switch creates an (S,G) mroute entry if it receives an IGMP version 3 report that includes both the group and source even if the group is outside of the configured SSM range. This feature is not required if hosts send (*, G) reports outside of the SSM range, or send (S,G) reports for the SSM range.

        • Fast Leave - Option that minimizes the leave latency of IGMPv2 group memberships on a given IGMP interface because the device does not send group-specific queries. When Fast Leave is enabled, the device removes the group entry from the multicast routing table immediately upon receiving a leave message for the group. The default is disabled.

          Use this only when there is only one receiver behind the BD/interface for a given group.

        • Report Link Local Groups - Enables sending reports for groups in 224.0.0.0/24. Reports are always sent for nonlink local groups. By default, reports are not sent for link local groups.

        • IGMP Version - IGMP version that is enabled on the bridge domain or interface. The IGMP version can be 2 or 3. The default is 2.

        • Advanced Settings - Click the arrow next to this section to expand.

          • Group Timeout - Group membership interval that must pass before the router decides that no members of a group or source exist on the network. Values range 3-65,535 seconds. The default is 260 seconds.

          • Query Interval - Sets the frequency at which the software sends IGMP host query messages. Values can range 1-18,000 seconds. The default is 125 seconds.

          • Query Response Interval - Sets the response time that is advertised in IGMP queries. Values can range 1-25 seconds. The default is 10 seconds.

          • Last Member Count - Sets the number of times that the software sends an IGMP query in response to a host leave message. Values can range 1-5. The default is 2.

          • Last Member Response Time - Sets the query interval waited after sending membership reports before the software deletes the group state. Values can range 1-25 seconds. The default is 1 second.

          • Startup Query Count - Sets the number of snooping queries that are sent at startup when you do not enable Protocol Independent Multicast because multicast traffic does not need to be routed. Values can range 1-10. The default is 2 messages.

          • Startup Query Interval - Sets the IGMP snooping query interval at startup. Values can range 1-18,000 seconds. The default is 125 seconds.

          • Querier Timeout - Sets the query timeout that the software uses when deciding to take over as the querier. Values can range 1-65,535 seconds. The default is 255 seconds.

          • Robustness Variable - Sets the robustness variable. You can use a larger value for a lossy network. Values can range 1-7. The default is 2.

          • State Limit Route Map - Used with Reserved Multicast Entries feature.

            The route map policy must be already created as described in Step 2.

          • Report Policy Route Map - Access policy for IGMP reports that is based on a route-map policy. IGMP group reports will only be selected for groups that are allowed by the route-map.

            The route map policy must be already created as described in Step 2.

          • Static Report Route Map - Statically binds a multicast group to the outgoing interface, which is handled by the switch hardware. If you specify only the group address, the (*, G) state is created. If you specify the source address, the (S, G) state is created. You can specify a route-map policy name that lists the group prefixes, group ranges, and source prefixes. A source tree is built for the (S, G) state only if you enable IGMPv3.

            The route map policy must be already created as described in Step 2.

          • Maximum Multicast Entries - Limit the mroute states for the BD or interface that are created by IGMP reports. Default is disabled and no limit is enforced. Valid range is 1-4294967295.

      5. Repeat this step to create any additional IGMP Interface policies.
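      A compact way to keep the defaults listed above in one place is a small Python dataclass; the values mirror this section and the field names are illustrative, not the APIC object model.

        from dataclasses import dataclass

        @dataclass
        class IGMPInterfacePolicy:
            allow_v3_asm: bool = False
            fast_leave: bool = False
            report_link_local_groups: bool = False
            igmp_version: int = 2
            group_timeout: int = 260            # seconds, 3-65,535
            query_interval: int = 125           # seconds, 1-18,000
            query_response_interval: int = 10   # seconds, 1-25
            last_member_count: int = 2          # 1-5
            last_member_response_time: int = 1  # seconds, 1-25
            startup_query_count: int = 2        # 1-10
            startup_query_interval: int = 125   # seconds, 1-18,000
            querier_timeout: int = 255          # seconds, 1-65,535
            robustness_variable: int = 2        # 1-7

        policy = IGMPInterfacePolicy(fast_leave=True)  # override only what differs from the defaults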

    10. Create an MLD Snooping Policy.

      Multicast Listener Discovery (MLD) snooping enables the efficient distribution of IPv6 multicast traffic between hosts and routers. It is a Layer 2 feature that restricts IPv6 multicast traffic within a bridge domain to a subset of ports that have sent or received MLD queries or reports. In this way, MLD snooping provides the benefit of conserving the bandwidth on those segments of the network where no node has expressed interest in receiving the multicast traffic. This reduces the bandwidth usage instead of flooding the bridge domain, and also helps hosts and routers save unwanted packet processing.

      For detailed information on MLD snooping in ACI fabrics, see the "MLD Snooping" chapter of the Cisco APIC Layer 3 Networking Configuration Guide for your release.

      1. From the +Create Object drop-down, select MLD Snooping Policy.

      2. In the right properties sidebar, provide the Name for the policy.

      3. (Optional) Click Add Description and provide a description for the policy.

      4. Provide policy details.

        • Admin State - Enables or disables the MLD snooping feature.

        • Fast Leave Control - Allows you to turn on or off the fast-leave feature on a per bridge domain basis. This applies to MLDv2 hosts and is used on ports that are known to have only one host doing MLD behind that port.

          Default is disabled.

        • Querier Control - Enables or disables MLD snooping querier processing. MLD snooping querier supports the MLD snooping in a bridge domain where PIM and MLD are not configured because the multicast traffic does not need to be routed.

          Default is disabled.

        • Querier Version - Allows you to choose the querier version.

          Default is Version2.

        • Advanced Settings - Click the arrow next to this section to expand.

          • Query Interval - Sets the frequency at which the software sends MLD host query messages. Values can range 1-18,000 seconds.

            The default is 125 seconds.

          • Query Response Interval - Sets the response time that is advertised in MLD queries. Values can range 1-25 seconds.

            The default is 10 seconds.

          • Last Member Query Interval - Sets the query response time after sending membership reports before the software deletes the group state. Values can range 1-25 seconds.

            The default is 1 second.

          • Start Query Count - Sets the number of snooping queries that are sent at startup when you do not enable PIM because multicast traffic does not need to be routed. Values can range 1-10.

            The default is 2.

          • Start Query Interval - Configures a snooping query interval at startup when you do not enable PIM because multicast traffic does not need to be routed. Values can range 1-18,000 seconds.

            The default is 31 seconds.

      5. Repeat this step to create any additional MLD Snooping policies.

    11. Create an L3Out Node Routing Policy.

      This policy is part of the overarching L3Out and SR-MPLS L3Out configuration use case. You can use the information in this section as a reference, but we recommend following the full set of steps that are described in the External Connectivity (L3Out) chapter of the Features and Use Cases section of this document.

      1. In the main pane, choose Create Object > L3Out Node Routing Policy.

        Figure 1. Create Object
      2. Provide the Name for the policy, and Add at least one of the BFD MultiHop Settings, BGP Node Settings, or BGP Best Path Control options.

        Figure 2. BFD MultiHop Settings
        • BFD MultiHop Settings - provides forwarding failure detection for destinations with more than one hop.

          In this case, a multihop session is created between the source and the destination, instead of between directly connected interfaces as in single-hop scenarios.

          Note: BFD MultiHop configuration requires Cisco APIC release 5.0(1) or later.


        • BGP Node Settings - allows you to configure BGP protocol timer and sessions settings for BGP adjacencies between BGP peers.

        • BGP Best Path Control - enables as-path multipath-relax, which allows load-balancing between multiple paths that are received from different BGP ASNs.

    12. Create an L3Out Interface Routing Policy.

      This policy is part of the overarching L3Out and SR-MPLS L3Out configuration use case. You can use the information in this section as a reference, but we recommend following the full set of steps that are described in the External Connectivity (L3Out) chapter of the Features and Use Cases section of this document.

      1. In the main pane, choose Create Object > L3Out Interface Routing Policy.

      2. Provide the Name for the policy, and define the BFD Settings, BFD Multi-Hop Settings, and OSPF Interface Settings.

        Figure 3. Create Object > L3Out Interface Routing Policy
        • BFD Settings - specifies BFD parameters for BFD sessions established between devices on interfaces that are directly connected.

          When multiple protocols are enabled between a pair of routers, each protocol has its own link failure detection mechanism, which may have different timeouts. BFD provides a consistent timeout for all protocols to allow consistent and predictable convergence times.

        • BFD MultiHop Settings - specifies BFD parameters for BFD sessions established between devices on interfaces that are not directly connected.

          You can configure these settings at the node level in the L3Out Node Routing Policy described in the previous step, in which case the interfaces inherit those settings, or you can overwrite the node-level settings for individual interfaces in the Interface Routing group policy.

          Note: BFD multi-hop configuration requires Cisco APIC release 5.0(1) or later.


        • OSPF Interface Settings - allows you to configure interface-level settings such as OSPF network type, priority, cost, intervals and controls.

          Note: This policy must be created when deploying an L3Out with OSPF.


    13. Create a BGP Peer Prefix Policy.

      This policy is part of the overarching L3Out and SR-MPLS L3Out configuration use case. You can use the information in this section as a reference, but we recommend following the full set of steps that are described in the External Connectivity (L3Out) chapter of the Features and Use Cases section of this document.

      1. In the main pane, choose Create Object > BGP Peer Prefix Policy.

      2. Provide the Name for the policy, and define the Max Number of Prefixes and the Action to take if the number is exceeded.

        The following actions are available:

        • Log

        • Reject

        • Restart

        • Shutdown
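        A minimal Python sketch of how the policy reads (names are illustrative): once the number of prefixes received from the peer exceeds Max Number of Prefixes, the configured action applies.

          from dataclasses import dataclass

          @dataclass
          class BGPPeerPrefixPolicy:
              max_prefixes: int
              action: str   # "log", "reject", "restart", or "shutdown"

          def exceeded_action(policy, received_prefixes):
              """Return the configured action once the peer exceeds the prefix limit."""
              return policy.action if received_prefixes > policy.max_prefixes else None

          print(exceeded_action(BGPPeerPrefixPolicy(max_prefixes=20000, action="restart"), 25000))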

    14. Create an IP SLA Monitoring Policy.

      This policy is part of the overarching L3Out and SR-MPLS L3Out configuration use case. You can use the information in this section as a reference, but we recommend following the full set of steps that are described in the External Connectivity (L3Out) chapter of the Features and Use Cases section of this document.

      1. In the main pane, choose Create Object > IP SLA Monitoring Policy.

      2. Provide the Name for the policy, and define its settings.

        Note: If you choose HTTP for the SLA Type, your fabric must be running Cisco APIC release 5.1(3) or later.


    15. Create an IP SLA Track List.

      This policy is part of the overarching L3Out and SR-MPLS L3Out configuration use case. You can use the information in this section as a reference, but we recommend following the full set of steps that are described in the External Connectivity (L3Out) chapter of the Features and Use Cases section of this document.

      1. In the main pane, choose Create Object > IP SLA Track List.

      2. Provide the Name for the policy.

      3. Choose the Type.

        The definition of a route being available or not available can be based on Threshold Percentage or Threshold Weight.

      4. Click +Add Track List to Track Member Relation to add one or more track members to this track list.

        Note: You must select a bridge domain or an L3Out to associate with the track member. If you have not yet created the bridge domain (BD) or L3Out, you can skip adding a track member, save the policy without assigning one, and come back to it after you have created the BD or L3Out.


      5. In the Add Track List to Track Member Relation dialog, provide the Destination IP and Scope Type, and choose the IP SLA Monitoring Policy.

        The scope for the track list can be either bridge domain or L3Out. The IP SLA Monitoring policy is the one you created in the previous step.
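      The Threshold Percentage and Threshold Weight choice above can be read as in the Python sketch below. The exact semantics are not spelled out in this document, so the sketch assumes the usual track-list behavior: the list is considered up when the percentage, or the summed weight, of up members reaches the up threshold.

        def track_list_up(member_states, threshold_type="percentage",
                          up_threshold=100, weights=None):
            """member_states is a list of booleans, one per track member."""
            if not member_states:
                return False
            if threshold_type == "percentage":
                score = 100 * sum(member_states) / len(member_states)
            else:  # "weight"
                weights = weights or [10] * len(member_states)  # assumed default weight
                score = sum(w for up, w in zip(member_states, weights) if up)
            return score >= up_threshold

        print(track_list_up([True, True, False], up_threshold=50))  # True: 2 of 3 members are up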

    16. Click Save to save the changes you’ve made to the template.

      Note: When you save (or deploy) the template to one or more sites, the Orchestrator verifies that the specified nodes or interfaces are valid for the sites and returns an error if they are not.


    17. Click Deploy to deploy the template to the associated sites.

      The process for deploying tenant policy templates is identical to how you deploy application templates.

      If you have previously deployed this template but made no changes to it since, the Deploy summary indicates that there are no changes, and you can choose to redeploy the entire template. In this case, you can skip this step.

      Otherwise, the Deploy to sites window shows you a summary of the configuration differences that will be deployed to sites. Note that in this case only the difference in configuration is deployed to the sites. If you want to redeploy the entire template, you must deploy once to sync the differences, and then redeploy again to push the entire configuration as described in the previous paragraph.


First Published: 2024-03-01
Last Modified: 2024-03-01

Americas Headquarters
Cisco Systems, Inc.
170 West Tasman Drive
San Jose, CA 95134-1706
USA
http://www.cisco.com
Tel: 408 526-4000
800 553-NETS (6387)
Fax: 408 527-0883