Contents
Introduction
    Preface
    Audience
    Scope
Cisco APIC Overview
Hardware and Software Support
    Cisco ACI
    F5 BIG-IP
F5 Device Package
    F5 Device Package Supported Features
    F5 Device Package Supported Functions
Cisco ACI L4-L7 Service Insertion Configuration
    Prerequisites
        Configuring BIG-IP
            Access the Management IP Address
            Install the License
        Integrating the Virtual Machine Manager When Using Virtual Machines
        Defining Fabric Access Policies to Communicate with F5 Hardware
        Creating Tenants
        Create the Private Network and Bridge Domain
        Create the Application Profile and EPG
    Import the F5 Device Package
    Create a Logical Device Cluster
        Creating the Physical BIG-IP Device Cluster
        Creating the Virtual BIG-IP Device Cluster
    Export the F5 Device Cluster from Tenant Common
    Create a Function Profile
        Configuring Service Parameters for L4-L7 Server Load Balancing
    Create the Service Graph Template
        Configuring the One Node Option
        Configuring the Two Nodes Option
        Configuring Dynamic Attached Endpoint Notification
    Apply the L4-L7 Service Graph Template
        Creating a One-Arm Mode Service Graph Template
        Creating a Two-Arm Mode Service Graph Template
    View the Configuration Pushed to BIG-IP
        One-Arm Configuration
            Partition Creation
            VLAN Creation
            Static Route Creation
            Self-IP Address Creation
            Node Creation
            Monitor Creation
            Pool Creation
            Virtual Server Creation
            Overall View Using the Network Map
        Two-Arm Configuration
            Partition Creation
            VLAN Creation
            Self-IP Address Creation
            Node Creation
            Monitor Creation
2015 Cisco | F5. All rights reserved.
Introduction
Preface
This document discusses how to deploy F5 BIG-IP Local Traffic Manager (LTM) with the Cisco Application
Centric Infrastructure (Cisco ACI) using a device package (an XML definition of BIG-IP LTM functions) within the
data center. It presents the most commonly deployed use cases for Layer 4 through Layer 7 (L4-L7) services in
today's data center, integrated with the Cisco Application Policy Infrastructure Controller (APIC).
The topologies used in this document can be altered to reflect a setup and design that meets the customer's
specific needs and environment.
Audience
This document is intended for use by network architects and engineers to aid in developing solutions for Cisco
ACI and F5 L4-L7 service insertion and automation.
Scope
This document defines the design recommendations for BIG-IP LTM integration into Cisco ACI architecture to
provide network services. Limited background information is included about other related components whose
understanding is required for the solution implementation.
More details about the Cisco ACI design can be found in Cisco ACI design guide.
Information about Cisco ACI service insertion can be found in the Cisco ACI white paper.
Details about F5 BIG-IP LTM can be found in BIG-IP LTM documentation.
Extensibility and openness: open source, open APIs, and open software flexibility for development and
operations (DevOps) teams and ecosystem partner integration
Figure 1:
The APIC acts as a central point of configuration management and automation for L4-L7 services and tightly
coordinates service delivery, serving as the controller for network automation (Figure 1). A service appliance
(device) performs a service function defined in the service graph. One or more service appliances may be
required to render the services required by a service graph. A single service device can perform one or more
service functions.
The APIC enables the user to define a service graph or chain of service functions in the form of an abstract graph:
for example, a graph of the web application firewall (WAF) function, the load-balancing function, and the network
firewall function. The graph defines these functions based on a user-defined policy. One or more service
appliances may be needed to render the services required by the service graph. These service appliances are
integrated into the APIC using southbound APIs built into a device package that contains the XML schema of the
F5 device model. This schema defines the software version, functions provided by BIG-IP LTM (SSL termination,
Layer 4 server load balancing [SLB], etc.), parameters required to configure each function, and network
connectivity details. It also includes Python scripts that map APIC events to function calls to BIG-IP LTM.
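The event-to-call mapping described above can be sketched in Python. This is an illustration only, not the actual device package code: the function name, event shape, and parameter names are hypothetical; only the /mgmt/tm/ltm/pool iControl REST collection is a real BIG-IP endpoint.

```python
# Hypothetical sketch of what a device package script does: an APIC event
# carrying function parameters is translated into the iControl REST
# endpoint and JSON body that BIG-IP understands. Names are illustrative.

def apic_event_to_bigip_call(event):
    """Translate a (simplified) APIC service-graph event into an
    iControl REST path and body for a BIG-IP pool."""
    params = event["parameters"]
    body = {
        "name": params["poolName"],
        "monitor": params.get("monitor", "http"),
        "members": [
            {"name": "%s:%s" % (ip, params["port"])}
            for ip in params["memberIPs"]
        ],
    }
    # The script would POST this body to https://<bigip-mgmt>/mgmt/tm/ltm/pool
    return "/mgmt/tm/ltm/pool", body

endpoint, body = apic_event_to_bigip_call({
    "parameters": {
        "poolName": "web_pool",
        "port": 80,
        "memberIPs": ["10.0.1.11", "10.0.1.12"],
    },
})
print(endpoint)  # /mgmt/tm/ltm/pool
```

In the real package, equivalent logic runs inside the APIC-hosted Python scripts, driven by the XML schema of the F5 device model.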
The joint solution uses the Cisco Nexus 9000 Series Switches (Figure 2).
Figure 2:
Spine switches: The spine provides the mapping database function and connectivity among leaf switches.
Leaf switches: The leaf switches provide physical and server connectivity and policy enforcement.
APIC: The controller is the point of configuration for policies and the place at which statistics are archived
and processed to provide visibility, telemetry, application health information, and overall management for
the fabric. The APIC is a physical server appliance such as a Cisco UCS C220 M3 Rack Server.
The designs in this document have been validated on APIC Version 1.1(1o).
For more information about Cisco ACI hardware, please refer to the discussion at Cisco Application Centric
Infrastructure.
F5 BIG-IP
F5 VIPRION, BIG-IP, and BIG-IP Virtual Edition (VE) running software Versions 11.4.1 and later can be
integrated with Cisco ACI (Figure 3).
Figure 3:
If the F5 VE is being used, VMware vSphere integration must be performed with APIC before you use VE.
Cisco ACI can be closely integrated with the server virtualization layer. In practice, this integration means that
instantiating application policies through Cisco ACI will result in the equivalent constructs at the virtualization layer
(that is, port groups) being created automatically and mapped to the Cisco ACI policy.
After the integration with VMware vCenter is completed, the fabric or tenant administrator creates endpoint groups
(EPGs), contracts, and application profiles as usual. Upon creation of an EPG, the corresponding port group is
created at the virtualization level. The server administrator then connects virtual machines to these port groups.
F5 Device Package
F5 Device Package Supported Features
The F5 device package supports the following features:
Dynamic endpoint attach and detach: Endpoints either can be prespecified in corresponding EPGs
(statically at any time) or added dynamically as they are attached to Cisco ACI. Endpoints are tracked by
a special endpoint registry mechanism of the policy repository. This tracking gives the APIC visibility into
the attached endpoints. The APIC passes this information to BIG-IP. From BIG-IP's point of view, the
attached endpoint is a member of a pool, and hence BIG-IP converts the APIC call that the device
package receives into the addition of a member to a particular pool.
Self-IP addresses: A self-IP address is an IP address in the BIG-IP system that you associate with a
VLAN, to access hosts in that VLAN. Through its netmask, a self-IP address represents an address
space (that is, a range of IP addresses spanning the hosts in the VLAN) rather than a single host
address. You can associate self-IP addresses not only with VLANs, but also with VLAN groups.
Static routes: As part of managing routing on a BIG-IP system, you add static routes for destinations
that are not located on the directly connected network.
Listener: A virtual server is an IP address and port specification on the BIG-IP system. The BIG-IP
system listens for traffic destined for that virtual server and then directs that traffic either to a specific
host for load balancing or to an entire network.
Server pools: A load balancing pool is a set of devices, such as web servers, that you group together
to receive and process traffic. Instead of sending client traffic to the destination IP address specified
in the client request, the LTM system sends the request to any of the servers that are members of
that pool.
Monitor: The LTM system can monitor the health and performance of pool members and nodes that
the LTM system supports.
iRules: An iRule is a powerful and flexible feature in the LTM system that you can use to manage
your network traffic. Using syntax based on the industry-standard Tools Command Language (Tcl),
the iRules feature not only allows you to select pools based on header data, but also allows you to
direct traffic by searching on any type of content data that you define. Thus, the iRules feature
significantly enhances your ability to customize your content switching to suit your exact needs.
Source Network Address Translation (SNAT) pool management: SNAT translates the source IP
address in a connection to a BIG-IP system IP address that you define. The destination node then
uses that new source address as its destination address when responding to the request.
Support for active-standby high-availability model per APIC logical device cluster
Configuration of BIG-IP licensing and out-of-band (OOB) management prior to APIC integration
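Two of the objects listed above, the virtual server (listener) and the SNAT pool, can be illustrated as iControl REST bodies. This is a hedged sketch: /mgmt/tm/ltm/virtual and /mgmt/tm/ltm/snatpool are the standard iControl REST collections, but the object names and addresses below are examples only.

```python
# Build (but do not send) iControl REST bodies for a virtual server and a
# SNAT pool, mirroring the listener and SNAT features described above.

def virtual_server_body(name, destination, pool, snatpool=None):
    body = {
        "name": name,
        "destination": destination,  # e.g. "10.0.2.100:80"
        "ipProtocol": "tcp",
        "pool": pool,
    }
    if snatpool:
        # Translate client source addresses to members of the SNAT pool
        body["sourceAddressTranslation"] = {"type": "snat", "pool": snatpool}
    return body

def snat_pool_body(name, addresses):
    return {"name": name, "members": list(addresses)}

vs = virtual_server_body("vs_web", "10.0.2.100:80", "web_pool",
                         snatpool="snat_web")
sp = snat_pool_body("snat_web", ["10.0.3.5", "10.0.3.6"])
print(vs["sourceAddressTranslation"]["type"])  # snat
```

When the service graph is rendered, the device package issues equivalent calls on your behalf; you never build these bodies manually.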
For updated information about the parameters, please refer to the user guide for the particular version of the F5
device package being used.
You can download the F5 device package from https://downloads.f5.com.
F5 Device Package Supported Functions
Layer 4 SLB: Acts on data found in the network- and transport-layer protocols (IP, TCP, FTP, and UDP).
Layer 7 SLB
Configuring BIG-IP
Before BIG-IP can be used for L4-L7 service insertion, it needs access to the management network and it needs
to be licensed out of band.
3. Click <No> and continue with the wizard to enter your own IP address, netmask, and default route.
3. After the configuration is saved, choose System from the main menu. Then choose License.
4. Apply the registration key to license the system and follow the wizard.
Click the following links for more information about license installation:
VLAN pool
Interface policies
Switch policies
Click here for a detailed explanation of the steps for configuring these entities (look in the fabric connectivity
section).
Creating Tenants
A tenant is a logical container or a folder for application policies. This container can represent an actual tenant, an
organization, or a domain, or it can be used simply for convenience in organizing information. A tenant represents
a unit of isolation from a policy perspective, but it does not represent a private network.
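A tenant is created on the APIC by posting an fvTenant managed object to the REST API (/api/mo/uni.json). The sketch below only builds the JSON body; the tenant name and description are examples.

```python
import json

# Build the APIC REST body for creating a tenant (fvTenant managed object).
# The body would be POSTed to https://<apic>/api/mo/uni.json.

def tenant_body(name, description=""):
    return {"fvTenant": {"attributes": {"name": name, "descr": description}}}

payload = json.dumps(tenant_body("F5_Tenant",
                                 "tenant for BIG-IP service insertion"))
print(payload)
```

The same result is achieved through the APIC GUI steps in this section; the REST form is shown only to make the object model concrete.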
Two-arm mode: One tenant has two bridge domains (one representing the client subnet, and one
representing the server subnet).
One-arm mode: One tenant has three bridge domains (one representing the client subnet, one
representing the F5 subnet, and one representing the server subnet).
For more information about design decisions, refer to the F5 and Cisco design document.
Where they are in the structure of the data center (such as the DMZ)
For a typical deployment in two-arm mode, one tenant has two EPGs: one representing the client bridge domain,
and one representing the server bridge domain.
For a typical deployment in one-arm mode, one tenant has three EPGs: one representing the client bridge
domain, one representing the F5 bridge domain, and one representing the server bridge domain.
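The one-arm layout above can be sketched as the managed-object tree posted to the APIC: one application profile (fvAp) containing three EPGs, each bound to its bridge domain through an fvRsBd relation. The profile, EPG, and bridge domain names are examples only.

```python
# Hedged sketch of the APIC object tree for a one-arm deployment:
# an application profile with client, F5, and server EPGs, each
# referencing its bridge domain by name via fvRsBd.

def epg(name, bd):
    return {"fvAEPg": {
        "attributes": {"name": name},
        "children": [{"fvRsBd": {"attributes": {"tnFvBDName": bd}}}],
    }}

app_profile = {"fvAp": {
    "attributes": {"name": "one_arm_app"},
    "children": [
        epg("client_epg", "client_bd"),
        epg("f5_epg", "f5_bd"),
        epg("server_epg", "server_bd"),
    ],
}}
print(len(app_profile["fvAp"]["children"]))  # 3
```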
Click here for a detailed explanation of the EPG configuration steps.
3. Expand the menu at the left to verify that the device package is installed.
The logical device cluster can be created in tenant common or in your created tenant. The advantage of creating
it in tenant common is that this logical device cluster can then be exported to multiple tenants and used by
multiple tenants. If you define the logical device cluster within your own tenant, then this cluster can be used by
only this one particular tenant.
To add a logical device cluster, follow these steps:
1. Navigate to the tenant of your choice and choose L4-L7 Services > Function Profiles > L4-L7 Devices.
2. Right-click L4-L7 Devices and then select Create L4-L7 Devices.
Depending on the mode and the equipment used, different parameters will be used to create the logical device
cluster.
Comments
Name
Enter the name of the device cluster (alphanumeric characters only; no special
characters).
Device Package
Select the BIG-IP device package from the pull-down menu (for example, F5-BIGIP1.2.0).
Model
Mode
Select HA Cluster because you are creating a device cluster with a single BIG-IP system.
Device Type
Context Aware
Select either Single or Multiple (select Single for one context; if you have more than one
context, then select Multiple).
Physical Domain
Select the APIC physical domain with which the BIG-IP ports are connected.
APIC to Device
Management
Connectivity
Select Out-Of-Band because you are using the management network for communication
to BIG-IP devices.
User Name
Enter the user ID on the BIG-IP system that has administrator access.
Password
Management IP Address
This is the unique IP address of the management port on BIG-IP device 1. This address
should correspond to the address that is used to access the BIG-IP web GUI.
Management Port
Select HTTPS.
Connects To
Select Port if the physical port of the BIG-IP system is connected to the Cisco ACI
fabric.
Select PC if the physical port is configured as a port channel from BIG-IP to the ACI
fabric.
Select vPC if the physical port is connected as a virtual port channel from BIG-IP to
the Cisco ACI fabric.
Physical Interfaces
Name
For a port channel, enter the physical interfaces used in X_Y notation (for
example, 1_1).
For a virtual port channel (vPC), enter the trunk name defined in the BIG-IP system.
Connects To
Select the Cisco ACI leaf node and port to which BIG-IP is connected.
Direction
3. Configure device 2 as shown here (needed only if you are deploying BIG-IP in high-availability mode):
Management IP
Address
This is the unique IP address of the management port on BIG-IP device 2 (for example,
the address that is used to access the BIG-IP web GUI).
Management Port
Select HTTPS.
Connects To
Select Port if the physical port of the BIG-IP system is connected to the Cisco ACI
fabric.
Select PC if the physical port is configured as a port channel from BIG-IP to the Cisco
ACI fabric.
Physical Interfaces
Name
Connects To
Select the Cisco ACI leaf node and port to which BIG-IP is connected.
Direction
Management Port
Select HTTPS.
5. Click Next.
6. You need to specify some additional minimum parameters for the high-availability cluster to form. Select
All Parameters and fill in the following values for the device host:
Host Name
Enter the unique fully qualified domain names (FQDNs) for the two devices (for example,
F5-1.example.com and F5-2.example.com).
7. Configure high availability as shown here (needed only if you are deploying BIG-IP in high-availability
mode; skip this step if you are deploying the solution in standalone mode):
High Availability
Interface Name
Enter the interface to use for the high-availability heartbeat. This is the interface that has
the direct link between the two devices (for example, 1_3).
High Availability
Self IP Address
Enter a unique IP address for each of the devices to be used for the heartbeat (for
example, 192.168.1.1 and 192.168.1.2).
High Availability
VLAN
Enter a unique VLAN tag that the BIG-IP can use for the heartbeat (for example, 300).
High Availability
Self IP Netmask
8. After you have completed these steps, the device cluster will be created. A few minutes may be needed
for all the configuration to be completed and the high-availability cluster to become stable. Depending on
the setup, a synchronization timeout may occur on the APIC. After the configuration is complete and
stable, verify it.
Here you can see that the device cluster is stable.
9. Log into BIG-IP and confirm that the device group has been formed (it will contain one or two BIG-IP
members depending on whether BIG-IP is deployed in standalone mode or high-availability mode).
You can also see that the device is online, active, and synchronized (only if you are deploying BIG-IP in
high-availability mode).
At this point the device cluster is created and ready for use in service graphs in Cisco ACI.
Comments
Name
Enter the name of the device cluster (alphanumeric characters only; no special
characters).
Device Package
Select the BIG-IP device package from the pull-down menu (for example, F5-BIGIP1.2.0).
Model
Mode
Select HA Cluster because you are creating a device cluster with a single BIG-IP system.
Device Type
Context Aware
Select Single.
VMM Domain
Select the APIC virtual machine manager (VMM) domain (the vCenter integration
domain).
APIC to Device
Management
Connectivity
Select Out-Of-Band because you are using the management network for communication
to BIG-IP devices.
User Name
Enter the user ID on the BIG-IP system that has administrator access.
Password
Management IP Address
This is the unique IP address of the management port on BIG-IP device 1. This address
should correspond to the address that is used to access the BIG-IP web GUI.
Management Port
Select HTTPS.
VM
Select the virtual machine that you want to use from the drop-down menu.
Connects To
Select Port if the physical port of the BIG-IP system is connected to the Cisco ACI
fabric.
Select PC if the physical port is configured as a port channel from BIG-IP to the Cisco
ACI fabric.
Physical Interfaces
Name
For a port channel, enter the physical interfaces used in X_Y notation (for
example, 1_1).
For a vPC, enter the trunk name defined in the BIG-IP system.
vNIC
Enter the network adaptor for the corresponding BIG-IP interface (for example, for 1_1,
enter network adaptor2, and for 1_2, enter network adaptor3).
Connects To
Select the Cisco ACI leaf node and port to which the ESXi host running the BIG-IP VE is connected.
Direction
3. Configure device 2 as shown here (needed only if you are deploying BIG-IP in high-availability mode):
Management IP
Address
This is the unique IP address of the management port on BIG-IP device 2. This address
should correspond to the address that is used to access the BIG-IP web GUI.
Management Port
Select HTTPS.
VM
Select the virtual machine that you want to use from the drop-down menu.
Connects To
Select Port if the physical port of the BIG-IP system is connected to the Cisco ACI
fabric.
Select PC if the physical port is configured as a port channel from BIG-IP to the Cisco
ACI fabric.
Physical Interfaces
Name
vNIC
Enter the network adaptor for the corresponding BIG-IP interface (for example, for 1_1,
enter network adaptor2, and for 1_2, enter network adaptor3).
Connects To
Select the Cisco ACI leaf node and port to which the ESXi host running the BIG-IP VE is connected.
Direction
requests both traverse (for example, a trunk port where there is a server VLAN and
an outside client VLAN).
4. Configure the cluster as shown here:
Management IP
Address
Management Port
Select HTTPS.
The remaining configuration is the same as for the physical appliance in high-availability mode.
Click here for more information about the APIC L4-L7 deployment.
2. Choose Export L4-L7 Devices, choose the L4-L7 device to export and the tenant to which you want to
export it. Then click Submit.
3. Navigate to your tenants and under L4-L7 Services > Imported Devices, you will see the logical device
cluster that you imported.
4. Give the function profile a name and select a profile from the drop-down list as a base (the drop-down list
includes all the function profiles that are provided in the device package by default).
5. Select the Parameters tab and select all in the Features list in the menu at the left. You will see all the
supported device package parameters that the user can edit.
Note the difference between Device Config and Function Config parameters:
Device Config parameters apply to the BIG-IP partition level and can be used by multiple virtual
servers.
Function Config parameters apply to the virtual server level and are unique to this particular virtual
server. The function configuration forms a relationship (referencing) with the device configuration.
6. After all the parameters needed have been updated, click Submit. The function profile will be created.
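The Device Config/Function Config relationship noted above can be made concrete with a small sketch. This is conceptual only (not an actual APIC payload): a Function Config entry references a Device Config entry by name, so several virtual servers can share one partition-level object.

```python
# Conceptual model of the referencing between the two parameter groups:
# Device Config objects live at the BIG-IP partition level; each virtual
# server's Function Config points back at them by name.

device_config = {
    "LocalTraffic/HTTP": {"XForwardedFor": "enabled"},  # partition level
}

function_config = {
    "vs_web": {"HTTPRel": "LocalTraffic/HTTP"},  # per virtual server
    "vs_api": {"HTTPRel": "LocalTraffic/HTTP"},  # same shared profile
}

def resolve(vs_name):
    """Follow a Function Config reference back to its Device Config object."""
    ref = function_config[vs_name]["HTTPRel"]
    return device_config[ref]

print(resolve("vs_web"))  # {'XForwardedFor': 'enabled'}
```

Changing the shared Device Config entry therefore affects every virtual server that references it, which is exactly why the two levels are kept separate in the device package.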
Folder path
Parameter name
Description
Device Config/LocalTraffic/ClientSSL
Certificate
Device Config/LocalTraffic/ClientSSL
Key
Device Config/LocalTraffic/ClientSSL
KeySecret
Device Config/LocalTraffic/FastL4
IdleTimeout
Device Config/LocalTraffic/HTTP
Pipelining
Device Config/LocalTraffic/HTTP
XForwardedFor
Device Config/LocalTraffic/Monitor
FailByAttempts
Device Config/LocalTraffic/Monitor
FrequencySeconds
Device Config/LocalTraffic/Monitor
ReceiveText
Device Config/LocalTraffic/Monitor
ReceiveText
Device Config/LocalTraffic/Monitor
SendText
Device Config/LocalTraffic/Monitor
Type
Monitor type.
Device Config/LocalTraffic/Pool
ActionOnServiceDown
Device Config/LocalTraffic/Pool
PoolType
Device Config/LocalTraffic/Pool
LBMethod
Device
Config/LocalTraffic/Pool/Member
ConnectRateLimit
Device
Config/LocalTraffic/Pool/Member
ConnectionLimit
Connection limit.
Device
Config/LocalTraffic/Pool/Member
IPAdress
Device
Config/LocalTraffic/Pool/Member
Port
Device
Config/LocalTraffic/Pool/Member
Ratio
Device
Config/LocalTraffic/Pool/PoolMonitor
PoolMonitorRel
Device Config/LocalTraffic/iRule
iRuleName
Device
Config/Network/ExternalSelfIP
Floating
Device
Config/Network/ExternalSelfIP
SelfIPAdress
Device
Config/Network/ExternalSelfIP
SelfIPMask
Device
Config/Network/ExternalSelfIP
PortLockdown
Device Config/Network/InternalSelfIP
Floating
Device Config/Network/InternalSelfIP
SelfIPAddress
Device Config/Network/InternalSelfIP
SelfIPMask
Device Config/Network/InternalSelfIP
PortLockdown
Device Config/Network/Route
DestinationIPAddress
Destination IP address
Device Config/Network/Route
DestinationNetmask
Device Config/Network/Route
NextHopIPAddress
Device Config/Network/SNATPool
SNATIPAddress
Function Config/ClientSSL
ClientSSLRel
Function Config/FastL4
FastL4Rel
Function Config/HTTP
HTTPRel
Function Config/HTTPRedirect
HTTPRedirectRel
Function Config/Listener
DestinationIPAddress
Function Config/Listener
DestinationNetmask
Function Config/Listener
DestinationPort
Function Config/Listener
Protocol
Function Config/Listener
SourceIPAddress
Function Config/Listener
SourceNetmark
Function Config/Listener
ConnectionMirroring
Function Config/iRule
iRuleRel
Function Config/Listener
SourcePort
Function Config/Listener
PersistenceProfileName
Function Config/Listener
FallbackPersistenceProfileName
Function Config/NetworkRelation
NetworkRel
Function Config/Pool
EPGConnectionRateLimit
Function Config/Pool
EPGConnectionLimit
Connection limit.
Function Config/Pool
EPGDestinationPort
Function Config/Pool
EPGRatio
Function Config/Pool
PoolRel
For two-arm mode, both external and internal self-IP addresses are configured.
For high availability, specify a floating IP address for both internal and external self-IP addresses.
The pool type will be defined as Static or Dynamic depending on whether dynamic attach notification is
needed.
To use HTTP redirect, double-click the HTTP Redirect folder under Device Config and then reference it in
Function Config.
Internally, after HTTP redirect is selected, that virtual server is assigned an F5-written redirect iRule.
The assumption is that two service graphs need to be deployed: one listening on port 80, and one
listening on port 443 (where the virtual server listening on port 80 will have the redirect iRule
redirecting traffic to the virtual server listening on port 443).
Any profiles (HTTP, persistence, etc.) and iRules used are referenced from the BIG-IP configuration. (The
profile name and iRule name are assumed to be added to BIG-IP out of band before they are referenced
in the APIC.)
For SSL offload, use the APIC to enable SSL traffic management for client-side traffic. The primary way
that you can control SSL network traffic is by configuring a client profile. A client profile is a type of traffic
profile that enables the BIG-IP system to accept and terminate any client requests that are sent using a
fully SSL-encapsulated protocol. The certificate management is performed by BIG-IP and not by the
APIC. The APIC is used only to reference the SSL profile that is created.
Install a key and certificate pair on the BIG-IP system to terminate client-side secure connections.
While deploying the service graph or creating a function profile from APIC:
Specify the certificate, key, and key secret for the client SSL profile. (These parameters are
present under Device Config > Local Traffic > Client SSL.)
Associate that profile with the virtual server. (You do this under Function Config > Client SSL.)
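On BIG-IP, the Certificate, Key, and KeySecret parameters above end up in a client SSL profile. The sketch below builds the body for the /mgmt/tm/ltm/profile/client-ssl iControl REST collection; the profile name, file names, and passphrase are examples, and in this solution the device package (not the administrator) issues the call.

```python
# Build the iControl REST body for a client SSL profile: the certificate
# and key pair installed on BIG-IP, plus an optional passphrase that
# corresponds to the APIC KeySecret parameter.

def client_ssl_body(name, cert, key, passphrase=None):
    chain = {"name": "default", "cert": cert, "key": key}
    if passphrase:
        chain["passphrase"] = passphrase  # maps to KeySecret on the APIC
    return {"name": name, "certKeyChain": [chain]}

profile = client_ssl_body("clientssl_web", "web.crt", "web.key",
                          passphrase="example-secret")
print(profile["certKeyChain"][0]["cert"])  # web.crt
```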
For updated information about the parameters, refer to the user guide for the particular version of the F5 device
package being used.
You can download the F5 device package from https://downloads.f5.com.
Cisco ACI treats services as an integral part of an application. Any services that are required are treated as a
service graph that is instantiated on the Cisco ACI fabric from the APIC. Users define the service for the
application, and service graphs identify the set of network or service functions that are needed by the application.
Each function is represented as a node.
After the graph is configured in the APIC, the APIC automatically configures the services according to the service
function requirements that are specified in the service graph. The APIC also automatically configures the network
according to the needs of the service function that is specified in the service graph, which does not require any
changes in the service device.
A service graph is represented as two or more tiers of an application with the appropriate service function inserted
between them. A service graph is inserted between the source and destination EPGs by a contract (Figure 4).
Figure 4:
Service Graph
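In the APIC object model, the insertion described above is expressed by a contract (vzBrCP) whose subject carries a relation to the graph template. This is a hedged sketch of that tree; the contract, subject, and graph names are examples.

```python
# Hedged sketch of the contract object that binds a service graph between
# two EPGs: a vzBrCP contract, a vzSubj subject, and a vzRsSubjGraphAtt
# relation naming the abstract graph template.

contract = {"vzBrCP": {
    "attributes": {"name": "web_contract"},
    "children": [{"vzSubj": {
        "attributes": {"name": "web_subject"},
        "children": [{"vzRsSubjGraphAtt": {
            "attributes": {"tnVnsAbsGraphName": "F5_SLB_Graph"},
        }}],
    }}],
}}

subject = contract["vzBrCP"]["children"][0]["vzSubj"]
print(subject["children"][0]["vzRsSubjGraphAtt"]["attributes"]["tnVnsAbsGraphName"])
```

The GUI steps in the following sections create the same relation; the sketch only shows where the graph name lands in the policy tree.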
When you create a service graph template, several template options are provided, based on the number of nodes
you need and the type, as described in the following sections.
Static pool: Using the APIC, you can define pool members statically: that is, when you deploy the service
graph, along with specifying other LTM parameters, you also assign the pool members (the pool type is
defined as Static). These static pool members are then configured on BIG-IP by the APIC.
Dynamic pool: For dynamic pool addition to work, the attachment notification on the function connector
(connecting to the server subnet) must be enabled on the APIC. In addition, you must define the pool as
Dynamic when you deploy the service graph. The user is not expected to supply the pool member
information.
Endpoints are added dynamically as they are attached to Cisco ACI. Endpoints are tracked by a special
endpoint registry mechanism of the policy repository. This tracking gives the APIC visibility into the
attached endpoints. The APIC passes this information to BIG-IP. From BIG-IP's point of view, this
endpoint is a member of a pool. The device package converts the APIC attached-endpoint notification
that it receives into an iControl message (Simple Object Access Protocol [SOAP] or representational state
transfer [REST] API supported by BIG-IP) that BIG-IP understands, and then adds that member to the
appropriate pool on BIG-IP.
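As a rough sketch of the conversion described above (the notification field names and the helper function are assumptions made for illustration; /mgmt/tm/ltm/pool/~partition~pool/members is BIG-IP's iControl REST collection for pool members), an attach notification might be mapped to a pool-member call like this:

```python
# Hypothetical sketch of what the device package does with an APIC
# endpoint-attach notification: derive the iControl REST URI and body
# that add the endpoint as a member of the dynamic pool.

def endpoint_to_pool_member(notification, service_port):
    """Map an attach notification to a pool-member REST URI and body."""
    ip = notification["ip"]              # endpoint IP learned by the fabric
    pool = notification["pool"]          # dynamic pool rendered on BIG-IP
    partition = notification["partition"]
    member = {"name": f"{ip}:{service_port}", "address": ip}
    uri = f"/mgmt/tm/ltm/pool/~{partition}~{pool}/members"
    return uri, member

uri, member = endpoint_to_pool_member(
    {"ip": "10.0.1.21", "pool": "web_pool", "partition": "apic-Tenant1"}, 80)
```

A detach notification would drive the inverse call, a DELETE of the same member, which is how the pool tracks endpoints leaving the fabric.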
This feature is useful when the workloads and servers to be load balanced must be increased or decreased
based on time, load, or other triggers. It is also useful when you need to load balance across a large
number of servers (100 or more), because manually adding those servers to the pool is time consuming.
After the template is created, depending on whether dynamic endpoint attachment is needed, one configuration
change is needed.
1. Expand the service graph template that is going to be used.
2. Choose Function Node ADC > Internal and select the Attachment Notification check box. (If you are
using static pools, then leave this check box disabled.)
After a service graph has been applied with a dynamically attached endpoint, you cannot change the parameters
on the LTM to reflect a static pool, or vice versa. If you need a static pool instead of a dynamic pool after the
graph has been deployed, you must create a new graph; the existing graph cannot be modified.
1-ARM Topology
1. After the one-arm mode service graph template is created, right-click the template and click Apply L4-L7
Service Graph Template. Assign the Consumer EPG, Provider EPG, and the contract name. You can
choose to create a new contract or use an existing contract created earlier.
3. Select the bridge domain to be used for F5. Click the All Parameters tab and the All features link. Here is
where you would enter all the LTM-related configurable parameters. If you used a function profile to
create the graph template, the values specified in the function profile will be already present at this time. If
you did not use a function profile, then you will need to add all the required parameters at this point. After
the parameters have been added, click Finish.
Refer to the F5 device package user guide for a description of the parameters exposed through the device package.
If the subnets used by the consumer, the provider, and F5 are different, make sure that a default route
is specified on the client, the server, and the BIG-IP device as needed. A route on BIG-IP can be added
through the APIC; for the client and server, you will need to configure routing out of band (outside
the APIC).
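The default route that the APIC can push to BIG-IP corresponds to a static-route object in BIG-IP's iControl REST API (/mgmt/tm/net/route). A minimal sketch, with a hypothetical helper name, route name, and gateway address:

```python
# Illustrative sketch only: the shape of the static-route object that
# ends up on BIG-IP. The route name and gateway here are placeholders.

def default_route_payload(partition, gateway):
    """Body for POST /mgmt/tm/net/route creating a default route."""
    return {
        "name": "apic_default_route",
        "partition": partition,   # the APIC-created partition
        "network": "default",     # shorthand for 0.0.0.0/0
        "gw": gateway,            # next hop toward the remote subnets
    }

route = default_route_payload("apic-Tenant1", "192.168.10.1")
```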
4. At this point, all the configurations will be pushed by the APIC to BIG-IP. You can view the status of the
graph under the tenant by choosing L4-L7 Services > Deployed Graph Instances (the applied state implies
that the graph has been pushed correctly with no faults).
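The same applied state shown in the GUI can be read programmatically by querying the APIC class vnsGraphInst over the standard REST class-query URL. The parsing helper and the sample response below are illustrative assumptions about the response shape:

```python
# Sketch: check deployed-graph state from the APIC REST API instead of
# the GUI. "applied" means the graph rendered with no faults. The APIC
# hostname and sample data are placeholders.

APIC = "https://apic.example.com"  # hypothetical APIC address
QUERY = f"{APIC}/api/node/class/vnsGraphInst.json"

def unapplied_graphs(response_json):
    """Return the dn of each graph instance whose state is not 'applied'."""
    bad = []
    for obj in response_json.get("imdata", []):
        attrs = obj["vnsGraphInst"]["attributes"]
        if attrs.get("state") != "applied":
            bad.append(attrs.get("dn"))
    return bad

# Shape of an APIC class-query response (sample data, not a real fabric):
sample = {"imdata": [
    {"vnsGraphInst": {"attributes": {"dn": "g1", "state": "applied"}}},
    {"vnsGraphInst": {"attributes": {"dn": "g2", "state": "failed-to-apply"}}},
]}
bad = unapplied_graphs(sample)
```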
2-ARM Topology
1. After the two-arm mode service graph template is created, right-click the template and click Apply L4-L7
Service Graph Template. Assign the Consumer EPG, Provider EPG, and the contract name. You can
create a new contract or use an existing contract created earlier.
4. If this is a high-availability setup, you need to specify the node in the cluster to which the self-IP
address belongs. To do this, expand the network folder, double-click the external self-IP address and the internal self-IP address, and then update the Apply to Specific Device parameter (choose a device from the drop-down list).
5. After you have added or updated all the parameters, click Finish.
6. At this point, the configuration will be pushed by the APIC to BIG-IP. You can view the status of the graph
under the tenant by choosing L4-L7 Services > Deployed Graph Instances (the applied state implies that
the graph has been pushed correctly with no faults).
If more than one graph needs to be deployed, use the exact same steps to apply the second graph.
One-Arm Configuration
Partition Creation
In the upper-right corner, click the drop-down list. You should see a partition created by the APIC.
VLAN Creation
On the Network tab, choose VLANs.
Node Creation
On the Local Traffic tab, choose Nodes.
The nodes can be assigned statically, or they can be added dynamically (if you are using a dynamically attached
endpoint).
Monitor Creation
On the Local Traffic tab, choose Monitors and click the APIC-created monitor.
Pool Creation
On the Local Traffic tab, choose Pools and click the APIC-created pool.
Two-Arm Configuration
Partition Creation
In the upper-right corner, click the drop-down list. You should see a partition created by the APIC.
VLAN Creation
On the Network tab, choose VLANs.
Node Creation
On the Local Traffic tab, choose Nodes.
The nodes can be assigned statically, or they can be added dynamically (if you are using a dynamically attached
endpoint).
Monitor Creation
On the Local Traffic tab, choose Monitors and click the APIC-created monitor.
Pool Creation
On the Local Traffic tab, choose Pools and click the APIC-created pool.
One-arm mode: Both network adapters are assigned the same port group because in one-arm
mode, only one bridge domain and one EPG are assigned to BIG-IP.
Two-arm mode: The network adapters are assigned different port groups because in two-arm mode,
one bridge domain and EPG are assigned as the consumer, and another bridge domain and EPG are
assigned as the provider.
Topology
Configure the access policies on the APIC as described earlier in this document to establish link-level connectivity
with BIG-IP (Figure 8).
Figure 8:
Configure BIG-IP
Follow these steps to configure BIG-IP:
1. Create a VLAN with an ID of X (for example, 255) on BIG-IP. This is the same VLAN ID that you will use
on the APIC for static binding.
2. Create a trunk on BIG-IP tagged to VLAN X, belonging to interface 1/1.Y (for example, Y = 3; this value
refers to the port on BIG-IP that is physically connected to the leaf). Verify that the interface is up and in
the initialized state.
3. The EPG has a static binding to the interface on the leaf to which BIG-IP is physically connected (port Y
to which BIG-IP is physically connected and VLAN ID X as mentioned earlier). Specify the interface on
the APIC using a policy group.
The Cisco ACI fabric can route traffic between the different networks because two contracts have been created
separately.
2. Create a new logical device using the new device package and the new BIG-IP system. Make sure that it
is in a stable state.
3. Create a new graph template using the new logical device. The new graph template is the same type as
the old graph template (for example, the old graph name is WebGraph1ArmStatic, and the new graph
name is WebGraph1ArmStatic120).
4. Create a new device selection policy using the same contract name and associate it with the new graph
and new logical device.
5. Save the existing L4-L7 parameter configuration (in the save dialog, choose Copy and set Scope to Subtree).
6. Based on the saved configuration, create a new set of L4-L7 parameters that use the new service graph:
Copied configuration:
7. Create a new XML post in which the graphNameOrLbl property of the managed objects refers to the
new graph name. Do not modify the existing configuration; this is a new configuration using the new graph.
Migration does not happen at this time. Traffic is still passing through the old logical device using the old
device package.
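Step 7 can be sketched programmatically: take the saved XML and rewrite every graphNameOrLbl attribute to the new graph name before posting it back. The folder, contract, and node names below are placeholders; vnsFolderInst and graphNameOrLbl are the APIC object class and property referred to in the text:

```python
# Sketch: retarget saved L4-L7 folder parameters at the new service graph
# by rewriting the graphNameOrLbl attribute. Sample XML uses placeholder
# names; only the graph names come from the guide's example.
import xml.etree.ElementTree as ET

def retarget_graph(xml_text, new_graph):
    """Return the XML with every graphNameOrLbl set to the new graph."""
    root = ET.fromstring(xml_text)
    for elem in root.iter():
        if "graphNameOrLbl" in elem.attrib:
            elem.set("graphNameOrLbl", new_graph)
    return ET.tostring(root, encoding="unicode")

saved = ('<vnsFolderInst name="Network" graphNameOrLbl="WebGraph1ArmStatic" '
         'ctrctNameOrLbl="web_ctrct" nodeNameOrLbl="ADC"/>')
posted = retarget_graph(saved, "WebGraph1ArmStatic120")
```

Leaving the contract and node labels untouched is what lets the new graph pick up the same traffic once it replaces the old one at the contract.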
9. At the contract level, replace the old service graph with the new service graph.
After the configuration has been submitted, the APIC will render a new virtual server on the new BIG-IP system
that uses the new device package. The APIC will also remove the virtual server configuration from the old BIG-IP
system. The service interruption will be about 30 seconds or more per graph.
Repeat the preceding steps until all graphs have been migrated from the old device package to the new package.
On the old BIG-IP system, all virtual servers, including the APIC partition, will be deleted. All service graphs will
now be migrated to the new BIG-IP system. The old BIG-IP system can be repurposed.
Troubleshooting Tips
APIC
Click here for a guide to APIC-specific troubleshooting.
For service insertion-specific troubleshooting, refer to the debug.log file on the APIC.
1. Log in to the APIC (from the CLI).
2. Enter cd /data/devicescript/F5.BIGIP.<device_package_version>/logs/.
3. View these two files for problems related to F5 service insertion:
debug.log
periodic.log
2015 Cisco | F5. All rights reserved.
BIG-IP
Troubleshooting information for BIG-IP LTM-related problems can be found in the /var/log directory.
Enter cd /var/log.
2015 Cisco and/or its affiliates. All rights reserved. Cisco and the Cisco logo are trademarks or registered
trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to
this URL: www.cisco.com/go/trademarks. Third-party trademarks mentioned are the property of their respective
owners. The use of the word partner does not imply a partnership relationship between Cisco and any other
company. (1110R)
F5 (NASDAQ: FFIV) provides solutions for an application world. F5 helps organizations seamlessly scale cloud,
data center, and software defined networking (SDN) deployments to successfully deliver applications to
anyone, anywhere, at any time. F5 solutions broaden the reach of IT through an open, extensible framework
and a rich partner ecosystem of leading technology and data center orchestration vendors. This approach lets
customers pursue the infrastructure model that best fits their needs over time. The world's largest businesses,
service providers, government entities, and consumer brands rely on F5 to stay ahead of cloud, security, and
mobility trends. For more information, go to f5.com.
C07-736160-00
12/15