
F5 BIG-IP Local Traffic Manager Service Insertion with Cisco Application Centric Infrastructure
Deployment Guide
December 2015
2015 Cisco | F5. All rights reserved.

Contents
Introduction
    Preface
    Audience
    Scope
    Cisco APIC Overview
Hardware and Software Support
    Cisco ACI
    F5 BIG-IP
F5 Device Package
    F5 Device Package Supported Features
    F5 Device Package Supported Functions
Cisco ACI L4-L7 Service Insertion Configuration
    Prerequisites
        Configuring BIG-IP
        Access the Management IP Address
        Install the License
        Integrating the Virtual Machine Manager When Using Virtual Machines
        Defining Fabric Access Policies to Communicate with F5 Hardware
        Creating Tenants
        Create the Private Network and Bridge Domain
        Create the Application Profile and EPG
    Import the F5 Device Package
    Create a Logical Device Cluster
        Creating the Physical BIG-IP Device Cluster
        Creating the Virtual BIG-IP Device Cluster
    Export the F5 Device Cluster from Tenant Common
    Create a Function Profile
        Configuring Service Parameters for L4-L7 Server Load Balancing
    Create the Service Graph Template
        Configuring the One Node Option
        Configuring the Two Nodes Option
        Configuring Dynamic Attached Endpoint Notification
    Apply the L4-L7 Service Graph Template
        Creating a One-Arm Mode Service Graph Template
        Creating a Two-Arm Mode Service Graph Template
    View the Configuration Pushed to BIG-IP
        One-Arm Configuration: Partition Creation; VLAN Creation; Static Route Creation; Self-IP Address Creation; Node Creation; Monitor Creation; Pool Creation; Virtual Server Creation; Overall View Using the Network Map
        Two-Arm Configuration: Partition Creation; VLAN Creation; Self-IP Address Creation; Node Creation; Monitor Creation; Pool Creation; Virtual Server Creation; Overall View Using the Network Map
    Assign a VMware vCenter Port Group for the Virtual ADC
    Two-Node Service Graph: Cisco ASA and F5
F5 BIG-IP Attached as an EPG
    Logical One-Arm Configuration
    Topology
    Configure BIG-IP
    Configure the APIC
F5 Device Package Upgrade: Standalone BIG-IP
Troubleshooting Tips
    APIC
    BIG-IP
    Health Monitoring Using the APIC
        Cluster Health Score
        Service Health Score
Configuration and Deployment Videos


Introduction
Preface

This document discusses how to deploy F5 BIG-IP Local Traffic Manager (LTM) with the Cisco Application Centric Infrastructure (Cisco ACI) in the data center, using a device package (an XML definition of BIG-IP LTM functions). It presents the most commonly deployed use cases for Layer 4 through Layer 7 (L4-L7) services in today's data center, integrated with the Cisco Application Policy Infrastructure Controller (APIC).
The topologies used in this document can be altered to reflect a setup and design that meets the customer's specific needs and environment.

Audience
This document is intended for use by network architects and engineers to aid in developing solutions for Cisco
ACI and F5 L4-L7 service insertion and automation.

Scope
This document defines the design recommendations for BIG-IP LTM integration into the Cisco ACI architecture to provide network services. Limited background information is included about related components that you need to understand to implement the solution.
More details about Cisco ACI design can be found in the Cisco ACI design guide.
Information about Cisco ACI service insertion can be found in the Cisco ACI white paper.
Details about F5 BIG-IP LTM can be found in the BIG-IP LTM documentation.

Cisco APIC Overview


The main characteristics of Cisco ACI include:
- Simplified automation by an application-based policy model
- Centralized management, automation, and orchestration
- Mixed-workload and migration optimization
- A secure and scalable multitenant environment
- Extensibility and openness: open source, open APIs, and open software flexibility for development and operations (DevOps) teams and ecosystem partner integration
- Investment protection (for both staff and infrastructure resources)


Figure 1: Cisco APIC: Central Point of Configuration Management and Automation

The APIC acts as a central point of configuration management and automation for L4-L7 services and tightly
coordinates service delivery, serving as the controller for network automation (Figure 1). A service appliance
(device) performs a service function defined in the service graph. One or more service appliances may be
required to render the services required by a service graph. A single service device can perform one or more
service functions.
The APIC enables the user to define a service graph or chain of service functions in the form of an abstract graph:
for example, a graph of the web application firewall (WAF) function, the load-balancing function, and the network
firewall function. The graph defines these functions based on a user-defined policy. One or more service
appliances may be needed to render the services required by the service graph. These service appliances are
integrated into the APIC using southbound APIs built into a device package that contains the XML schema of the
F5 device model. This schema defines the software version, functions provided by BIG-IP LTM (SSL termination,
Layer 4 server load balancing [SLB], etc.), parameters required to configure each function, and network
connectivity details. It also includes Python scripts that map APIC events to function calls to BIG-IP LTM.
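To make the packaging concrete: the device package is distributed as a zip archive that bundles the XML device model together with the Python scripts mentioned above. The layout below is only a representative sketch; the actual file names and contents vary by package version.

    F5-device-package.zip
        device_specification.xml   (XML device model: functions, parameters, connectivity)
        device_script.py           (Python handlers the APIC invokes for configuration events)
        ...                        (supporting Python libraries and vendor files)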

Hardware and Software Support


Cisco ACI

The joint solution uses the Cisco Nexus 9000 Series Switches (Figure 2).
Figure 2: Cisco ACI Solution


The solution described in this document requires the following components:
- Spine switches: The spine provides the mapping database function and connectivity among leaf switches.
- Leaf switches: The leaf switches provide physical and server connectivity and policy enforcement.
- APIC: The controller is the point of configuration for policies and the place at which statistics are archived and processed to provide visibility, telemetry, application health information, and overall management for the fabric. The APIC is a physical server appliance such as a Cisco UCS C220 M3 Rack Server.

The designs in this document have been validated on APIC Version 1.1(1o).
For more information about Cisco ACI hardware, please refer to the discussion at Cisco Application Centric Infrastructure.

F5 BIG-IP
F5 VIPRION, BIG-IP appliances, and BIG-IP Virtual Edition (VE) running software version 11.4.1 or later can be integrated with Cisco ACI (Figure 3).
Figure 3: Supported BIG-IP Platforms with Cisco ACI

If F5 BIG-IP VE is being used, VMware vSphere integration with the APIC must be performed before you use VE.
Cisco ACI can be closely integrated with the server virtualization layer. In practice, this integration means that
instantiating application policies through Cisco ACI will result in the equivalent constructs at the virtualization layer
(that is, port groups) being created automatically and mapped to the Cisco ACI policy.
After the integration with VMware vCenter is completed, the fabric or tenant administrator creates endpoint groups
(EPGs), contracts, and application profiles as usual. Upon creation of an EPG, the corresponding port group is
created at the virtualization level. The server administrator then connects virtual machines to these port groups.


F5 Device Package
F5 Device Package Supported Features
The F5 device package supports the following features:
- Dynamic endpoint attach and detach: Endpoints either can be prespecified in the corresponding EPGs (statically, at any time) or added dynamically as they are attached to Cisco ACI. Endpoints are tracked by a special endpoint registry mechanism of the policy repository. This tracking gives the APIC visibility into the attached endpoints, and the APIC passes this information to BIG-IP. From BIG-IP's point of view, an attached endpoint is a member of a pool, so the device package converts the APIC call it receives into the addition of a member to a particular pool.
- LTM configurable parameters:
  - Self-IP addresses: A self-IP address is an IP address on the BIG-IP system that you associate with a VLAN in order to access hosts in that VLAN. Through its netmask, a self-IP address represents an address space (that is, a range of IP addresses spanning the hosts in the VLAN) rather than a single host address. You can associate self-IP addresses not only with VLANs, but also with VLAN groups.
  - Static routes: As part of managing routing on a BIG-IP system, you add static routes for destinations that are not located on the directly connected network.
  - Listener: A virtual server is an IP address and port specification on the BIG-IP system. The BIG-IP system listens for traffic destined for that virtual server and then directs that traffic either to a specific host for load balancing or to an entire network.
  - Server pools: A load-balancing pool is a set of devices, such as web servers, that you group together to receive and process traffic. Instead of sending client traffic to the destination IP address specified in the client request, the LTM system sends the request to any of the servers that are members of that pool.
  - Monitors: The LTM system can monitor the health and performance of pool members and nodes that the LTM system supports.
  - iRules: An iRule is a powerful and flexible feature of the LTM system that you can use to manage your network traffic. Using syntax based on the industry-standard Tools Command Language (Tcl), the iRules feature not only allows you to select pools based on header data, but also allows you to direct traffic by searching on any type of content data that you define. Thus, the iRules feature significantly enhances your ability to customize your content switching to suit your exact needs.
  - Source Network Address Translation (SNAT) pool management: SNAT translates the source IP address in a connection to a BIG-IP system IP address that you define. The destination node then uses that new source address as its destination address when responding to the request.
  - HTTP redirect: This parameter redirects an HTTP request to a specified URL.
  - Client SSL offload: Client-side traffic refers to connections between a client system and the BIG-IP system. When you allow the BIG-IP system to manage client-side SSL traffic, the BIG-IP system terminates incoming SSL connections by decrypting the client request. The BIG-IP system then sends the request, in clear text, to a target server. Next, the BIG-IP system retrieves a clear-text response (such as a webpage) and encrypts it before sending it back to the client.
- Support for an active-standby high-availability model per APIC logical device cluster
- Configuration of BIG-IP licensing and out-of-band (OOB) management prior to APIC integration

For updated information about the parameters, please refer to the user guide for the particular version of the F5
device package being used.
You can download the F5 device package from https://downloads.f5.com.

F5 Device Package Supported Functions


The F5 device package supports the following functions:
- Service function: Virtual server
  - Layer 4 SLB
    - Act on data found in network- and transport-layer protocols (IP, TCP, FTP, and UDP).
    - Perform Layer 4 SLB with SSL offload.
  - Layer 7 SLB
    - Distribute requests based on data found in application-layer protocols such as HTTP.
    - Perform Layer 7 SLB with SSL offload.

Cisco ACI L4-L7 Service Insertion Configuration


Prerequisites
Review the design guide before proceeding to the L4-L7 service insertion configuration. The design guide will help you determine the L2-L3 networking elements and topology required for your use case. These elements need to be preconfigured before you begin the L4-L7 service insertion.

Configuring BIG-IP
Before BIG-IP can be used for L4-L7 service insertion, it needs access to the management network and it needs
to be licensed out of band.

Access the Management IP Address


Click here to learn more about how to assign a management IP address to BIG-IP.
One method for accessing the management IP address, if you have console access to BIG-IP, is to use the config command:
1. Log in to the console.
2. Run the config command on the command line. A wizard will appear to help you assign the management IP address.
3. Click <No> and continue with the wizard to enter your own IP address, netmask, and default route.

Install the License


You will need a registration key to apply the license.
From the GUI, follow these steps:
1. Log in to the GUI at https://<mgmt_ip_address>. The default username is admin, and the default
password is admin.
2. From the Setup Utility menu, choose Network. Then click Finished.


3. After the configuration is saved, choose System from the main menu. Then choose License.
4. Apply the registration key to license the system and follow the wizard.
Click the following links for more information about license installation:
- License installation using the command-line interface (CLI)
- License installation using the GUI

Integrating the Virtual Machine Manager When Using Virtual Machines


The APIC is a single-pane manager that automates the entire networking configuration for all virtual and physical workloads, including access policies and L4-L7 services. In the case of vCenter, all the networking functions of the VMware vSphere Distributed Switch (VDS) and port groups are performed using the APIC. The only task that a vCenter administrator needs to perform in vCenter is to place the virtual network interface cards (vNICs) in the appropriate port groups that the APIC created.
Click here for more information about how to integrate vCenter and manage virtual machine domains and
connectivity.

Defining Fabric Access Policies to Communicate with F5 Hardware


Access policies govern the operation of switch access ports that provide connectivity to resources such as storage, computing, Layer 2 and Layer 3 (bridged and routed) connectivity, virtual machine hypervisors, L4-L7 devices, and so on. If a tenant requires interface configurations other than those provided by the default link, Cisco Discovery Protocol, Link Layer Discovery Protocol (LLDP), Link Aggregation Control Protocol (LACP), and Spanning Tree Protocol policies, an administrator must configure access policies to enable such configurations on the access ports of the leaf switches.
The following entities need to be configured to add a new device to the fabric:
- Physical and external domains
- VLAN pool
- Attachable access entity profile
- Interface policies
- Switch policies

Click here for a detailed explanation of the steps for configuring these entities (look in the fabric connectivity section).

Creating Tenants
A tenant is a logical container or a folder for application policies. This container can represent an actual tenant, an
organization, or a domain, or it can be used simply for convenience in organizing information. A tenant represents
a unit of isolation from a policy perspective, but it does not represent a private network.


Create the Private Network and Bridge Domain


A context (private network) is a unique Layer 3 forwarding and application policy domain (a private network or
virtual routing and forwarding [VRF] instance) that provides IP address space isolation for tenants.
A bridge domain represents a Layer 2 forwarding construct within the fabric.
A tenant in the APIC maps to a partition in BIG-IP, and a context (VRF instance) is represented in BIG-IP as a route domain.
For a typical deployment for one tenant, define:
- One private network
- One or more bridge domains:
  - Two-arm mode: One tenant has two bridge domains (one representing the client subnet, and one representing the server subnet).
  - One-arm mode: One tenant has three bridge domains (one representing the client subnet, one representing the F5 subnet, and one representing the server subnet).
For more information about design decisions, refer to the F5 and Cisco design document.

Create the Application Profile and EPG


Application profiles contain one or more EPGs. Modern applications contain multiple components. For example,
an e-commerce application might require a web server, a database server, data located in a SAN, and access to
outside resources that enable financial transactions. The application profile contains as many (or as few) EPGs as
necessary that are logically related to the capabilities of an application.
EPGs can be organized according to any of the following:
- The application they provide (such as SAP applications)
- The function they provide (such as infrastructure)
- Where they are in the structure of the data center (such as the DMZ)
- Whatever organizing principle a fabric or tenant administrator chooses to use

For a typical deployment in two-arm mode, one tenant has two EPGs: one representing the client bridge domain, and one representing the server bridge domain.
For a typical deployment in one-arm mode, one tenant has three EPGs: one representing the client bridge domain, one representing the F5 bridge domain, and one representing the server bridge domain.
Click here for a detailed explanation of the EPG configuration steps.

Import the F5 Device Package


A critical part of service integration is installing the device package. After a device package has been uploaded,
the APIC is aware of the device features and functions, and configuration tasks can be moved from the device to
the APIC.
The F5 device package allows the APIC to define, configure, and monitor BIG-IP. You can think of the device
package as a tool or translator that allows the APIC and BIG-IP to communicate with each other.


To import the device package, follow these steps:
1. Navigate to L4-L7 Services > Packages > Actions > Import Device Package.
2. Click Submit to install the device package.
3. Expand the menu at the left to verify that the device package is installed.
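The same import can be scripted against the APIC REST API. The sketch below is illustrative, not an official tool: the APIC address, credentials, and file name are placeholders, and the /ppi/node/mo/.xml upload endpoint should be verified against your APIC version's REST documentation.

    import requests

    APIC = "https://apic.example.com"  # placeholder

    s = requests.Session()
    s.verify = False  # lab shortcut; use proper certificate validation in production

    # Authenticate; the APIC returns a session token that requests keeps as a cookie.
    s.post(APIC + "/api/aaaLogin.json",
           json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

    # Upload the device package zip to the package-import endpoint.
    with open("F5-device-package.zip", "rb") as f:
        r = s.post(APIC + "/ppi/node/mo/.xml", files={"name": f})
    r.raise_for_status()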

Create a Logical Device Cluster


A device cluster (also known as a logical device) is one or more concrete devices that act as a single device. A
device cluster has logical interfaces, which describe the interface information for the device cluster. During service
graph template rendering, function node connectors are associated with logical interfaces. The APIC allocates the
network resources (VLAN or Virtual Extensible LAN [VXLAN]) for a function node connector during service graph
template instantiation and rendering and programs the network resources on the logical interfaces.
An administrator can set up a maximum of two concrete devices per device cluster, operating in active-standby mode.
The logical device cluster holds the BIG-IP credentials that the APIC will use to communicate with BIG-IP.


The logical device cluster can be created in tenant common or in your own tenant. The advantage of creating it in tenant common is that the logical device cluster can then be exported to, and used by, multiple tenants. If you define the logical device cluster within your own tenant, it can be used only by that one tenant.
To add a logical device cluster, follow these steps:
1. Navigate to the tenant of your choice and choose L4-L7 Services > L4-L7 Devices.
2. Right-click L4-L7 Devices and then select Create L4-L7 Devices.
Depending on the mode and the equipment used, different parameters are used to create the logical device cluster.

Creating the Physical BIG-IP Device Cluster


1. Configure the settings as shown here:
Field | Comments
Name | Enter the name of the device cluster (alphanumeric characters only; no special characters).
Device Package | Select the BIG-IP device package from the pull-down menu (for example, F5-BIGIP-1.2.0).
Model | Select Unknown Manual.
Mode | Select HA Cluster for a high-availability pair of BIG-IP systems (select Single Node for a standalone deployment).
Device Type | Select Physical because you are configuring a physical BIG-IP device.
Context Aware | Select either Single or Multiple (select Single for one context; if you have more than one context, select Multiple).
Physical Domain | Select the APIC physical domain to which the BIG-IP ports are connected.
APIC to Device Management Connectivity | Select Out-Of-Band because you are using the management network for communication with BIG-IP devices.
User Name | Enter the user ID on the BIG-IP system that has administrator access.
Password | Enter the corresponding password.

2. Configure device 1 as shown here:
Field | Comments
Management IP Address | The unique IP address of the management port on BIG-IP device 1. This address should correspond to the address used to access the BIG-IP web GUI.
Management Port | Select HTTPS.
Connects To | Select Port if a physical port of the BIG-IP system is connected to the Cisco ACI fabric. Select PC if the physical ports are configured as a port channel from BIG-IP to the ACI fabric. Select vPC if the physical ports are connected as a virtual port channel (vPC) from BIG-IP to the ACI fabric.
Physical Interfaces: Name | For a port or port channel, enter the physical interfaces used, in X_Y notation (for example, 1_1). For a vPC, enter the trunk name defined in the BIG-IP system.
Physical Interfaces: Connects To | Select the Cisco ACI leaf node and port to which BIG-IP is connected.
Physical Interfaces: Direction | Provide the label for the port type. Use Provider for ports connected to the servers. Use Consumer for ports at which requests come into BIG-IP. Use Provider and Consumer for physical ports through which both server and client requests traverse (for example, a trunk port carrying a server VLAN and an outside client VLAN).

3. Configure device 2 as shown here (needed only if you are deploying BIG-IP in high-availability mode):
Field | Comments
Management IP Address | The unique IP address of the management port on BIG-IP device 2 (for example, the address used to access the BIG-IP web GUI).
Management Port | Select HTTPS.
Connects To | Select Port if a physical port of the BIG-IP system is connected to the Cisco ACI fabric. Select PC if the physical ports are configured as a port channel from BIG-IP to the Cisco ACI fabric. Select vPC if the physical ports are connected as a virtual port channel from BIG-IP to the ACI fabric.
Physical Interfaces: Name | For a port or port channel, enter the physical interfaces used, in X_Y notation (for example, 1_1). For a vPC, enter the trunk name defined in the BIG-IP system.
Physical Interfaces: Connects To | Select the Cisco ACI leaf node and port to which BIG-IP is connected.
Physical Interfaces: Direction | Provide the label for the port type. Use Provider for ports connected to the servers. Use Consumer for ports at which requests come into BIG-IP. Use Provider and Consumer for physical ports through which both server and client requests traverse (for example, a trunk port carrying a server VLAN and an outside client VLAN).

4. Configure the cluster as shown here:
Field | Comments
Management IP Address | Enter the IP address of either device 1 or device 2.
Management Port | Select HTTPS.


5. Click Next.
6. You need to specify some additional minimum parameters for the high-availability cluster to form. Select All Parameters and fill in the following value for the device host:
Host Name | Enter the unique fully qualified domain names (FQDNs) for the two devices (for example, F5-1.example.com and F5-2.example.com).
7. Configure high availability as shown here (needed only if you are deploying BIG-IP in high-availability mode; skip this step if you are deploying the solution in standalone mode):
High Availability Interface Name | Enter the interface to use for the high-availability heartbeat. This is the interface that has the direct link between the two devices (for example, 1_3).
High Availability Self IP Address | Enter a unique IP address for each device, to be used for the heartbeat (for example, 192.168.1.1 and 192.168.1.2).
High Availability VLAN | Enter a unique VLAN tag that BIG-IP can use for the heartbeat (for example, 300).
High Availability Self IP Netmask | Enter the network mask in dotted-decimal notation (for example, 255.255.255.0).
The following screen image shows an example of the completed page:


8. After you have completed these steps, the device cluster will be created. A few minutes may be needed
for all the configuration to be completed and the high-availability cluster to become stable. Depending on
the setup, a synchronization timeout may occur on the APIC. After the configuration is complete and
stable, verify it.
Here you can see that the device cluster is stable.

9. Log in to BIG-IP and confirm that the device group has been formed (it will contain one or two BIG-IP members, depending on whether BIG-IP is deployed in standalone or high-availability mode).
You can also see that the device is online, active, and synchronized (the latter applies only if you are deploying BIG-IP in high-availability mode).


At this point the device cluster is created and ready for use in service graphs in Cisco ACI.
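For repeatable deployments, the same logical device cluster can also be created by posting XML to the APIC REST API. The following is a heavily abbreviated sketch based on the APIC vns object model (vnsLDevVip, vnsCDev, and related classes); the names, addresses, and device package DN are placeholders, and a real payload carries additional logical-interface objects.

    import requests

    APIC = "https://apic.example.com"  # placeholder
    s = requests.Session()
    s.verify = False  # lab shortcut
    s.post(APIC + "/api/aaaLogin.json",
           json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

    # Logical device (cluster) with one concrete device and one interface.
    payload = """
    <vnsLDevVip name="BIGIP-Cluster" devtype="PHYSICAL" contextAware="single-Context">
      <vnsRsMDevAtt tDn="uni/infra/mDev-F5-BIGIP-1.2.0"/>
      <vnsCMgmt host="192.168.20.50" port="443"/>
      <vnsCCred name="username" value="admin"/>
      <vnsCCredSecret name="password" value="password"/>
      <vnsCDev name="BIGIP-1">
        <vnsCMgmt host="192.168.20.51" port="443"/>
        <vnsCCred name="username" value="admin"/>
        <vnsCCredSecret name="password" value="password"/>
        <vnsCIf name="1_1">
          <vnsRsCIfPathAtt tDn="topology/pod-1/paths-101/pathep-[eth1/1]"/>
        </vnsCIf>
      </vnsCDev>
    </vnsLDevVip>
    """
    r = s.post(APIC + "/api/node/mo/uni/tn-common.xml", data=payload)
    r.raise_for_status()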

Creating the Virtual BIG-IP Device Cluster


The configurations are similar to those for the physical BIG-IP cluster, with some slight differences for some of the
parameters.
1. Configure the settings shown here:
Field | Comments
Name | Enter the name of the device cluster (alphanumeric characters only; no special characters).
Device Package | Select the BIG-IP device package from the pull-down menu (for example, F5-BIGIP-1.2.0).
Model | Select Unknown Manual.
Mode | Select HA Cluster for a high-availability pair of BIG-IP VE instances (select Single Node for a standalone deployment).
Device Type | Select Virtual because you are configuring a virtual BIG-IP device.
Context Aware | Select Single.
VMM Domain | Select the APIC virtual machine manager (VMM) domain (the vCenter integration domain).
APIC to Device Management Connectivity | Select Out-Of-Band because you are using the management network for communication with BIG-IP devices.
User Name | Enter the user ID on the BIG-IP system that has administrator access.
Password | Enter the corresponding password.

2. Configure device 1 as shown here:
Field | Comments
Management IP Address | The unique IP address of the management port on BIG-IP device 1. This address should correspond to the address used to access the BIG-IP web GUI.
Management Port | Select HTTPS.
VM | Select the virtual machine that you want to use from the drop-down menu.
Connects To | Select Port if the port of the BIG-IP system is connected to the Cisco ACI fabric. Select PC if the ports are configured as a port channel from BIG-IP to the Cisco ACI fabric.
Physical Interfaces: Name | For a port or port channel, enter the interfaces used, in X_Y notation (for example, 1_1). For a vPC, enter the trunk name defined in the BIG-IP system.
Physical Interfaces: vNIC | Enter the network adapter for the corresponding BIG-IP interface (for example, for 1_1 enter network adapter 2, and for 1_2 enter network adapter 3).
Physical Interfaces: Connects To | Select the Cisco ACI leaf node and port to which the ESXi host running the BIG-IP VE is connected.
Physical Interfaces: Direction | Provide the label for the port type. Use Provider for ports connected to the servers. Use Consumer for ports at which requests come into BIG-IP. Use Provider and Consumer for ports through which both server and client requests traverse (for example, a trunk port carrying a server VLAN and an outside client VLAN).

3. Configure device 2 as shown here (needed only if you are deploying BIG-IP in high-availability mode):
Field | Comments
Management IP Address | The unique IP address of the management port on BIG-IP device 2. This address should correspond to the address used to access the BIG-IP web GUI.
Management Port | Select HTTPS.
VM | Select the virtual machine that you want to use from the drop-down menu.
Connects To | Select Port if the port of the BIG-IP system is connected to the Cisco ACI fabric. Select PC if the ports are configured as a port channel from BIG-IP to the Cisco ACI fabric.
Physical Interfaces: Name | For a port or port channel, enter the interfaces used, in X_Y notation (for example, 1_1). For a vPC, enter the trunk name defined in the BIG-IP system.
Physical Interfaces: vNIC | Enter the network adapter for the corresponding BIG-IP interface (for example, for 1_1 enter network adapter 2, and for 1_2 enter network adapter 3).
Physical Interfaces: Connects To | Select the Cisco ACI leaf node and port to which the ESXi host running the BIG-IP VE is connected.
Physical Interfaces: Direction | Provide the label for the port type. Use Provider for ports connected to the servers. Use Consumer for ports at which requests come into BIG-IP. Use Provider and Consumer for ports through which both server and client requests traverse (for example, a trunk port carrying a server VLAN and an outside client VLAN).
4. Configure the cluster as shown here:
Field | Comments
Management IP Address | Enter the IP address of either device 1 or device 2.
Management Port | Select HTTPS.

The remaining configuration is the same as for the physical appliance in high-availability mode.
Click here for more information about the APIC L4-L7 deployment.

Export the F5 Device Cluster from Tenant Common


If the logical device cluster is created in tenant common instead of in the user-defined tenant, it must be exported from tenant common to the user-defined tenant.
The logical device interface, a proxy object in the tenant that points to the logical device, is used when the device cluster is defined in the common tenant and other tenants want to use the service provided by that device cluster.
To export the device cluster, follow these steps:
1. Navigate to Tenant > common > L4-L7 Services > L4-L7 Devices and right-click.
2. Choose Export L4-L7 Devices, choose the L4-L7 device to export and the tenant to which you want to export it. Then click Submit.
3. Navigate to your tenant; under L4-L7 Services > Imported Devices, you will see the logical device cluster that you exported.


Create a Function Profile


A function profile is a template of one or more functions suitable for a specific application. Creating a function profile is equivalent to defining an abstract graph within a device package, with meaningful defaults for a function that defines the graph. The user can use a built-in function profile from the device package by referencing it at the time the service graph is defined. Use of a function profile reduces the number of parameters that a user needs to provide to instantiate a service function for a specific application. A device package developer can include as many function profiles as applicable.
If a suitable function profile is not provided in the device package by default, the user can create a function profile with predefined parameters and then apply that function profile to a template.
To create a function profile, follow these steps:
1. Navigate to Tenant > L4-L7 Services > Function Profiles and right-click.
2. Create a new function profile group and then, under that group, create a new function profile.


3. Choose Create L4-L7 Service Function Profile.


4. Give the function profile a name and select a profile from the drop-down list as a base (the drop-down list
includes all the function profiles that are provided in the device package by default).

5. Select the Parameters tab and select All in the Features list in the menu at the left. You will see all the supported device package parameters that the user can edit.
Note the difference between Device Config and Function Config parameters:
- Device Config parameters apply at the BIG-IP partition level and can be used by multiple virtual servers.
- Function Config parameters apply at the virtual server level and are unique to one particular virtual server. The function configuration forms a relationship with (references) the device configuration.
6. After all the needed parameters have been updated, click Submit. The function profile will be created.

Configuring Service Parameters for L4-L7 Server Load Balancing


The service parameters used for L4-L7 SLB are shown here:

Folder path | Parameter name | Description
Device Config/LocalTraffic/ClientSSL | Certificate | Client SSL certificate file name. It must be present on the BIG-IP system prior to deploying the service graph.
Device Config/LocalTraffic/ClientSSL | Key | Client SSL key file name. It must be present on the BIG-IP system prior to deploying the service graph.
Device Config/LocalTraffic/ClientSSL | KeySecret | Client SSL password used to decrypt the client SSL key file.
Device Config/LocalTraffic/FastL4 | IdleTimeout | Connection idle time (in seconds) required to make a connection a candidate for deletion.
Device Config/LocalTraffic/HTTP | Pipelining | Allow pipelining of HTTP requests.
Device Config/LocalTraffic/HTTP | XForwardedFor | Insert the X-Forwarded-For header in HTTP connections.
Device Config/LocalTraffic/Monitor | FailByAttempts | Number of times a monitor must fail in order to trigger a service-down event.
Device Config/LocalTraffic/Monitor | FrequencySeconds | Monitoring frequency, in seconds.
Device Config/LocalTraffic/Monitor | ReceiveText | Monitor receive string.
Device Config/LocalTraffic/Monitor | SendText | Monitor send string.
Device Config/LocalTraffic/Monitor | Type | Monitor type.
Device Config/LocalTraffic/Pool | ActionOnServiceDown | Action to take when a pool member has a state of Down.
Device Config/LocalTraffic/Pool | PoolType | This value can be STATIC or DYNAMIC. DYNAMIC is required when using dynamic endpoints, but you may not list pool members when the type is set to DYNAMIC.
Device Config/LocalTraffic/Pool | LBMethod | Load-balancing algorithm.
Device Config/LocalTraffic/Pool/Member | ConnectRateLimit | Connect rate limit.
Device Config/LocalTraffic/Pool/Member | ConnectionLimit | Connection limit.
Device Config/LocalTraffic/Pool/Member | IPAddress | Pool member IP address.
Device Config/LocalTraffic/Pool/Member | Port | Pool member port.
Device Config/LocalTraffic/Pool/Member | Ratio | Ratio for the load-balancing algorithm.
Device Config/LocalTraffic/Pool/PoolMonitor | PoolMonitorRel | Reference to the pool monitor to use for this pool definition. The path consists of the name of your LocalTraffic folder, a forward slash (/), and then the name of your Monitor folder. For example, LTM/monitor.
Device Config/LocalTraffic/iRule | iRuleName | Name of the iRule to use. The iRule must exist on the BIG-IP system prior to deploying a service graph.
Device Config/Network/ExternalSelfIP | Floating | Enable floating for the self IP. Must be YES or NO.
Device Config/Network/ExternalSelfIP | SelfIPAddress | External self IP address.
Device Config/Network/ExternalSelfIP | SelfIPMask | External self IP mask.
Device Config/Network/ExternalSelfIP | PortLockdown | Specifies which ports are open on the self IP.
Device Config/Network/InternalSelfIP | Floating | Enable floating for the self IP. Must be YES or NO.
Device Config/Network/InternalSelfIP | SelfIPAddress | Internal self IP address.
Device Config/Network/InternalSelfIP | SelfIPMask | Internal self IP mask.
Device Config/Network/InternalSelfIP | PortLockdown | Specifies which ports are open on the self IP.
Device Config/Network/Route | DestinationIPAddress | Destination IP address.
Device Config/Network/Route | DestinationNetmask | Destination network mask.
Device Config/Network/Route | NextHopIPAddress | Next-hop router IP address.
Device Config/Network/SNATPool | SNATIPAddress | Source IP address for SNAT.
Function Config/ClientSSL | ClientSSLRel | Reference to the ClientSSL object to use. The path consists of the name of your LocalTraffic folder, a forward slash (/), and then the name of your Device Config > LocalTraffic > ClientSSL folder. For example, LTM/ssl.
Function Config/FastL4 | FastL4Rel | Reference to the FastL4 object to use. The path consists of the name of your LocalTraffic folder, a forward slash (/), and then the name of your Device Config > LocalTraffic > FastL4 folder. For example, LTM/l4.
Function Config/HTTP | HTTPRel | Reference to the HTTP object to use. The path consists of the name of your LocalTraffic folder, a forward slash (/), and then the name of your Device Config > LocalTraffic > HTTP folder. For example, LTM/http.
Function Config/HTTPRedirect | HTTPRedirectRel | Reference to the HTTPRedirect object to use. The path consists of the name of your LocalTraffic folder, a forward slash (/), and then the name of your Device Config > LocalTraffic > HTTPRedirect folder. For example, LTM/http-redirect.
Function Config/Listener | DestinationIPAddress | Virtual service IP address.
Function Config/Listener | DestinationNetmask | Virtual service network mask.
Function Config/Listener | DestinationPort | Virtual service port.
Function Config/Listener | Protocol | Virtual server protocol; valid values are TCP and UDP.
Function Config/Listener | SourceIPAddress | Allowed client source address or network.
Function Config/Listener | SourceNetmask | Allowed client source netmask.
Function Config/Listener | ConnectionMirroring | Specifies whether connection mirroring is enabled.
Function Config/Listener | SourcePort | Specifies whether the client's source port should be preserved or modified.
Function Config/Listener | PersistenceProfileName | The name of the persistence profile to use. If no partition name is specified, it is assumed to be in /Common.
Function Config/Listener | FallbackPersistenceProfileName | The name of the fallback persistence profile.
Function Config/iRule | iRuleRel | Reference to the iRule object to use. The path consists of the name of your LocalTraffic folder, a forward slash (/), and then the name of your Device Config > LocalTraffic > iRule folder. For example, LTM/my-irule.
Function Config/NetworkRelation | NetworkRel | Reference to the Network object to use. The path consists of the name of your Device Config > Network folder.
Function Config/Pool | EPGConnectionRateLimit | Connect rate limit.
Function Config/Pool | EPGConnectionLimit | Connection limit.
Function Config/Pool | EPGDestinationPort | Destination port for endpoint pool members.
Function Config/Pool | EPGRatio | Ratio for the load-balancing algorithm.
Function Config/Pool | PoolRel | Reference to the Pool object to use. The path consists of the name of your LocalTraffic folder, a forward slash (/), and then the name of your Device Config > LocalTraffic > Pool folder. For example, LTM/pool.

Note the following details in the configuration:
- For two-arm mode, both external and internal self-IP addresses are configured.
- For one-arm mode, only the internal self-IP address is configured.
- For high availability, specify a floating IP address for both the internal and the external self-IP addresses.
- The pool type is defined as Static or Dynamic depending on whether dynamic attach notification is needed:
  - If the pool type is Dynamic, no node members need to be added.
  - If the pool type is Dynamic, define the pool port under Device Config on the Pool tab. (This setting defines the port on which a node member will listen in the case of a dynamically attached endpoint.)
  - If the pool type is Static, node members need to be added.
- To use HTTP redirect, double-click the HTTP Redirect folder under Device Config and then reference it in Function Config. Internally, after HTTP redirect is selected, that virtual server is assigned an F5-written redirect iRule. The assumption is that two service graphs are deployed: one listening on port 80, and one listening on port 443 (the virtual server listening on port 80 carries the redirect iRule, redirecting traffic to the virtual server listening on port 443).
- Any profiles (HTTP, persistence, etc.) and iRules used are referenced from the BIG-IP configuration. (The profile names and iRule names are assumed to have been added to BIG-IP out of band, before they are referenced in the APIC.)
- For SSL offload, use the APIC to enable SSL traffic management for client-side traffic. The primary way to control SSL network traffic is by configuring a client SSL profile: a type of traffic profile that enables the BIG-IP system to accept and terminate client requests sent using a fully SSL-encapsulated protocol. Certificate management is performed by BIG-IP, not by the APIC; the APIC only references the SSL profile that is created.

Managing SSL traffic requires you to complete these tasks:
- Install a key and certificate pair on the BIG-IP system to terminate client-side secure connections.
- While deploying the service graph or creating a function profile from the APIC:
  - Specify the certificate, key, and key secret for the client SSL profile. (These parameters are under Device Config > LocalTraffic > ClientSSL.)
  - Associate that profile with the virtual server. (You do this under Function Config > ClientSSL.)

For updated information about the parameters, refer to the user guide for the particular version of the F5 device
package being used.
You can download the F5 device package from https://downloads.f5.com.
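To make the folder and parameter paths in the table concrete, the following abbreviated XML sketches how the APIC stores these values as vnsFolderInst and vnsParamInst objects. The folder names (LTM, ssl), certificate and key file names, and the "any" scoping values are placeholders, and real payloads carry additional attributes.

    <!-- Device Config: a ClientSSL profile under the LocalTraffic folder -->
    <vnsFolderInst key="LocalTraffic" name="LTM"
                   graphNameOrLbl="any" nodeNameOrLbl="any" ctrctNameOrLbl="any">
      <vnsFolderInst key="ClientSSL" name="ssl"
                     graphNameOrLbl="any" nodeNameOrLbl="any" ctrctNameOrLbl="any">
        <vnsParamInst key="Certificate" name="cert" value="client.crt"/>
        <vnsParamInst key="Key" name="key" value="client.key"/>
        <vnsParamInst key="KeySecret" name="secret" value="mypassphrase"/>
      </vnsFolderInst>
    </vnsFolderInst>

    <!-- Function Config: reference that profile from the virtual server -->
    <vnsFolderInst key="ClientSSL" name="ClientSSL"
                   graphNameOrLbl="any" nodeNameOrLbl="any" ctrctNameOrLbl="any">
      <vnsCfgRelInst key="ClientSSLRel" name="rel" targetName="LTM/ssl"/>
    </vnsFolderInst>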


Create the Service Graph Template


Using the service graph template, you can define:
- The type of BIG-IP deployment (one-arm or two-arm)
- The number of nodes in the service graph (one or two)
- The device function (virtual server, SharePoint Server, etc.)
- The profile to be used (basic or a customized function profile)

Cisco ACI treats services as an integral part of an application. Any services that are required are treated as a
service graph that is instantiated on the Cisco ACI fabric from the APIC. Users define the service for the
application, and service graphs identify the set of network or service functions that are needed by the application.
Each function is represented as a node.
After the graph is configured in the APIC, the APIC automatically configures the services according to the service
function requirements that are specified in the service graph. The APIC also automatically configures the network
according to the needs of the service function that is specified in the service graph, which does not require any
changes in the service device.
A service graph is represented as two or more tiers of an application with the appropriate service function inserted
between them. A service graph is inserted between the source and destination EPGs by a contract (Figure 4).
Figure 4: Service Graph

When you create a service graph template, several template options are provided, based on the number of nodes
you need and the type, as described in the following sections.
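For reference, the GUI wizard stores the template as an abstract graph object that can also be posted directly to the APIC. The XML below is an intentionally incomplete sketch: the function (mFunc) name and device package DN are assumptions to verify against your installed package, and the vnsAbsConnection objects that wire the terminals to the node are omitted.

    <vnsAbsGraph name="web-graph">
      <vnsAbsTermNodeCon name="T1"/>
      <vnsAbsNode name="ADC" funcType="GoTo">
        <vnsRsNodeToMFunc tDn="uni/infra/mDev-F5-BIGIP-1.2.0/mFunc-Virtual-Server"/>
      </vnsAbsNode>
      <vnsAbsTermNodeProv name="T2"/>
    </vnsAbsGraph>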

Configuring the One Node Option


Configure the One Node option as follows:
1. Select One Node. You can then use any function profile that you have created.


2. Select an option. A template will be created.


Configuring the Two Nodes Option


Configure the Two Nodes option as follows:
1. Select Two Nodes.
2. Use this option to chain a firewall and an application delivery controller (ADC) in a single service graph.

Configuring Dynamic Attached Endpoint Notification


You can configure a static pool or a dynamic pool:
- Static pool: Using the APIC, you define pool members statically: that is, when you deploy the service graph, along with specifying other LTM parameters, you also assign the pool members (the pool type is defined as Static). These static pool members are then configured on BIG-IP by the APIC.
- Dynamic pool: For dynamic pool addition to work, attachment notification on the function connector (connecting to the server subnet) must be enabled on the APIC. In addition, you must define the pool as Dynamic when you deploy the service graph. The user is not expected to supply the pool member information.
  Endpoints are added dynamically as they are attached to Cisco ACI. Endpoints are tracked by a special endpoint registry mechanism of the policy repository. This tracking gives the APIC visibility into the attached endpoints, and the APIC passes this information to BIG-IP. From BIG-IP's point of view, each such endpoint is a member of a pool. The device package converts the APIC attached-endpoint notification that it receives into an iControl message (the Simple Object Access Protocol [SOAP] or representational state transfer [REST] API supported by BIG-IP) that BIG-IP understands, and then adds that member to the appropriate pool on BIG-IP.
  This feature is useful if the workloads and servers to be load balanced need to be increased or decreased based on time, load, or other triggers. It is also useful if you need to load balance across a large number (100 or more) of servers, because manually adding those servers to the pool is time consuming.
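For context, the pool-member addition that the device package performs on an attach notification is equivalent to the following iControl REST sketch; the BIG-IP address, credentials, partition, pool name, and member address are all placeholders.

    import requests

    BIGIP = "https://bigip.example.com"  # placeholder
    s = requests.Session()
    s.auth = ("admin", "admin")
    s.verify = False  # lab shortcut

    # Add the newly attached endpoint 10.0.1.20 on port 80 to the pool the APIC
    # created in the tenant partition. In iControl REST paths, "~" encodes "/",
    # so ~apic-tenant~web-pool means /apic-tenant/web-pool.
    r = s.post(BIGIP + "/mgmt/tm/ltm/pool/~apic-tenant~web-pool/members",
               json={"name": "10.0.1.20:80", "partition": "apic-tenant"})
    r.raise_for_status()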
After the template is created, depending on whether dynamic endpoint attachment is needed, one configuration
change is needed.
1. Expand the service graph template that is going to be used.
2. Choose Function Node ADC > Internal and select the Attachment Notification check box. (If you are
using static pools, then leave this check box disabled.)

After a service graph has been applied with a dynamically attached endpoint, you cannot change the parameters
on the LTM to reflect a static pool, or vice versa. If you need static instead of dynamic after the graph has been
deployed, then you will have to create a new graph. The existing graph cannot be modified.

Apply the L4-L7 Service Graph Template


Figure 5 shows the bridge domains and EPGs used in the examples that follow.
Figure 5: Bridge Domains and EPGs


Creating a One-Arm Mode Service Graph Template


In an APIC one-arm deployment, a bridge domain must be assigned to the F5 function node. This bridge domain
can be a separate bridge domain created for F5, or it can be an existing bridge domain (Figure 6).
Figure 6: One-Arm Topology

1. After the one-arm mode service graph template is created, right-click the template and click Apply L4-L7
Service Graph Template. Assign the Consumer EPG, Provider EPG, and the contract name. You can
choose to create a new contract or use an existing contract created earlier.


2. Click Next. On the next screen, choose a logical device cluster.

3. Select the bridge domain to be used for F5. Click the All Parameters tab and the All features link. Here is
where you would enter all the LTM-related configurable parameters. If you used a function profile to
create the graph template, the values specified in the function profile will be already present at this time. If
you did not use a function profile, then you will need to add all the required parameters at this point. After
the parameters have been added, click Finish.

Click here for a description of the above parameters exposed through the F5 device package.
If the subnets used for the consumer, the provider, and F5 are different, make sure a default route is specified on the client, the server, and the BIG-IP device as needed. A route on BIG-IP can be added through the APIC; for the client and server, you will need to configure the route out of band, outside the APIC.


4. At this point, all the configurations are pushed by the APIC to BIG-IP. You can view the status of the
graph under the tenant by choosing L4-L7 Services > Deployed Graph Instances (the Applied state indicates
that the graph has been pushed correctly with no faults).

Creating a Two-Arm Mode Service Graph Template


In an APIC two-arm deployment, no bridge domain is associated with the F5 L4-L7 device (Figure 7).
Figure 7: Two-Arm Topology

1. After the two-arm mode service graph template is created, right-click the template and click Apply L4-L7
Service Graph Template. Assign the Consumer EPG, Provider EPG, and the contract name. You can
create a new contract or use an existing contract created earlier.

2. Click Next. On the next screen, choose a logical device cluster.


3. Click the All Parameters tab and the All Features link. This is where you enter all the LTM-related
configurable parameters. If you used a function profile to create the graph template, the values specified
in the function profile will already be present; if not, add all the required parameters at this point.

4. If this is a high-availability setup, you need to specify the node in the cluster to which each self-IP
address belongs. To do this, expand the Network folder, double-click the external self-IP address and the
internal self-IP address, and then update the Apply to Specific Device parameter (choose a device from
the drop-down list).

5. After you have added or updated all the parameters, click Finish.
6. At this point, the configuration is pushed by the APIC to BIG-IP. You can view the status of the graph
under the tenant by choosing L4-L7 Services > Deployed Graph Instances (the Applied state indicates that
the graph has been pushed correctly with no faults).

If more than one graph needs to be deployed, use the exact same steps to apply the second graph.

View the Configuration Pushed to BIG-IP


After the graph has been applied and deployed, all the configuration is pushed to BIG-IP.
A tenant and context combination in the APIC corresponds to a partition and route domain combination in BIG-IP.
Therefore, a partition is created for each tenant, and within each partition, each deployed graph
corresponds to a virtual server created on BIG-IP.
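
A quick way to confirm this mapping is to list the partitions, route domains, and virtual servers that the APIC has created, as in the following iControl REST sketch; the management address and credentials are placeholders.

```python
import requests

BIGIP = "https://192.168.1.100"  # hypothetical management address
AUTH = ("admin", "admin")         # hypothetical credentials

def names(endpoint):
    """Return the object names in an iControl REST collection."""
    r = requests.get(f"{BIGIP}/mgmt/tm/{endpoint}", auth=AUTH, verify=False)
    r.raise_for_status()
    return [item["name"] for item in r.json().get("items", [])]

print("Partitions:     ", names("auth/partition"))
print("Route domains:  ", names("net/route-domain"))
print("Virtual servers:", names("ltm/virtual"))
```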
This section shows some of the configuration pushed to BIG-IP from the list of parameters.

One-Arm Configuration
Partition Creation
In the upper-right corner, click the drop-down list. You should see a partition created by the APIC.

VLAN Creation
On the Network tab, choose VLANs.

The VLANs created on BIG-IP correspond to those assigned by the APIC.

Static Route Creation


On the Network tab, choose Routes.

Self-IP Address Creation


On the Network tab, choose Self IPs.

Node Creation
On the Local Traffic tab, choose Nodes.

The nodes can be assigned statically, or they can be added dynamically (if you are using a dynamically attached
endpoint).

Monitor Creation
On the Local Traffic tab, choose Monitors and click the APIC-created monitor.

Pool Creation
On the Local Traffic tab, choose Pools and click the APIC-created pool.

Virtual Server Creation


On the Local Traffic tab, choose Virtual Servers.

Overall View Using the Network Map


On the Local Traffic tab, choose Network Map.

Two-Arm Configuration
Partition Creation
In the upper-right corner, click the drop-down list. You should see a partition created by the APIC.

VLAN Creation
On the Network tab, choose VLANs.

The VLANs created on BIG-IP correspond to those assigned by the APIC.


Self-IP Address Creation
On the Network tab, choose Self IPs.

Node Creation
On the Local Traffic tab, choose Nodes.

The nodes can be assigned statically, or they can be added dynamically (if you are using a dynamically attached
endpoint).

Monitor Creation
On the Local Traffic tab, choose Monitors and click the APIC-created monitor.

Pool Creation
On the Local Traffic tab, choose Pools and click the APIC-created pool.

Virtual Server Creation


On the Local Traffic tab, choose Virtual Servers.

Overall View Using the Network Map


On the Local Traffic tab, choose Network Map.

Assign a VMware vCenter Port Group for the Virtual ADC


The last step is to confirm and assign the correct vCenter port groups created by the APIC to the virtual
machines. If virtual form factors are used for the client, the server, and BIG-IP, the following configuration
needs to be checked. The port group name is a concatenation of the tenant name, the application profile
name, and the EPG name.
1. The APIC automatically assigns the port group to BIG-IP VE based on the information you provided about
the interfaces and direction (consumer and provider) while creating the logical device cluster for BIG-IP.

- One-arm mode: Both network adapters are assigned the same port group because in one-arm mode,
only one bridge domain and EPG is assigned to BIG-IP.

- Two-arm mode: The two network adapters are assigned different port groups because in two-arm mode,
one bridge domain and EPG is assigned as the consumer, and one bridge domain and EPG is assigned
as the provider.

2. Assign a client to a port group that belongs to the consumer EPG.

3. Assign a server to a port group that belongs to the provider EPG.


For dynamic endpoint attachment to work, make sure that the right port group is assigned by pinging the
default gateway of your bridge domain or the self-IP address of BIG-IP (to make the server active). After
the ping is successful, an attachment notification will be sent to the APIC, and the node will then be
added to BIG-IP as part of the pool (a pool member); you can verify this as in the sketch below.
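
A minimal verification sketch using iControl REST, assuming hypothetical partition and pool names, lists the pool members so that you can confirm the dynamically attached endpoint appeared:

```python
import requests

BIGIP = "https://192.168.1.100"  # hypothetical management address
AUTH = ("admin", "admin")         # hypothetical credentials

# List the members of the APIC-managed pool to confirm that the
# dynamically attached endpoint now appears. Partition and pool
# names are examples only.
url = f"{BIGIP}/mgmt/tm/ltm/pool/~apic_tenant1~web_pool/members"
resp = requests.get(url, auth=AUTH, verify=False)
resp.raise_for_status()
for member in resp.json().get("items", []):
    print(member["name"], member.get("state", "unknown"))
```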

Two-Node Service Graph: Cisco ASA and F5


Cisco and F5 have partnered to create a how-to guide describing the best practices for implementing highly
available virtual services in a Cisco ACI enhanced data center. This section discusses how to implement a
multiservice graph between groups of endpoints. It focuses on the Cisco Adaptive Security Virtual Appliance
(ASAv) and the BIG-IP LTM load balancer virtual appliance deployed in a VMware virtualized server farm
connected to a Cisco ACI fabric of leaf and spine switches.
Refer to the following white paper for information about how to configure a two-node service graph: Secure ACI
Data Centers: Deploying Highly Available Services with Cisco and F5.
Click here to view a video that shows the process for deploying a two-node graph.

F5 BIG-IP Attached as an EPG


In this deployment model, F5 is not inserted as an L4-L7 service. Instead, it is inserted into the network as an
endpoint.
BIG-IP is connected to one of the leaf nodes in the network in one-arm mode. All configuration on BIG-IP is
performed out of band. The APIC does not control any configuration on BIG-IP. The APIC in this case controls the
network and the way that traffic flows to and from BIG-IP.
The topology to connect F5 to the leaf node uses a one-arm configuration.

Logical One-Arm Configuration


In a one-arm configuration, only one interface on BIG-IP is physically connected to the Cisco ACI fabric. All
internal and external traffic uses this one physical interface.

Topology
Configure the access policies on the APIC as described earlier in this document to establish link-level connectivity
with BIG-IP (Figure 8).
Figure 8: BIG-IP Connectivity to the Cisco ACI Fabric

Configure BIG-IP
Follow these steps to configure BIG-IP:
1. Create a VLAN with an ID of X (for example, 255) on BIG-IP. This is the same VLAN ID that you will use
on the APIC for static binding.

2. Create a trunk on BIG-IP tagged with VLAN X on interface 1/1.Y (here Y = 3; this value refers to the
port on BIG-IP that is physically connected to the leaf). Verify that the interface is up and in the initialized
state. (See the sketch below for an equivalent scripted configuration.)
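
Because this configuration is performed out of band, it can also be scripted through iControl REST, as in the following sketch; the VLAN name, management address, and credentials are assumptions, with VLAN ID 255 on interface 1.3 as in the example above.

```python
import requests

BIGIP = "https://192.168.1.100"  # hypothetical management address
AUTH = ("admin", "admin")         # hypothetical credentials

# Create VLAN 255 and tag it to interface 1.3, the port that is
# physically connected to the ACI leaf. The VLAN name is an example only.
payload = {
    "name": "aci_vlan255",
    "tag": 255,
    "interfaces": [{"name": "1.3", "tagged": True}],
}
resp = requests.post(f"{BIGIP}/mgmt/tm/net/vlan", json=payload,
                     auth=AUTH, verify=False)
resp.raise_for_status()
```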

Configure the APIC


Follow these steps to configure the APIC:
1. Create three bridge domains:
- F5EPG-BD (belongs to subnet A)
- External-BD (belongs to subnet B)
- Internal-BD (belongs to subnet C)

2. Add an application profile and add three EPGs to it:
- Create an EPG for the providing application (Internal-EPG) belonging to Internal-BD.
- Create an EPG for the consuming application (External-EPG) belonging to External-BD.
- Create an EPG for F5 (F5-EPG) belonging to F5EPG-BD.

3. Create a static binding from F5-EPG to the leaf interface to which BIG-IP is physically connected
(port Y, using VLAN ID X as mentioned earlier). Specify the interface on the APIC using a policy group.

4. Create two contracts:
- Set up a contract between External-EPG (consumed) and F5-EPG (provided).
- Set up a contract between F5-EPG (consumed) and Internal-EPG (provided).

Because the two contracts have been created separately, the Cisco ACI fabric can route traffic between the
different networks.

F5 Device Package Upgrade: Standalone BIG-IP


If a major upgrade of the device package version is required, use the following steps to migrate the service graph
from the old device package to the new device package in a BIG-IP standalone scenario.
In a standalone scenario, a new BIG-IP system is required for graph migration. After all graphs have been
migrated to the new BIG-IP system, the old BIG-IP system can be repurposed.
The process involves creating a new graph using the new device package with the exact same parameters as the
old graph, and then replacing the old graph with the new graph at the contract level.
1. Import the new device package. The new device package will coexist with the old device package. In this
example, device package 1.1.1 is the old package, and device package 1.2.0 is the new package.

2. Create a new logical device using the new device package and the new BIG-IP system. Make sure that it
is in a stable state.
3. Create a new graph template using the new logical device. The new graph template is the same type as
the old graph template (for example, the old graph name is WebGraph1ArmStatic, and the new graph
name is WebGraph1ArmStatic120).
4. Create a new device selection policy using the same contract name and associate it with the new graph
and new logical device.

5. At the provider EPG level, copy the existing L4-L7 parameters, using the following settings:
- Content: Only Configuration
- Scope: Subtree
- Export Format: XML

6. Based on the saved configuration, create a new set of L4-L7 parameters that use the new service graph:
- Contract name: WebContract
- Old graph name: WebGraph1ArmStatic
- New graph name: WebGraph1ArmStatic120
- Search for managed object: ctrctNameOrLbl

Copied configuration:

7. Create a new XML post with the managed object graphNameOrLbl using the new graph name. Do not
modify the existing configuration. This is a new configuration using a new graph.

8. Through the APIC, post the new L4-L7 configuration (a sketch follows the note below).


Note: Migration does not happen at this time. Traffic is still passing through the old logical device using the old
device package.
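
For reference, steps 7 and 8 could look roughly like the following Python sketch. The XML body is a simplified stand-in for the full configuration subtree exported in step 5, with graphNameOrLbl updated to the new graph name; the APIC address, credentials, and tenant name are placeholders.

```python
import requests

APIC = "https://10.0.0.1"  # hypothetical APIC address
session = requests.Session()

# Authenticate to the APIC REST API (credentials are examples only);
# the session keeps the returned APIC-cookie for subsequent requests.
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login, verify=False)

# Simplified stand-in for the copied L4-L7 parameter configuration,
# with graphNameOrLbl pointing at the new graph. The real payload is
# the full subtree exported in step 5.
xml_body = """
<vnsFolderInst key="Network" name="Network"
               ctrctNameOrLbl="WebContract"
               graphNameOrLbl="WebGraph1ArmStatic120"
               nodeNameOrLbl="ADC">
</vnsFolderInst>
"""
resp = session.post(f"{APIC}/api/node/mo/uni/tn-Tenant1.xml",
                    data=xml_body, verify=False)
resp.raise_for_status()
```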

9. At the contract level, replace the old service graph with the new service graph.

After the configuration has been submitted, the APIC will render a new virtual server, using the new device
package, on the new BIG-IP system. The APIC will also remove the virtual server configuration from the old
BIG-IP system. The service interruption will be about 30 seconds or more per graph.
Repeat the preceding steps until all graphs have been migrated from the old device package to the new package.
On the old BIG-IP system, all virtual servers, including the APIC partition, will be deleted. All service graphs will
now be migrated to the new BIG-IP system. The old BIG-IP system can be repurposed.

Troubleshooting Tips
APIC
Click here for a guide to APIC-specific troubleshooting.
For service insertion-specific troubleshooting, refer to the debug.log file on the APIC.
1. Log in to the APIC (from the CLI).
2. Enter cd /data/devicescript/F5.BIGIP.<device_package_version>/logs/.
3. View these two files for problems related to F5 service insertion:
- debug.log
- periodic.log
BIG-IP
Troubleshooting information for BIG-IP LTM-related problems can be found in the /var/log directory.
- Log in to BIG-IP (from the CLI).
- Enter cd /var/log.
- View the ltm log files.

Click here for more information about BIG-IP troubleshooting.

Health Monitoring Using the APIC


The APIC uses health scores to communicate the high-level status of the components in the Cisco ACI
architecture. The BIG-IP device package for the APIC implements the health-score assessment (it determines
the value). Using the APIC GUI, you can examine specific parts of the service graph to obtain more detail so
that you can correct the problems affecting your application.
The BIG-IP device package for the APIC calculates two health scores.

Cluster Health Score


The device health score indicates BIG-IP cluster health based on an internal BIG-IP algorithm.
For the overall device, the health is a weighted measure of CPU use, memory use, and the number of
connections per second: as CPU and memory use go up, the health score goes down, and as the number of
connections per second goes up, the health score goes down.
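
The exact algorithm is internal to BIG-IP, but conceptually the text describes a weighted score of the following shape. The weights and normalization in this sketch are purely illustrative assumptions, not the published algorithm.

```python
def cluster_health_score(cpu_pct, mem_pct, conns_per_sec, max_cps=100000,
                         w_cpu=0.4, w_mem=0.4, w_cps=0.2):
    """Illustrative only: a weighted health score in [0, 100] that
    decreases as CPU use, memory use, and connection rate increase.
    The real BIG-IP algorithm and weights are not published."""
    cps_pct = min(conns_per_sec / max_cps, 1.0) * 100
    load = w_cpu * cpu_pct + w_mem * mem_pct + w_cps * cps_pct
    return round(100 - load, 1)

print(cluster_health_score(cpu_pct=30, mem_pct=50, conns_per_sec=20000))
# -> 64.0 (a lightly loaded device scores high; a saturated one scores low)
```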

Service Health Score


The service health score indicates virtual server health based on the state of the internal and external
interfaces and on pool availability.
For virtual servers, the health score is calculated from the pool status. If all pool members are reachable, the
health score for the virtual server is reported as 100. If one out of two pool members is down, the health score
for the virtual server is reported as 50. Each virtual server's health score is computed separately.

If a pool is not available, that status is reflected on the APIC.
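
The pool-based calculation is simple enough to state directly; the following sketch assumes the score is the percentage of reachable pool members, which matches the examples above.

```python
def service_health_score(members_up, members_total):
    """Service health score for a virtual server, computed as the
    percentage of reachable pool members (100 = all up, 50 = one of
    two down, 0 = pool unavailable)."""
    if members_total == 0:
        return 0
    return round(100 * members_up / members_total)

print(service_health_score(2, 2))  # -> 100
print(service_health_score(1, 2))  # -> 50
print(service_health_score(0, 2))  # -> 0
```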

Configuration and Deployment Videos


Click here to view deployment videos that cover the following topics:
- Installing the device package
- Integrating the VMM
- Creating a logical device cluster: VE high availability
- Creating a logical device cluster: VE standalone
- Creating a logical device cluster: physical high availability
- Creating a logical device cluster: physical standalone
- Creating a function profile
- Creating a one-node graph: ADC in two-arm mode deployment
- Creating a one-node graph: ADC in one-arm mode deployment
- Creating a two-node graph: Cisco ASA and F5 ADC
- Troubleshooting tips

© 2015 Cisco and/or its affiliates. All rights reserved. Cisco and the Cisco logo are trademarks or registered
trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to
this URL: www.cisco.com/go/trademarks. Third-party trademarks mentioned are the property of their respective
owners. The use of the word "partner" does not imply a partnership relationship between Cisco and any other
company. (1110R)
F5 (NASDAQ: FFIV) provides solutions for an application world. F5 helps organizations seamlessly scale cloud,
data center, and software defined networking (SDN) deployments to successfully deliver applications to
anyone, anywhere, at any time. F5 solutions broaden the reach of IT through an open, extensible framework
and a rich partner ecosystem of leading technology and data center orchestration vendors. This approach lets
customers pursue the infrastructure model that best fits their needs over time. The world's largest businesses,
service providers, government entities, and consumer brands rely on F5 to stay ahead of cloud, security, and
mobility trends. For more information, go to f5.com.
C07-736160-00 12/15