
OpenStack Networking Topics

6/3/2013

Jeffery Padgett (Gap Inc Infrastructure)


@jbpadgett Cool Geeks DevOps Meetup

OpenStack Install
There has been a lot of information on the web about the OpenStack project over the past few years, including many install guides. Plenty of folks and organizations have also made the OpenStack install easy by rolling up web-served shell scripts, cookbooks, manifests, and even whole distros for standing up your own OpenStack.

Assumptions
Most of the instructions, scripts, and distros for OpenStack installs on the web have to make some assumptions about you:

Assumption #1:
You are a typical impatient DEV that wants it up NOW. Figure out details later.

Assumption #2:
You are an OpenStack n00b and can't yet comprehend what the heck you are getting into.

Assumption #3:
You are playing around in a lab or local dev environment using VMs on Vagrant or similar.

Assumption #4:
You have smart engineers and network folks to help you out when you intend to use OpenStack on
real hardware with real users. In other words, you know enough to dig DEEP on complex topics.

What is Missing?
With all these nice people and organizations on the web making OpenStack easy to install, there is something critical missing:

1. Proper planning for networking
2. Adequate instructions for configuring networking in OpenStack

What are the key components in OpenStack Networking?


First things first. Let's break down all the areas where networking will affect your install:

Hypervisor Networking

L2/L3 Physical Network Switch Configuration

OpenStack Network Architecture Options

Desired L3 Routing & IP Addressing Models

OpenStack Tenant Models & Guest Instances Networking

OpenStack Networking Ecosystem Design Dependencies


Let's just be upfront here on OpenStack networking: this is an ecosystem stack. One design choice affects the others, and a lack of planning can and will bite you.
Tenant Model Subnet Planning

OpenStack Network Models

L2 & L3 network switch configs

Hypervisor Networking

Hypervisor Networking
There are several types of servers in OpenStack. Typically they are all deployed as hypervisors (though this is not required for all of them), and they should all be configured with as robust a network design as possible.

Controller Nodes

Compute Nodes
Network Nodes

Hypervisor Networking
OpenStack Networks and Physical Hardware
Here is the reference architecture taken straight from the OpenStack documentation.

Hypervisor Networking
Controller Nodes
Controller nodes are the brains of an OpenStack deployment. They communicate with all OpenStack nodes. Networking issues that are important here are:

OpenStack API Network (public/service-facing)


This is a service VLAN/IP range, meaning it should be reachable via the internet or, in the case of private/corp network installs, should use an IP range that is L3 reachable on the LAN/WAN.

OpenStack Mgmt Network (backend facing)


This is a backend facing VLAN/IP range, meaning it only needs to be reachable by all compute, network, and controller nodes within a given availability zone/data center. This means it can be a simple L2 VLAN.

Hypervisor Networking
Network Nodes
Network nodes are special nodes within OpenStack. They allow the private networks that tenant guest instances use to be either directly reachable or translated via NAT. In other words, they are the networking brains behind OpenStack, creating bridges among all the compute nodes and the guests they host.

Network nodes have changed significantly with the introduction of the Quantum project. Quantum replaced the legacy Nova-Network networking model in OpenStack. Quantum has become more robust and reliable with the Grizzly release, but it still has an HA limitation (feature parity with Nova-Network multi-host mode). Basically, nothing happens in OpenStack without a properly functioning and configured network node. Networking issues that are important here are:

OpenStack External Network (public/service-facing)


This is a service VLAN/IP range, meaning it should be reachable via the internet or, in the case of private/corp network installs, should use an IP range that is L3 reachable on the LAN/WAN.

OpenStack Mgmt Network (backend facing)


This is a backend facing VLAN/IP range, meaning it only needs to be reachable by all compute, network, and controller nodes within a given availability zone/data center. This means it can be a simple L2 VLAN.

OpenStack Data Network (backend facing)


This is a backend facing VLAN/IP range, meaning it only needs to be reachable by all compute and network nodes within a given availability zone/data center. This means it can be a simple L2 VLAN.

Hypervisor Networking
Compute Nodes
Compute nodes are where all the guest virtual machines, or instances, live in an OpenStack deployment. The compute nodes must be able to speak on the network to the network nodes. In the legacy Nova-Network days with multi-host mode, you would find the compute and network node services running on every single compute node for HA. This will likely be the path for future releases of Quantum as well, since it provided a robust HA model for deployment. Networking issues that are important here are:

OpenStack Mgmt Network (backend facing)


This is a backend facing VLAN/IP range, meaning it only needs to be reachable by all compute, network, and controller nodes within a given availability zone/data center. This means it can be a simple L2 VLAN.

OpenStack Data Network (backend facing)


This is a backend facing VLAN/IP range, meaning it only needs to be reachable by all compute and network nodes within a given availability zone/data center. This means it can be a simple L2 VLAN.

Sample Hypervisor Networking Design Template
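As one purely illustrative example of such a template, here is a possible layout assuming the four OpenStack networks described above are trunked to each hypervisor. All VLAN IDs, subnets, and interface names below are hypothetical:

VLAN 50   10.10.50.0/24    API network        controller nodes           bond0.50  / br50
VLAN 100  10.10.100.0/24   Mgmt network       all OpenStack nodes        bond0.100 / br100
VLAN 101  10.10.101.0/24   Data network       compute + network nodes    bond0.101 / br101
VLAN 200  10.10.200.0/24   External network   network nodes              bond0.200 / br200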

OpenStack Networking Architecture Models


Before you can select an appropriate OpenStack networking architecture model, you need to understand the key components. Each of these components provides services that you might expect, and some that you might mistakenly (at least for today) try to do yourself.

Core OpenStack Quantum networking components:

Network - An isolated L2 segment, analogous to a VLAN in physical networking.

Subnet - A block of v4 or v6 IP addresses and associated configuration state.

Port - A connection point for attaching a single device, such as the NIC of a virtual server, to a virtual network. Also describes the associated network configuration, such as the MAC and IP addresses to be used on that port.

Plugin Agent - Local vSwitch configuration.

DHCP Agent - DHCP to tenants.

L3 Agent - Layer 3 + NAT for guest instances.
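To make Network, Subnet, and Port concrete, here is a minimal sketch using the Grizzly-era quantum CLI; the network name and CIDR are made up for illustration:

quantum net-create demo-net
quantum subnet-create demo-net 192.168.50.0/24
quantum port-create demo-net
quantum port-list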

OpenStack Networking Architecture Models


Plugin Agent - Local vSwitch configuration.
Depending on your desired configuration, you can choose to use Open vSwitch for handling your virtual distributed switching traffic, or you can opt to use other vSwitch providers via a plugin architecture. Some prefer to keep all switching control-plane operations managed by a particular platform and/or group. This is completely up to you. For the purposes of this discussion, we will assume the Open vSwitch approach. Here are the vDS plugins supported by OpenStack today:

OpenStack Networking Architecture Models


OpenStack (with Quantum) offers 5 primary network architecture design models:

Single Flat Network

Multiple Flat Network

Mixed Flat & Private Network

Provider Router with Private Networks

Per-Tenant Routers with Private Networks

OpenStack Networking Architecture Models


Single Flat Network
VM IP addresses exposed to the LAN.

Flat DHCP Manager gives out public IP addresses via DNSMasq on the Network node.

Uses an external L3 router.

No floating IPs (NAT) here since all IPs dished out are public.

OpenStack Networking Architecture Models


Multiple Flat Network
Many L2 VLANs implemented with one or multiple Tenants.

VM IP addresses exposed to LAN.


Flat DHCP Manager gives out public IP addresses via DNSMasq on the Network node.

Uses external L3 router.


No floating IPs (NAT) here since all IPs dished out are public.

OpenStack Networking Architecture Models


Mixed Flat & Private Network
Private VLANs and public VLANs.

Flat DHCP Manager gives out IP addresses via DNSMasq on the Network node.

A VM can perform NAT & routing from private nets to public nets.

Tenants can create local Networks just for them.

OpenStack Networking Architecture Models


Provider Router + Private Networks
Floating Public IPs

Tenants can create local Networks just for them.


A virtual or physical provider router does NAT for private IPs to public ones using SNAT.

Flat DHCP Manager gives out IP addresses via DNSMasq on the Network node.
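A rough sketch of wiring this model up with the Grizzly-era quantum CLI, assuming an external network named ext-net and a tenant subnet already exist; names and the bracketed IDs are placeholders:

quantum router-create provider-router
quantum router-gateway-set provider-router ext-net
quantum router-interface-add provider-router <private-subnet-id>
quantum floatingip-create ext-net
quantum floatingip-associate <floating-ip-id> <instance-port-id>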

OpenStack Networking Architecture Models


Per-Tenant Routers + Private Networks
Floating Public IPs

Tenants can create local Networks just for them.

Each tenant gets a virtual or physical router doing NAT for private IPs to public ones using SNAT.

Flat DHCP Manager gives out IP addresses via DNSMasq on the Network node.

The provider still provides a physical router for all public IPs.

Single & Multiple Flat Networking Architecture

Mixed Flat & Private Network

Provider-Router+Private-Networks

Per-Tenant-Routers+Private-Networks

OpenStack Networking Architecture Model Example


Multiple Flat Network Architecture Model EXAMPLE
A simple reference OpenStack network architecture for a typical private company with its own servers and network equipment.

You control the network connectivity.

Your private network is really your private VLANs, which are routable within your enterprise.

Mapping VLANs and subnets to tenants becomes conceptually easier to grok.

Sample Tenant-Guest Instances Networking Design Template

OpenStack Networking & Tenant Consumption Models


Key design concept: your tenant model (or consumption model, as I call it) always maps back to a VLAN and associated IP subnets. So plan to reserve some ranges of VLANs and IP subnets according to how you think you may want to deploy machine instances.
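For example, a reservation plan might look like the following (purely illustrative; the VLAN IDs and CIDRs are hypothetical):

VLAN 300        10.10.30.0/24                        shared enterprise infra tenant
VLAN 301        10.10.31.0/24                        shared dev team tenant
VLANs 310-319   10.10.210.0/24 - 10.10.219.0/24      per-developer tenants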

OpenStack Networking & Tenant Consumption Models


Consumption/Tenant Model Examples:
Monolithic Enterprise Infra Tenant Model
One or multiple networks with associated VLANs/subnets.

This tenant builds things on behalf of everyone when end-user self-service provisioning is not as important as machines just getting built for customers. Think of this as using an OpenStack tenant like traditional VMware vCenter.

Group/ Org Tenant with many Users Model


All users for the tenant share a network (or networks) and its associated VLAN/subnet(s).

This provides a good way to give a bunch of developers on a team the ability to build machines within a shared tenant. They are simply users within the tenant. There is no security segmentation among instances built in this model, meaning a developer can access any other developer's box. This is likely not to matter for a team working together.

Per Group/ Per Developer Tenant Model


Give out a full tenant account to every group or developer. With each team/developer getting their own tenant account, their machines are segmented off via tenant security. If used internally, this can be seen as wasteful for subnet allocations unless the subnets are small. Each tenant gets their own network and associated VLAN/subnet(s).

OpenStack Networking & Tenant Consumption Models


Consumption/Tenant Models and IP Address Planning (IPAM)
There is an upfront design choice when creating networks: how to map subnets to a given tenant account. You need to be careful to pass the --shared argument when creating a network if you intend to have multiple tenants on the same IP subnet. If you don't pass this argument, Quantum will assume a single tenant will use that network's subnet(s).

SHARED TENANT ACCOUNT MODEL

Can use dedicated subnets

Can use shared subnets

Usually a larger allocation of IP addresses or subnets

DEDICATED TENANT ACCOUNT MODEL

Can use dedicated subnets

Can use shared subnets

Can be a large or small allocation of IP addresses or subnets

OpenStack Networking & Tenant Consumption Models Example


Network Allocations using different Tenant Consumption Models:
Tenant Yoda (shared tenant, multiple users, multiple flat network model)
quantum net-create dagobah-net1 --shared --provider:network_type flat --provider:physical_network dagobah-datacenter1 --router:external=True
quantum subnet-create dagobah-net1 10.10.10.0/24
quantum subnet-create dagobah-net1 10.10.20.0/24

quantum net-create dagobah-net2 --shared --provider:network_type flat --provider:physical_network dagobah-datacenter1 --router:external=True
quantum subnet-create dagobah-net2 10.10.30.0/24
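By contrast, a dedicated tenant network would be created without the --shared flag and scoped to that tenant. A minimal sketch (the tenant name, ID placeholder, and addressing are hypothetical):

Tenant Luke (dedicated tenant, private network model)
quantum net-create luke-net1 --tenant-id <luke-tenant-id>
quantum subnet-create luke-net1 10.10.40.0/24 --tenant-id <luke-tenant-id>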

OpenStack Hypervisor Networking

Hypervisor Networking Workflow:

1. Configure the network switches with dot1q VLAN trunking (see the sketch after this list).

2. Configure dot1q bond interfaces on each hypervisor for the four required OpenStack VLANs, plus any other functional VLANs you may want (iSCSI, NAS, etc.), as subinterfaces.

3. Configure bridge interfaces for each of the VLAN subinterfaces.
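For step 1, a minimal sketch of the dot1q trunk on a Cisco-style switch (the port-channel number and VLAN IDs are hypothetical, and exact syntax varies by platform and OS):

interface port-channel10
 description hypervisor1 bond0
 switchport mode trunk
 switchport trunk allowed vlan 50,100,101,200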

OpenStack Hypervisor Networking


VLANS & BOND INTERFACES
* VLANs are dot1q trunked from the switches to the server NIC ports.
* For every VLAN on which the server needs an IP interface, create a bond.x software NIC.
* For any VLAN that just has guest VMs inside it, where the hypervisor does not need an IP interface, a bond.x software NIC is not needed.

NOTES:
For a channel bonding interface to be valid, the kernel module must be loaded. To ensure that the module is loaded when the channel bonding interface is brought up, create a new file as root named bonding.conf in the /etc/modprobe.d/ directory. Note that you can name this file anything you like as long as it ends with a .conf extension. Parameters for the bonding kernel module must be specified as a space-separated list in the BONDING_OPTS="bonding parameters" directive in the ifcfg-bondN interface file. Do not specify options for the bonding device in /etc/modprobe.d/bonding.conf, or in the deprecated /etc/modprobe.conf file.

Add this to bonding.conf:

alias bond0 bonding

OpenStack Hypervisor Networking


LINUX BRIDGES & BONDING
Bonding makes two physical NICs act as one from the Linux perspective. Cisco Virtual Port Channels (vPC) make both physical NICs active at the same time on two different network switches. Linux bridges allow KVM virtual machines with virtual NICs to share the same logical bonded NIC with the host OS.

Do not create Linux bridges between two different VLANs. Create one Linux bridge per VLAN (avoid mixing different VLANs in a single bridge), e.g. ifcfg-br100 and ifcfg-br101.

This is different from Linux bond interfaces, which facilitate dot1q VLAN trunking for multiple VLANs. Linux bridges allow the Linux OS to share a bond0.xxx VLAN interface with KVM virtual machines for guest VM networking.

ifcfg-br100
SLAVE="bond0.100""

OpenStack Hypervisor Networking


Hypervisor ifcfg files for Enterprise Linux Distros
/etc/sysconfig/network-scripts/

Physical NICs
ifcfg-em1
ifcfg-em2

Bond Interfaces
ifcfg-bond0
ifcfg-bond0.100
ifcfg-bond0.101

Bridge Interfaces
ifcfg-br100
ifcfg-br101

Physical Interfaces Examples


ifcfg-em1
DEVICE="em1"
MASTER="bond0"
SLAVE="yes"
NM_CONTROLLED="no"
ONBOOT="yes"
TYPE="Ethernet"
BOOTPROTO="none"
IPV6INIT="no"
HWADDR="00:00:00:00:00:00"

ifcfg-em2
DEVICE="em2"
MASTER="bond0"
SLAVE="yes"
NM_CONTROLLED="no"
ONBOOT="yes"
TYPE="Ethernet"
BOOTPROTO="none"
IPV6INIT="no"
HWADDR="00:00:00:00:00:00"

Bond Interfaces Examples


ifcfg-bond0
DEVICE="bond0"
BOOTPROTO="none"
ONBOOT="yes"
TYPE="Ethernet"
BONDING_OPTS="mode=4 miimon=100"
IPV6INIT="no"
MTU="9000"

ifcfg-bond0.100
DEVICE="bond0.100"
ONBOOT="yes"
VLAN="yes"
TYPE="Ethernet"
BOOTPROTO="static"
BRIDGE="br100"

ifcfg-bond0.101
DEVICE="bond0.101"
ONBOOT="yes"
VLAN="yes"
TYPE="Ethernet"
BOOTPROTO="static"
BRIDGE="br101"

Bridge Interfaces Examples


ifcfg-br100
DEVICE="br100"
ONBOOT="yes"
VLAN="yes"
TYPE="Bridge"
SLAVE="bond0.100"
HOSTNAME="yoda1.dagobah.com"
IPADDR="10.10.100.99"
NETMASK="255.255.255.0"
DNS1="8.8.8.8"
GATEWAY="10.10.100.1"
IPV6INIT="no"
MTU="1500"

ifcfg-br101
DEVICE="br101"
ONBOOT="yes"
VLAN="yes"
TYPE="Bridge"
SLAVE="bond0.101"
IPADDR="10.10.101.99"
NETMASK="255.255.255.0"
DNS1="8.8.8.8"
GATEWAY="10.10.101.1"
IPV6INIT="no"
MTU="1500"

Thanks!
@jbpadgett http://Padgeblog.com
