6/3/2013
OpenStack Install
There is a lot of information on the web from the past few years about the OpenStack project. There are in fact many install guides. There are also many folks and organizations that have made the OpenStack install easy by rolling up web-served shell scripts, cookbooks, manifests, and even distros for getting your own OpenStack up and running.
Assumptions
Most of the instructions, scripts, and distros for OpenStack installs on the web have to make some assumptions about you:
Assumption #1:
You are a typical impatient DEV that wants it up NOW. Figure out the details later.
Assumption #2:
You are an OpenStack N00b and can't comprehend what the heck you are getting into yet.
Assumption #3:
You are playing around in a lab or local dev environment using VMs on Vagrant or similar.
Assumption #4:
You have smart engineers and network folks to help you out when you intend to use OpenStack on real hardware with real users. In other words, you know enough to dig DEEP on complex topics.
What is Missing?
With all these nice people and organizations on the web making OpenStack installs easy, two things are still usually missing:
1. Proper planning for networking
2. Adequate instructions for configuring networking in OpenStack
Hypervisor Networking
There are several types of servers in OpenStack. Typically they are all deployed as hypervisors (though this is not required for all of them). All of them should be configured with as robust a network design as possible.
Controller Nodes
Compute Nodes
Network Nodes
OpenStack Networks and Physical Hardware
Here is the reference architecture taken straight from the OpenStack documentation.
Controller Nodes
Controller nodes are the brains of an OpenStack deployment. They communicate with all OpenStack nodes. Networking issues that are important here are:
Network Nodes
Network nodes are special nodes within OpenStack. They allow the private networks that tenant guest instances use to either be directly reachable or translated via NAT. In other words, they are the networking brains behind OpenStack, creating bridges among all the compute nodes and the guests they host.
Network nodes have changed significantly with the introduction of the Quantum project. Quantum replaced the legacy Nova-Network networking model in OpenStack. Quantum has become more robust and reliable with the Grizzly release, but it still has an HA limitation (it lacks feature parity with Nova-Network multi-host mode). Basically, nothing happens in OpenStack without a properly functioning and configured network node. Networking issues that are important here are:
Compute Nodes
Compute nodes are where all the guest virtual machines, or instances, live in an OpenStack deployment. The compute nodes must be able to reach the network nodes over the network. In the legacy Nova-Network days with multi-host mode, you would find the compute and network node services running on every single compute node for HA. This will likely be the path for future releases of Quantum as well, since multi-host mode provided a robust HA model for deployment. Networking issues that are important here are:
To understand Quantum networking, you need to understand the key components. Each of these components provides services that you might expect, and some that you might mistakenly (at least for today) try to do yourself.
Core OpenStack Quantum networking components:
Network - An isolated L2 segment, analogous to a VLAN in physical networking.
Subnet - A block of v4 or v6 IP addresses and associated configuration state.
Port - A connection point for attaching a single device, such as the NIC of a virtual server, to a virtual network. Also describes the associated network configuration, such as the MAC and IP addresses to be used on that port.
Plugin Agent - Local vswitch configuration.
DHCP Agent - DHCP to tenants.
L3 Agent - L3 routing and NAT between tenant networks and external networks.
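As a quick illustration of how these components map to the CLI, here is a minimal sketch using the quantum client (the names demo-net, demo-subnet, and demo-port and the CIDR are placeholders, not from the original deck):
quantum net-create demo-net
quantum subnet-create demo-net 10.10.40.0/24 --name demo-subnet
quantum port-create demo-net --name demo-port
The two common ways to route these tenant networks to the outside world are: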
Provider-Router+Private-Networks
Per-Tenant-Routers+Private-Networks
This tenant builds things on behalf of everyone when end-user self-service provisioning is less important than machines simply getting built for customers. Think of this as using an OpenStack tenant like a traditional VMware vCenter.
It provides a good way to give a team of developers the ability to build machines within a shared tenant; each developer is simply a user within the tenant. There is no security segmentation among instances built in this model, meaning a developer can access any other developer's box. This is unlikely to matter for a team working together.
quantum net-create dagobah-net2 --shared --provider:network_type flat --provider:physical_network dagobah-datacenter1 --router:external=True
quantum subnet-create dagobah-net2 10.10.30.0/24
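The commands above build the flat provider network used in the Provider-Router+Private-Networks style shared tenant. For the Per-Tenant-Routers+Private-Networks model, a rough sketch (the router, network, and subnet names and the CIDR are illustrative assumptions, not from the original deck) is for each tenant to create its own router and private network, attach them, and point the router at the external network created above:
quantum router-create dagobah-router1
quantum net-create dagobah-tenant-net
quantum subnet-create dagobah-tenant-net 192.168.50.0/24 --name dagobah-tenant-subnet
quantum router-interface-add dagobah-router1 dagobah-tenant-subnet
quantum router-gateway-set dagobah-router1 dagobah-net2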
NOTES:
For a channel bonding interface to be valid, the kernel module must be loaded. To ensure that the module is loaded when the channel bonding interface is brought up, create a new file as root named bonding.conf in the /etc/modprobe.d/ directory. Note that you can name this file anything you like as long as it ends with a .conf extension. Parameters for the bonding kernel module must be specified as a space-separated list in the BONDING_OPTS="bonding parameters" directive in the ifcfg-bondN interface file. Do not specify options for the bonding device in /etc/modprobe.d/bonding.conf, or in the deprecated /etc/modprobe.conf file.
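As a minimal sketch of those two pieces (the alias line and the bonding mode/miimon values here are example assumptions, not taken from the deck):
/etc/modprobe.d/bonding.conf
alias bond0 bonding
ifcfg-bond0
DEVICE="bond0"
ONBOOT="yes"
BOOTPROTO="none"
NM_CONTROLLED="no"
BONDING_OPTS="mode=802.3ad miimon=100"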
Do not create Linux bridges between two different VLANs. Create one Linux bridge per VLAN (avoid mixing different VLANs in a single bridge), e.g. ifcfg-br100 and ifcfg-br101.
This is different from Linux bond interfaces, which facilitate dot1q VLAN trunking for multiple VLANs. Linux bridges allow the Linux OS to share a bond0.xxx VLAN interface with KVM virtual machines for guest VM networking.
ifcfg-br100
SLAVE="bond0.100""
ifcfg-em2
DEVICE="em2"
MASTER="bond0" SLAVE="yes" NM_CONTROLLED="no" ONBOOT="yes" TYPE="Ethernet" BOOTPROTO="none" IPV6INIT="no" HWADDR=00:00:00:00:00:00"
ifcfg-bond0.100
DEVICE="bond0.100"
ONBOOT="yes"
VLAN="yes"
TYPE="Ethernet"
BOOTPROTO="static"
BRIDGE="br100"
ifcfg-bond0.101
DEVICE="bond0.101"
ONBOOT="yes"
VLAN="yes"
TYPE="Ethernet"
BOOTPROTO="static"
BRIDGE="br101"
GATEWAY="10.10.100.1"
IPV6INIT="no"
MTU="1500"
ifcfg-br101
DEVICE="br101"
ONBOOT="yes"
VLAN="yes" TYPE="Bridge" SLAVE="bond0.101" IPADDR="10.10.101.99" NETMASK="255.255.255.0" DNS1=8.8.8.8" GATEWAY="10.10.101.1" IPV6INIT="no" MTU="1500"
Thanks!
@jbpadgett http://Padgeblog.com