
Networking

Todo

  • document hardware-specific commands (maybe in admin guide?) (todd)
  • document a map between flags and managers/backends (todd)

The nova.network.manager Module

Network hosts are responsible for allocating IPs and setting up networks.

There are multiple backend drivers that handle specific types of networking topologies. All of the network commands are issued to a subclass of NetworkManager.

Related Flags

network_driver: Driver to use for network creation
flat_network_bridge: Bridge device for simple network instances
flat_interface: FlatDhcp will bridge into this interface if set
flat_network_dns: DNS server for simple networks
flat_network_dhcp_start: DHCP start address for FlatDhcp
vlan_start: First VLAN for private networks
vpn_ip: Public IP for the cloudpipe VPN servers
vpn_start: First VPN port for private networks
cnt_vpn_clients: Number of addresses reserved for VPN clients
network_size: Number of addresses in each private subnet
floating_range: Floating IP address block
fixed_range: Fixed IP address block
update_dhcp_on_disassociate: Whether to update DHCP when a fixed IP is disassociated
fixed_ip_disassociate_timeout: Seconds after which a deallocated IP is disassociated
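For reference, a flagfile might combine several of these; the values below are illustrative examples, not defaults:

```
# illustrative flagfile entries, not defaults
--network_manager=nova.network.manager.VlanManager
--fixed_range=10.0.0.0/8
--network_size=256
--vlan_start=100
--floating_range=172.16.0.0/24
```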
exception nova.network.manager.AddressAlreadyAllocated(message=None)

Bases: nova.exception.Error

Address was already allocated.

class nova.network.manager.FlatDHCPManager(network_driver=None, *args, **kwargs)

Bases: nova.network.manager.NetworkManager

Flat networking with dhcp.

FlatDHCPManager will start up one DHCP server to give out addresses. It never injects network settings into the guest. Otherwise it behaves like FlatManager.

FlatDHCPManager.allocate_fixed_ip(context, instance_id, *args, **kwargs)

Sets up DHCP for this network.

FlatDHCPManager.deallocate_fixed_ip(context, address, *args, **kwargs)

Returns a fixed ip to the pool.

FlatDHCPManager.init_host()

Do any initialization that needs to be run if this is a standalone service.

FlatDHCPManager.setup_compute_network(context, instance_id)

Sets up matching network for compute hosts.

class nova.network.manager.FlatManager(network_driver=None, *args, **kwargs)

Bases: nova.network.manager.NetworkManager

Basic network where no vlans are used.

FlatManager does not do any bridge or vlan creation. The user is responsible for setting up whatever bridge is specified in flat_network_bridge (br100 by default). This bridge needs to be created on all compute hosts.
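Creating that bridge on a compute host might look like the following sketch (run as root; bridge-utils assumed, and the name matches the flat_network_bridge default):

```
brctl addbr br100
ip link set dev br100 up
```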

The idea is to create a single network for the host with a command like: nova-manage network create 192.168.0.0/24 1 256. Creating multiple networks for one manager is currently not supported, but could be added by modifying allocate_fixed_ip and get_network to look up a network with new logic instead of network_get_by_bridge. Arbitrary lists of addresses in a single network can be accomplished with manual db editing.

If flat_injected is True, the compute host will attempt to inject network config into the guest. It attempts to modify /etc/network/interfaces and currently only works on Debian-based systems. To support a wider range of OSes, some other method may need to be devised to let the guest know which IP it should be using so that it can configure itself, perhaps an attached disk or serial device with configuration info.

Metadata forwarding must be handled by the gateway, and since nova does not do any setup in this mode, it must be done manually. Requests to 169.254.169.254 port 80 will need to be forwarded to the api server.
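On the gateway, that forwarding is typically a single DNAT rule along these lines (the API host and port here are placeholders you must fill in for your deployment):

```
iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp --dport 80 \
  -j DNAT --to-destination $API_HOST:$API_PORT
```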

FlatManager.allocate_floating_ip(context, project_id)
FlatManager.associate_floating_ip(context, floating_address, fixed_address)
FlatManager.deallocate_floating_ip(context, floating_address)
FlatManager.disassociate_floating_ip(context, floating_address)
FlatManager.init_host()

Do any initialization that needs to be run if this is a standalone service.

FlatManager.setup_compute_network(context, instance_id)

Network is created manually.

class nova.network.manager.NetworkManager(network_driver=None, *args, **kwargs)

Bases: nova.manager.SchedulerDependentManager

Implements common network manager functionality.

This class must be subclassed to support specific topologies.

NetworkManager.allocate_fixed_ip(context, instance_id, *args, **kwargs)

Gets a fixed ip from the pool.

NetworkManager.allocate_floating_ip(context, project_id)

Gets a floating IP from the pool.

NetworkManager.associate_floating_ip(context, floating_address, fixed_address)

Associates a floating IP with a fixed IP.

NetworkManager.create_networks(context, cidr, num_networks, network_size, cidr_v6, label, *args, **kwargs)

Create networks based on parameters.

NetworkManager.deallocate_fixed_ip(context, address, *args, **kwargs)

Returns a fixed ip to the pool.

NetworkManager.deallocate_floating_ip(context, floating_address)

Returns a floating IP to the pool.

NetworkManager.disassociate_floating_ip(context, floating_address)

Disassociates a floating ip.

NetworkManager.get_network_host(context)

Get the network host for the current context.

NetworkManager.init_host()

Do any initialization that needs to be run if this is a standalone service.

NetworkManager.lease_fixed_ip(context, mac, address)

Called by dhcp-bridge when ip is leased.

NetworkManager.periodic_tasks(context=None)

Tasks to be run at a periodic interval.

NetworkManager.release_fixed_ip(context, mac, address)

Called by dhcp-bridge when ip is released.

NetworkManager.set_network_host(context, network_id)

Safely sets the host of the network.

NetworkManager.setup_compute_network(context, instance_id)

Sets up matching network for compute hosts.

NetworkManager.setup_fixed_ip(context, address)

Sets up rules for fixed ip.

class nova.network.manager.VlanManager(network_driver=None, *args, **kwargs)

Bases: nova.network.manager.NetworkManager

Vlan network with dhcp.

VlanManager is the most complicated. It will create a host-managed vlan for each project. Each project gets its own subnet. The networks and associated subnets are created with nova-manage using a command like: nova-manage network create 10.0.0.0/8 3 16. This will create 3 networks of 16 addresses from the beginning of the 10.0.0.0 range.
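The subnet arithmetic behind that command can be sketched with the stdlib ipaddress module (carve_networks is a hypothetical helper written for illustration, not part of Nova):

```python
import ipaddress

def carve_networks(cidr, num_networks, network_size):
    """Carve num_networks subnets of network_size addresses each from
    the start of cidr, mirroring the arithmetic behind
    `nova-manage network create`."""
    base = ipaddress.ip_network(cidr)
    # 16 addresses -> a /28 prefix (network_size assumed a power of two)
    prefix = 32 - (network_size - 1).bit_length()
    start = int(base.network_address)
    return [str(ipaddress.ip_network((start + i * network_size, prefix)))
            for i in range(num_networks)]

# nova-manage network create 10.0.0.0/8 3 16
print(carve_networks("10.0.0.0/8", 3, 16))
# -> ['10.0.0.0/28', '10.0.0.16/28', '10.0.0.32/28']
```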

A dhcp server is run for each subnet, so each project will have its own. For this mode to be useful, each project will need a vpn to access the instances in its subnet.

VlanManager.allocate_fixed_ip(context, instance_id, *args, **kwargs)

Gets a fixed ip from the pool.

VlanManager.create_networks(context, cidr, num_networks, network_size, cidr_v6, vlan_start, vpn_start, **kwargs)

Create networks based on parameters.

VlanManager.deallocate_fixed_ip(context, address, *args, **kwargs)

Returns a fixed ip to the pool.

VlanManager.get_network_host(context)

Get the network host for the current context.

VlanManager.init_host()

Do any initialization that needs to be run if this is a standalone service.

VlanManager.setup_compute_network(context, instance_id)

Sets up matching network for compute hosts.

The nova.network.linux_net Driver

Implements vlans, bridges, and iptables rules using linux utilities.

class nova.network.linux_net.IptablesManager(execute=None)

Bases: object

Wrapper for iptables

See IptablesTable for some usage docs

A number of chains are set up to begin with.

First, nova-filter-top. It’s added at the top of FORWARD and OUTPUT. Its name is not wrapped, so it’s shared between the various nova workers. It’s intended for rules that need to live at the top of the FORWARD and OUTPUT chains. It’s in both the ipv4 and ipv6 set of tables.

For ipv4 and ipv6, the builtin INPUT, OUTPUT, and FORWARD filter chains are wrapped, meaning that the “real” INPUT chain has a rule that jumps to the wrapped INPUT chain, etc. Additionally, there’s a wrapped chain named “local” which is jumped to from nova-filter-top.

For ipv4, the builtin PREROUTING, OUTPUT, and POSTROUTING nat chains are wrapped in the same way as the builtin filter chains. Additionally, there’s a snat chain that is applied after the POSTROUTING chain.

IptablesManager.apply(*args, **kwargs)

Apply the current in-memory set of iptables rules

This will blow away any rules left over from previous runs of the same component of Nova, and replace them with our current set of rules. This happens atomically, thanks to iptables-restore.
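The apply step can be pictured as assembling one text blob per table and handing it to iptables-restore in a single call, which commits the whole table atomically. The helper below is a simplified sketch (the real manager merges its chains into the live output of iptables-save rather than rendering from scratch):

```python
def render_restore_input(table, chains, rules):
    """Render chain declarations and rules in iptables-restore format.
    iptables-restore replaces the whole table atomically on COMMIT."""
    lines = ["*%s" % table]                       # table header, e.g. *filter
    lines += [":%s - [0:0]" % c for c in chains]  # declare chains with zeroed counters
    lines += rules
    lines.append("COMMIT")
    return "\n".join(lines) + "\n"

# `text` is what would be piped to `iptables-restore`
text = render_restore_input(
    "filter",
    ["nova-filter-top", "nova-compute-local"],
    ["-A FORWARD -j nova-filter-top",
     "-A nova-filter-top -j nova-compute-local"])
```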

class nova.network.linux_net.IptablesRule(chain, rule, wrap=True, top=False)

Bases: object

An iptables rule

You shouldn’t need to use this class directly, it’s only used by IptablesManager

class nova.network.linux_net.IptablesTable

Bases: object

An iptables table

IptablesTable.add_chain(name, wrap=True)

Adds a named chain to the table

The chain name is wrapped to be unique for the component creating it, so different components of Nova can safely create identically named chains without interfering with one another.

At the moment, its wrapped name is <binary name>-<chain name>, so if nova-compute creates a chain named “OUTPUT”, it’ll actually end up named “nova-compute-OUTPUT”.

IptablesTable.add_rule(chain, rule, wrap=True, top=False)

Add a rule to the table

This is just like what you’d feed to iptables, just without the “-A <chain name>” bit at the start.

However, if you need to jump to one of your wrapped chains, prepend its name with a ‘$’ which will ensure the wrapping is applied correctly.
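Chain wrapping and the ‘$’ substitution together can be sketched as follows (the binary name is hard-coded for illustration; Nova derives it from the running executable, and these helper names are hypothetical):

```python
def wrap_name(chain, binary="nova-compute"):
    """Prefix a chain name with the binary name, as IptablesTable does,
    so identically named chains from different workers don't collide."""
    return "%s-%s" % (binary, chain)

def expand_rule(rule):
    """Rewrite '$chain' tokens to their wrapped names before the rule
    is handed to iptables."""
    return " ".join(wrap_name(tok[1:]) if tok.startswith("$") else tok
                    for tok in rule.split())

print(wrap_name("OUTPUT"))                     # nova-compute-OUTPUT
print(expand_rule("-d 10.0.0.1/32 -j $sshd"))  # -d 10.0.0.1/32 -j nova-compute-sshd
```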

IptablesTable.remove_chain(name, wrap=True)

Remove named chain

This removal “cascades”. All rules in the chain are removed, as are all rules in other chains that jump to it.

If the chain is not found, this is merely logged.

IptablesTable.remove_rule(chain, rule, wrap=True, top=False)

Remove a rule from a chain

Note: The rule must be exactly identical to the one that was added. You cannot switch arguments around like you can with the iptables CLI tool.

nova.network.linux_net.bind_floating_ip(floating_ip, check_exit_code=True)

Bind ip to public interface

nova.network.linux_net.ensure_bridge(*args, **kwargs)

Create a bridge unless it already exists.

Parameters:
  • interface – the interface to create the bridge on.
  • net_attrs – dictionary with attributes used to create the bridge.

If net_attrs is set, it will add the net_attrs[‘gateway’] to the bridge using net_attrs[‘broadcast’] and net_attrs[‘cidr’]. It will also add the ip_v6 address specified in net_attrs[‘cidr_v6’] if use_ipv6 is set.

The code will attempt to move any ips that already exist on the interface onto the bridge and reset the default gateway if necessary.

nova.network.linux_net.ensure_floating_forward(floating_ip, fixed_ip)

Ensure floating ip forwarding rule

nova.network.linux_net.ensure_metadata_ip()

Sets up local metadata ip

nova.network.linux_net.ensure_vlan(vlan_num)

Create a vlan unless it already exists

nova.network.linux_net.ensure_vlan_bridge(vlan_num, bridge, net_attrs=None)

Create a vlan and bridge unless they already exist

nova.network.linux_net.ensure_vlan_forward(public_ip, port, private_ip)

Sets up forwarding rules for vlan

nova.network.linux_net.floating_forward_rules(floating_ip, fixed_ip)
nova.network.linux_net.get_dhcp_hosts(context, network_id)

Get a string containing a network’s hosts config in dhcp-host format

nova.network.linux_net.get_dhcp_leases(context, network_id)

Return a network’s hosts config in dnsmasq leasefile format

nova.network.linux_net.init_host()

Basic networking setup goes here

nova.network.linux_net.metadata_forward()

Create forwarding rule for metadata

nova.network.linux_net.remove_floating_forward(floating_ip, fixed_ip)

Remove forwarding for floating ip

nova.network.linux_net.unbind_floating_ip(floating_ip)

Unbind a public ip from public interface

nova.network.linux_net.update_dhcp(*args, **kwargs)

(Re)starts a dnsmasq server for a given network

If a dnsmasq instance is already running, a HUP signal is sent so that it reloads its configuration; otherwise a new instance is spawned.
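The reload-or-spawn decision can be sketched with the process checks passed in as callables, since the real driver shells out to read dnsmasq’s pid file (all names below are hypothetical):

```python
def reload_or_spawn(get_pid, is_running, send_hup, spawn):
    """Send HUP to a live dnsmasq so it rereads its config, or start a
    new instance if none is running."""
    pid = get_pid()
    if pid is not None and is_running(pid):
        send_hup(pid)   # live instance: HUP triggers a config reload
        return "reloaded"
    spawn()             # no live instance: start a fresh dnsmasq
    return "spawned"

# With a live pid we only HUP, never double-start:
assert reload_or_spawn(lambda: 4242, lambda pid: True,
                       lambda pid: None, lambda: None) == "reloaded"
```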

nova.network.linux_net.update_ra(*args, **kwargs)

Tests

The network_unittest Module

Legacy docs

The nova networking components manage private networks, public IP addressing, VPN connectivity, and firewall rules.

Components

There are several key components:

  • NetworkController (Manages address and vlan allocation)
  • RoutingNode (NATs public IPs to private IPs, and enforces firewall rules)
  • AddressingNode (runs DHCP services for private networks)
  • BridgingNode (a subclass of the basic nova ComputeNode)
  • TunnelingNode (provides VPN connectivity)

Component Diagram

Overview:

                               (PUBLIC INTERNET)
                                |              \
                               / \             / \
                 [RoutingNode] ... [RN]    [TunnelingNode] ... [TN]
                       |             \    /       |              |
                       |            < AMQP >      |              |
[AddressingNode]--  (VLAN) ...         |        (VLAN)...    (VLAN)      --- [AddressingNode]
                       \               |           \           /
                      / \             / \         / \         / \
                       [BridgingNode] ...          [BridgingNode]


                 [NetworkController]   ...    [NetworkController]
                                   \          /
                                     < AMQP >
                                        |
                                       / \
                      [CloudController]...[CloudController]

While this diagram may not make this entirely clear, nodes and controllers communicate exclusively across the message bus (AMQP, currently).

State Model

Network State consists of the following facts:

  • VLAN assignment (to a project)
  • Private Subnet assignment (to a security group) in a VLAN
  • Private IP assignments (to running instances)
  • Public IP allocations (to a project)
  • Public IP associations (to a private IP / running instance)

While copies of this state exist in many places (expressed in IPTables rule chains, DHCP hosts files, etc), the controllers rely only on the distributed “fact engine” for state, queried over RPC (currently AMQP). The NetworkController inserts most records into this datastore (allocating addresses, etc) - however, individual nodes update state e.g. when running instances crash.

The Public Traffic Path

Public Traffic:

               (PUBLIC INTERNET)
                      |
                    <NAT>  <-- [RoutingNode]
                      |
[AddressingNode] -->  |
                   ( VLAN )
                      |    <-- [BridgingNode]
                      |
               <RUNNING INSTANCE>

The RoutingNode is currently implemented using IPTables rules, which implement both NAT for public IP addresses and the appropriate firewall chains. We are also looking at using Netomata / Clusto to manage NAT within a switch or router, and/or to manage firewall rules within a hardware firewall appliance.

Similarly, the AddressingNode currently manages running dnsmasq instances for DHCP services. However, we could run an internal DHCP server (using Scapy à la Clusto), or even switch to static addressing by inserting the private address into the disk image the same way we insert the SSH keys. (See compute for more details.)