Network hosts are responsible for allocating IPs and setting up the network.
There are multiple backend drivers that handle specific types of networking topologies. All of the network commands are issued to a subclass of NetworkManager.
Related Flags

| Flag | Description |
|---|---|
| `network_driver` | Driver to use for network creation |
| `flat_network_bridge` | Bridge device for simple network instances |
| `flat_interface` | FlatDhcp will bridge into this interface if set |
| `flat_network_dns` | DNS for simple network |
| `flat_network_dhcp_start` | DHCP start address for FlatDhcp |
| `vlan_start` | First VLAN for private networks |
| `vpn_ip` | Public IP for the cloudpipe VPN servers |
| `vpn_start` | First VPN port for private networks |
| `cnt_vpn_clients` | Number of addresses reserved for VPN clients |
| `network_size` | Number of addresses in each private subnet |
| `floating_range` | Floating IP address block |
| `fixed_range` | Fixed IP address block |
| `update_dhcp_on_disassociate` | Whether to update DHCP when a fixed IP is disassociated |
| `fixed_ip_disassociate_timeout` | Seconds after which a deallocated IP is disassociated |
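For example, a flagfile might combine several of these. The excerpt below is hypothetical: the flag names come from the table above, while the values, and the use of `network_manager` to select one of the managers documented below, are illustrative assumptions.

```
# Hypothetical nova flagfile excerpt -- values are examples only.
--network_manager=nova.network.manager.FlatDHCPManager
--flat_network_bridge=br100
--fixed_range=10.0.0.0/8
--network_size=256
--floating_range=192.168.100.0/24
```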
class nova.network.manager.AddressAlreadyAllocated
Bases: nova.exception.Error
Address was already allocated.
class nova.network.manager.FlatDHCPManager
Bases: nova.network.manager.NetworkManager
Flat networking with DHCP.
FlatDHCPManager will start up one DHCP server to give out addresses. It never injects network settings into the guest. Otherwise it behaves like FlatManager.
Methods:
- Sets up DHCP for this network.
- Returns a fixed IP to the pool.
- Does any initialization that needs to be run if this is a standalone service.
- Sets up the matching network for compute hosts.
class nova.network.manager.FlatManager
Bases: nova.network.manager.NetworkManager
Basic network where no VLANs are used.
FlatManager does not do any bridge or VLAN creation. The user is responsible for setting up whatever bridge is specified in flat_network_bridge (br100 by default). This bridge needs to be created on all compute hosts.
The idea is to create a single network for the host with a command like: `nova-manage network create 192.168.0.0/24 1 256`. Creating multiple networks for one manager is currently not supported, but could be added by modifying allocate_fixed_ip and get_network to get a network with new logic instead of network_get_by_bridge. Arbitrary lists of addresses in a single network can be accomplished with manual db editing.
If flat_injected is True, the compute host will attempt to inject network config into the guest. It attempts to modify /etc/network/interfaces and currently only works on Debian-based systems. To support a wider range of OSes, some other method may need to be devised to let the guest know which IP it should be using so that it can configure itself, perhaps an attached disk or serial device with configuration info.
Metadata forwarding must be handled by the gateway, and since nova does not do any setup in this mode, it must be done manually. Requests to 169.254.169.254 port 80 will need to be forwarded to the API server.
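On the gateway, that forwarding can be done with a DNAT rule along these lines. This is a sketch only; the API server address and port (10.0.0.1:8773) are assumptions for illustration.

```
# Forward instance metadata requests to the API server.
# 10.0.0.1:8773 is an assumed address/port -- substitute your own.
iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp --dport 80 \
    -j DNAT --to-destination 10.0.0.1:8773
```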
Methods:
- Does any initialization that needs to be run if this is a standalone service.
- Network is created manually.
class nova.network.manager.NetworkManager
Bases: nova.manager.SchedulerDependentManager
Implements common network manager functionality.
This class must be subclassed to support specific topologies (see the sketch after the method list below).
Methods:
- Gets a fixed IP from the pool.
- Gets a floating IP from the pool.
- Associates a floating IP with a fixed IP.
- Creates networks based on parameters.
- Returns a fixed IP to the pool.
- Returns a floating IP to the pool.
- Disassociates a floating IP.
- Gets the network host for the current context.
- Does any initialization that needs to be run if this is a standalone service.
- Called by dhcp-bridge when an IP is leased.
- Runs tasks at a periodic interval.
- Called by dhcp-bridge when an IP is released.
- Safely sets the host of the network.
- Sets up the matching network for compute hosts.
- Sets up rules for a fixed IP.
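A minimal sketch of what subclassing for a custom topology might look like. The overridden method names here are assumptions based on the summaries above, not confirmed against the Nova source:

```python
# Hypothetical custom topology manager -- a sketch, not a working driver.
from nova.network import manager


class ExampleManager(manager.NetworkManager):
    """Hypothetical manager for a custom topology."""

    def create_networks(self, context, *args, **kwargs):
        # Insert network records for the custom topology into the datastore.
        raise NotImplementedError()

    def setup_compute_network(self, context, instance_id):
        # Create whatever bridges/VLANs the topology needs on compute hosts.
        raise NotImplementedError()
```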
class nova.network.manager.VlanManager
Bases: nova.network.manager.NetworkManager
VLAN network with DHCP.
VlanManager is the most complicated manager. It will create a host-managed VLAN for each project, and each project gets its own subnet. The networks and associated subnets are created with nova-manage using a command like: `nova-manage network create 10.0.0.0/8 3 16`. This will create 3 networks of 16 addresses from the beginning of the 10.0.0.0 range.
A DHCP server is run for each subnet, so each project will have its own. For this mode to be useful, each project will need a VPN to access the instances in its subnet.
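To make the address arithmetic concrete, the illustrative snippet below shows the subnets that the command above corresponds to (Nova performs this allocation itself when creating networks):

```python
# The first 3 subnets of 16 addresses (/28) in 10.0.0.0/8, mirroring
# "nova-manage network create 10.0.0.0/8 3 16".
import ipaddress
from itertools import islice

base = ipaddress.ip_network('10.0.0.0/8')
for net in islice(base.subnets(new_prefix=28), 3):
    print(net)
# 10.0.0.0/28
# 10.0.0.16/28
# 10.0.0.32/28
```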
Methods:
- Gets a fixed IP from the pool.
- Creates networks based on parameters.
- Returns a fixed IP to the pool.
- Gets the network for the current context.
- Does any initialization that needs to be run if this is a standalone service.
- Sets up the matching network for compute hosts.
The nova.network.linux_net module implements VLANs, bridges, and iptables rules using Linux utilities.
class nova.network.linux_net.IptablesManager
Bases: object
Wrapper for iptables.
See IptablesTable for some usage docs.
A number of chains are set up to begin with.
First, nova-filter-top. It’s added at the top of FORWARD and OUTPUT. Its name is not wrapped, so it’s shared between the various nova workers. It’s intended for rules that need to live at the top of the FORWARD and OUTPUT chains. It’s in both the ipv4 and ipv6 set of tables.
For ipv4 and ipv6, the builtin INPUT, OUTPUT, and FORWARD filter chains are wrapped, meaning that the “real” INPUT chain has a rule that jumps to the wrapped INPUT chain, etc. Additionally, there’s a wrapped chain named “local” which is jumped to from nova-filter-top.
For ipv4, the builtin PREROUTING, OUTPUT, and POSTROUTING nat chains are wrapped in the same way as the builtin filter chains. Additionally, there’s a snat chain that is applied after the POSTROUTING chain.
Applying the current in-memory set of iptables rules will blow away any rules left over from previous runs of the same component of Nova, and replace them with the current set. This happens atomically, thanks to iptables-restore.
class nova.network.linux_net.IptablesRule
Bases: object
An iptables rule.
You shouldn’t need to use this class directly; it’s only used by IptablesManager.
class nova.network.linux_net.IptablesTable
Bases: object
An iptables table.
Adding a chain: adds a named chain to the table. The chain name is wrapped to be unique for the component creating it, so different components of Nova can safely create identically named chains without interfering with one another. At the moment, the wrapped name is <binary name>-<chain name>, so if nova-compute creates a chain named “OUTPUT”, it’ll actually end up named “nova-compute-OUTPUT”.
Adding a rule: adds a rule to the table. This is just like what you’d feed to iptables, just without the “-A <chain name>” bit at the start. However, if you need to jump to one of your wrapped chains, prepend its name with a ‘$’, which will ensure the wrapping is applied correctly.
Removing a chain: removes a named chain. This removal “cascades”: all rules in the chain are removed, as are all rules in other chains that jump to it. If the chain is not found, this is merely logged.
Removing a rule: removes a rule from a chain. Note: the rule must be exactly identical to the one that was added. You cannot switch arguments around like you can with the iptables CLI tool.
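Putting these pieces together, usage looks roughly like the sketch below. It rests on assumptions: the `ipv4['filter']` attribute and the exact `add_chain`/`add_rule`/`apply` call forms are inferred from the descriptions above, not confirmed API.

```python
# Hypothetical IptablesManager usage; names are inferred, not confirmed.
from nova.network import linux_net

manager = linux_net.IptablesManager()
table = manager.ipv4['filter']  # assumed attribute for the ipv4 filter table

# Wrapped to something like "nova-network-services", so identically named
# chains from other Nova workers are not disturbed.
table.add_chain('services')

# '$services' expands to the wrapped chain name when jumping to it.
table.add_rule('local', '-p tcp --dport 80 -j $services')
table.add_rule('services', '-j ACCEPT')

# Atomically install the in-memory ruleset via iptables-restore.
manager.apply()
```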
The module also provides helper functions; their behavior is summarized below (see the usage sketch after this list):
- Binds an IP to the public interface.
- Creates a bridge unless it already exists. If net_attrs is set, it will add the net_attrs['gateway'] to the bridge using net_attrs['broadcast'] and net_attrs['cidr'], and will also add the IPv6 address specified in net_attrs['cidr_v6'] if use_ipv6 is set. The code will attempt to move any IPs that already exist on the interface onto the bridge and reset the default gateway if necessary.
- Ensures the floating IP forwarding rule.
- Sets up the local metadata IP.
- Creates a VLAN unless it already exists.
- Creates a VLAN and bridge unless they already exist.
- Sets up forwarding rules for a VLAN.
- Gets a string containing a network's hosts config in dhcp-host format.
- Returns a network's hosts config in dnsmasq leasefile format.
- Performs basic networking setup.
- Creates the forwarding rule for metadata.
- Removes forwarding for a floating IP.
- Unbinds a public IP from the public interface.
- (Re)starts a dnsmasq server for a given network: if a dnsmasq instance is already running, it is sent a HUP signal causing it to reload; otherwise a new instance is spawned.
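As referenced above, a rough usage sketch for two of these helpers. The function names `ensure_bridge` and `update_dhcp` and their signatures are assumptions inferred from the descriptions, not confirmed API:

```python
# Hypothetical calls into nova.network.linux_net; names and signatures
# are assumptions for illustration.
from nova.network import linux_net

# Create br100 if needed, enslave eth0, and move any of eth0's existing
# IPs (and the default gateway) onto the bridge.
linux_net.ensure_bridge('br100', 'eth0')

# (Re)start dnsmasq for a network: HUP a running instance so it reloads,
# otherwise spawn a new one. ctxt and network_id come from the caller.
linux_net.update_dhcp(ctxt, network_id)
```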
The nova networking components manage private networks, public IP addressing, VPN connectivity, and firewall rules.
There are several key components, shown in the overview diagram below: RoutingNodes, TunnelingNodes, AddressingNodes, BridgingNodes, NetworkControllers, and CloudControllers.
Overview:
```
(PUBLIC INTERNET)
 |  \
 / \  / \
[RoutingNode] ... [RN]  [TunnelingNode] ... [TN]
 |  \    /    |    |
 |   < AMQP > |    |
[AddressingNode]-- (VLAN) ... | (VLAN)... (VLAN) --- [AddressingNode]
 \  |  \  /
 / \ / \ / \ / \
[BridgingNode] ... [BridgingNode]
[NetworkController] ... [NetworkController]
 \  /
 < AMQP >
 |
 / \
[CloudController]...[CloudController]
```
While this diagram may not make this entirely clear, nodes and controllers communicate exclusively across the message bus (AMQP, currently).
Network state consists of facts such as address allocations and their associated mappings. While copies of this state exist in many places (expressed in iptables rule chains, DHCP hosts files, etc.), the controllers rely only on the distributed “fact engine” for state, queried over RPC (currently AMQP). The NetworkController inserts most records into this datastore (allocating addresses, etc.); however, individual nodes also update state, e.g. when running instances crash.
Public Traffic:
```
(PUBLIC INTERNET)
 |
 <NAT>  <-- [RoutingNode]
 |
[AddressingNode] --> |
 ( VLAN )
 |  <-- [BridgingNode]
 |
<RUNNING INSTANCE>
```
The RoutingNode is currently implemented using iptables rules, which implement both NATting of public IP addresses and the appropriate firewall chains. We are also looking at using Netomata / Clusto to manage NATting within a switch or router, and/or to manage firewall rules within a hardware firewall appliance.
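For instance, mapping a public address onto a private one involves NAT rules of roughly this shape. This is a sketch of the kind of rules meant, with example addresses; it is not the exact ruleset Nova emits:

```
# Sketch: NAT a public (floating) IP to a private (fixed) IP.
# 192.168.100.5 and 10.0.0.3 are example addresses.
iptables -t nat -A PREROUTING  -d 192.168.100.5 -j DNAT --to 10.0.0.3
iptables -t nat -A OUTPUT      -d 192.168.100.5 -j DNAT --to 10.0.0.3
iptables -t nat -A POSTROUTING -s 10.0.0.3 -j SNAT --to 192.168.100.5
```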
Similarly, the AddressingNode currently manages running dnsmasq instances for DHCP services. However, we could run an internal DHCP server (using Scapy, à la Clusto), or even switch to static addressing by inserting the private address into the disk image the same way we insert the SSH keys. (See compute for more details.)