Network hosts are responsible for allocating IPs and setting up the network. There are multiple backend drivers that handle specific types of networking topologies. All network commands are issued to a subclass of NetworkManager.
Related Flags
network_driver: Driver to use for network creation
flat_network_bridge: Bridge device for simple network instances
flat_interface: FlatDHCP will bridge into this interface if set
flat_network_dns: DNS for simple network
vlan_start: First VLAN for private networks
vpn_ip: Public IP for the cloudpipe VPN servers
vpn_start: First VPN port for private networks
cnt_vpn_clients: Number of addresses reserved for VPN clients
network_size: Number of addresses in each private subnet
floating_range: Floating IP address block
fixed_range: Fixed IP address block
fixed_ip_disassociate_timeout: Seconds after which a deallocated IP is disassociated
create_unique_mac_address_attempts: Number of times to attempt creating a unique MAC address
Bases: nova.exception.NovaException
Address was already allocated.
Bases: nova.network.manager.RPCAllocateFixedIP, nova.network.manager.FloatingIP, nova.network.manager.NetworkManager
Flat networking with DHCP.

FlatDHCPManager will start up one DHCP server to give out addresses. It never injects network settings into the guest. It also manages bridges. Otherwise it behaves like FlatManager.
Do any initialization that needs to be run if this is a standalone service.
Bases: nova.network.manager.NetworkManager
Basic network where no vlans are used.
FlatManager does not do any bridge or vlan creation. The user is responsible for setting up whatever bridges are specified when creating networks through nova-manage. This bridge needs to be created on all compute hosts.
The idea is to create a single network for the host with a command like: nova-manage network create 192.168.0.0/24 1 256. Creating multiple networks for one manager is currently not supported, but could be added by modifying allocate_fixed_ip and get_network to get a network with new logic instead of network_get_by_bridge. Arbitrary lists of addresses in a single network can be accomplished with manual db editing.
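The arithmetic behind that example command can be sketched in plain Python (this is an illustration of the subnet math, not Nova code; `carve_networks` is a hypothetical helper):

```python
# Sketch: how "nova-manage network create 192.168.0.0/24 1 256" maps a
# fixed range plus a network count and size onto concrete subnets.
import ipaddress
import math

def carve_networks(fixed_range, num_networks, network_size):
    """Split fixed_range into num_networks subnets of network_size addresses."""
    base = ipaddress.ip_network(fixed_range)
    # A subnet holding network_size addresses needs 32 - log2(size) prefix bits.
    prefix = 32 - int(math.log2(network_size))
    return list(base.subnets(new_prefix=prefix))[:num_networks]

print(carve_networks("192.168.0.0/24", 1, 256))
# [IPv4Network('192.168.0.0/24')]
```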
If flat_injected is True, the compute host will attempt to inject network config into the guest. It attempts to modify /etc/network/interfaces and currently only works on debian based systems. To support a wider range of OSes, some other method may need to be devised to let the guest know which ip it should be using so that it can configure itself. Perhaps an attached disk or serial device with configuration info.
Metadata forwarding must be handled by the gateway, and since nova does not do any setup in this mode, it must be done manually. Requests to 169.254.169.254 port 80 will need to be forwarded to the api server.
Returns a fixed ip to the pool.
Returns a floating IP as a dict
Returns a floating IP as a dict
Returns the floating IPs associated with a fixed_address
Returns the floating IPs allocated to a project
Returns list of floating pools
Bases: object
Mixin class for adding floating IP functionality to a manager.
Gets a floating ip from the pool.
Handles allocating the floating IP resources for an instance.
calls super class allocate_for_instance() as well
rpc.called by network_api
Associates a floating ip with a fixed ip.
Makes sure everything makes sense, then calls _associate_floating_ip, RPC'ing to the correct host if this host is not it.
Returns a floating IP to the pool.
Handles deallocating floating IP resources for an instance.
calls super class deallocate_for_instance() as well.
rpc.called by network_api
Disassociates a floating ip from its fixed ip.
Makes sure everything makes sense, then calls _disassociate_floating_ip, RPC'ing to the correct host if this host is not it.
Returns a floating IP as a dict
Returns a floating IP as a dict
Returns the floating IPs associated with a fixed_address
Returns the floating IPs allocated to a project
Returns list of floating pools
Configures floating ips owned by host.
Bases: nova.manager.SchedulerDependentManager
Implements common network manager functionality.
This class must be subclassed to support specific topologies.
Adds a fixed ip to an instance from the specified network.
Gets a fixed ip from the pool.
Handles allocating the various network resources for an instance.
rpc.called by network_api
Builds a NetworkInfo object containing all network information for an instance.
Returns a fixed ip to the pool.
Handles deallocating various network resources for an instance.
rpc.called by network_api. kwargs can contain fixed_ips to circumvent another db lookup.
Broker the request to the driver to fetch the dhcp leases
Return a fixed ip
Returns the instance id a floating ip’s fixed ip is allocated to
Creates network info list for instance.
Called by allocate_for_instance and network_api; context needs to be elevated.

Returns: network info list [(network, info), (network, info), ...] where network is a dict containing pertinent data from a network db object and info is a dict containing pertinent networking data.
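A minimal sketch of the shape of that return value (the dict keys below are illustrative examples, not Nova's exact field names):

```python
# Illustrative sketch of the [(network, info), ...] structure: each entry
# pairs a dict of network db data with a dict of per-instance networking data.
def build_network_info(networks):
    """Pair each network record with its networking info dict."""
    network_info = []
    for net in networks:
        network = {"bridge": net["bridge"], "cidr": net["cidr"]}
        info = {"ips": net.get("ips", []), "gateway": net["gateway"]}
        network_info.append((network, info))
    return network_info

sample = [{"bridge": "br100", "cidr": "10.0.0.0/24",
           "gateway": "10.0.0.1", "ips": ["10.0.0.3"]}]
print(build_network_info(sample))
```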
Returns the vifs record for the mac_address
Returns the vifs associated with an instance
Do any initialization that needs to be run if this is a standalone service.
Called by dhcp-bridge when ip is leased.
Called by dhcp-bridge when ip is released.
Removes a fixed ip from an instance on the specified network.
Safely sets the host of the network.
calls setup/teardown on network hosts associated with an instance
Checks that the networks exist and a host is set for each network.
Bases: object
Mixin class originally for FlatDHCP and VLAN network managers.

Used since they share code to rpc.call allocate_fixed_ip on the correct network host to configure dnsmasq.

Calls the superclass deallocate_fixed_ip if this is the correct host; otherwise makes an RPC call to the correct host.
Bases: nova.network.manager.RPCAllocateFixedIP, nova.network.manager.FloatingIP, nova.network.manager.NetworkManager
Vlan network with dhcp.
VlanManager is the most complicated. It will create a host-managed vlan for each project. Each project gets its own subnet. The networks and associated subnets are created with nova-manage using a command like: nova-manage network create 10.0.0.0/8 3 16. This will create 3 networks of 16 addresses from the beginning of the 10.0.0.0 range.
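The subnet math behind that example command can be sketched with the standard library (an illustration only, not Nova code):

```python
# "nova-manage network create 10.0.0.0/8 3 16" carves 3 networks of 16
# addresses from the start of 10.0.0.0/8; a /28 holds 16 addresses.
import itertools
import ipaddress

base = ipaddress.ip_network("10.0.0.0/8")
subnets = list(itertools.islice(base.subnets(new_prefix=28), 3))
print(subnets)
# [IPv4Network('10.0.0.0/28'), IPv4Network('10.0.0.16/28'), IPv4Network('10.0.0.32/28')]
```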
A dhcp server is run for each subnet, so each project will have its own. For this mode to be useful, each project will need a vpn to access the instances in its subnet.
Force adds another network to a project.
Gets a fixed ip from the pool.
Create networks based on parameters.
Do any initialization that needs to be run if this is a standalone service.
Check policy corresponding to the wrapped methods prior to execution
Implements vlans, bridges, and iptables rules using linux utilities.
Bases: object
Wrapper for iptables.
See IptablesTable for some usage docs
A number of chains are set up to begin with.
First, nova-filter-top. It’s added at the top of FORWARD and OUTPUT. Its name is not wrapped, so it’s shared between the various nova workers. It’s intended for rules that need to live at the top of the FORWARD and OUTPUT chains. It’s in both the ipv4 and ipv6 set of tables.
For ipv4 and ipv6, the built-in INPUT, OUTPUT, and FORWARD filter chains are wrapped, meaning that the “real” INPUT chain has a rule that jumps to the wrapped INPUT chain, etc. Additionally, there’s a wrapped chain named “local” which is jumped to from nova-filter-top.
For ipv4, the built-in PREROUTING, OUTPUT, and POSTROUTING nat chains are wrapped in the same way as the built-in filter chains. Additionally, there's a snat chain that is applied after the POSTROUTING chain.
Bases: object
An iptables rule.
You shouldn’t need to use this class directly, it’s only used by IptablesManager.
Bases: object
An iptables table.
Adds a named chain to the table.
The chain name is wrapped to be unique for the component creating it, so different components of Nova can safely create identically named chains without interfering with one another.
At the moment, its wrapped name is <binary name>-<chain name>, so if nova-compute creates a chain named ‘OUTPUT’, it’ll actually end up named ‘nova-compute-OUTPUT’.
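The wrapping scheme can be sketched in a few lines (`get_binary_name` and `wrap_chain_name` are hypothetical names for illustration; Nova's actual helpers may differ):

```python
# Sketch of the chain-name wrapping described above: <binary name>-<chain name>,
# so 'OUTPUT' created by nova-compute becomes 'nova-compute-OUTPUT'.
import os
import sys

def get_binary_name():
    """Name of the binary we're running in (e.g. 'nova-compute')."""
    return os.path.basename(sys.argv[0])

def wrap_chain_name(chain):
    return "%s-%s" % (get_binary_name(), chain)
```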
Add a rule to the table.
This is just like what you’d feed to iptables, just without the ‘-A <chain name>’ bit at the start.
However, if you need to jump to one of your wrapped chains, prepend its name with a ‘$’ which will ensure the wrapping is applied correctly.
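A sketch of that '$' substitution (the function name and default binary are illustrative, not Nova's exact implementation):

```python
# Sketch: '$chain' tokens in a rule are expanded to their wrapped names
# before the rule is handed to iptables.
def expand_wrapped_chains(rule, binary="nova-network"):
    words = []
    for word in rule.split():
        if word.startswith("$"):
            word = "%s-%s" % (binary, word[1:])
        words.append(word)
    return " ".join(words)

print(expand_wrapped_chains("-s 10.0.0.0/24 -j $snat"))
# -s 10.0.0.0/24 -j nova-network-snat
```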
Remove all rules from a chain.
Remove named chain.
This removal “cascades”. All rules in the chain are removed, as are all rules in other chains that jump to it.
If the chain is not found, this is merely logged.
Remove a rule from a chain.
Note: The rule must be exactly identical to the one that was added. You cannot switch arguments around like you can with the iptables CLI tool.
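The exact-match semantics can be sketched as follows (a simplified stand-in for the real implementation):

```python
# Sketch of exact-match rule removal: the rule string must be identical to
# the one stored, so a reordered but equivalent rule does not match.
def remove_rule(rules, rule):
    if rule in rules:
        rules.remove(rule)

rules = ["-s 10.0.0.0/24 -j ACCEPT"]
remove_rule(rules, "-j ACCEPT -s 10.0.0.0/24")  # no effect: not identical
remove_rule(rules, "-s 10.0.0.0/24 -j ACCEPT")  # removed
print(rules)
# []
```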
Bases: nova.network.linux_net.LinuxNetInterfaceDriver
Create a bridge unless it already exists.
If net_attrs is set, it will add the net_attrs['gateway'] to the bridge using net_attrs['broadcast'] and net_attrs['cidr']. It will also add the ip_v6 address specified in net_attrs['cidr_v6'] if use_ipv6 is set.
The code will attempt to move any ips that already exist on the interface onto the bridge and reset the default gateway if necessary.
Create a vlan unless it already exists.
Create a vlan and bridge unless they already exist.
Bases: object
Abstract class that defines generic network host API
Get device name
Create Linux device, return device name
Destroy Linux device, return device name
Bases: nova.network.linux_net.LinuxNetInterfaceDriver
Bases: nova.network.linux_net.LinuxNetInterfaceDriver
Bind ip to public interface.
Ensure floating ip forwarding rule.
Sets up local metadata ip.
Sets up forwarding rules for vlan.
Grab the name of the binary we’re running in.
Get network’s hosts config in dhcp-host format.
Return a network’s hosts config in dnsmasq leasefile format.
Get network’s hosts config in dhcp-opts format.
Basic networking setup goes here.
Create the filter accept rule for metadata.
Create forwarding rule for metadata.
Remove forwarding for floating ip.
(Re)starts a dnsmasq server for a given network.
If a dnsmasq instance is already running then send a HUP signal causing it to reload, otherwise spawn a new instance.
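The restart-or-reload decision can be sketched like this (the function name and the injectable `kill` parameter are illustrative; Nova's actual pidfile handling differs):

```python
# Sketch of the logic described above: HUP a running dnsmasq to make it
# reload its config, or spawn a fresh instance if none is running.
import os
import signal

def restart_dnsmasq(pid, spawn, kill=os.kill):
    """Send SIGHUP to a running dnsmasq, or spawn a new one."""
    if pid is not None:
        try:
            kill(pid, signal.SIGHUP)  # reload without dropping leases
            return "reloaded"
        except OSError:
            pass  # stale pid: process is gone, fall through to spawn
    spawn()
    return "spawned"
```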
Unbind a public ip from public interface.
The nova networking components manage private networks, public IP addressing, VPN connectivity, and firewall rules.
There are several key components:
Overview:
(PUBLIC INTERNET)
| \
/ \ / \
[RoutingNode] ... [RN] [TunnelingNode] ... [TN]
| \ / | |
| < AMQP > | |
[AddressingNode]-- (VLAN) ... | (VLAN)... (VLAN) --- [AddressingNode]
\ | \ /
/ \ / \ / \ / \
[BridgingNode] ... [BridgingNode]
[NetworkController] ... [NetworkController]
\ /
< AMQP >
|
/ \
[CloudController]...[CloudController]
While this diagram may not make this entirely clear, nodes and controllers communicate exclusively across the message bus (AMQP, currently).
Network State consists of the following facts:
While copies of this state exist in many places (expressed in IPTables rule chains, DHCP hosts files, etc), the controllers rely only on the distributed “fact engine” for state, queried over RPC (currently AMQP). The NetworkController inserts most records into this datastore (allocating addresses, etc) - however, individual nodes update state e.g. when running instances crash.
Public Traffic:
(PUBLIC INTERNET)
|
<NAT> <-- [RoutingNode]
|
[AddressingNode] --> |
( VLAN )
| <-- [BridgingNode]
|
<RUNNING INSTANCE>
The RoutingNode is currently implemented using IPTables rules, which implement both NATing of public IP addresses and the appropriate firewall chains. We are also looking at using Netomata / Clusto to manage NATing within a switch or router, and/or to manage firewall rules within a hardware firewall appliance.
Similarly, the AddressingNode currently manages running dnsmasq instances for DHCP services. However, we could run an internal DHCP server (using Scapy à la Clusto), or even switch to static addressing by inserting the private address into the disk image the same way we insert the SSH keys. (See compute for more details.)