Virtualization

Compute

Documentation for the compute manager and related files. To read about a specific virtualization backend, see Drivers.

The nova.compute.manager Module

Handles all processes relating to instances (guest vms).

The ComputeManager class is a nova.manager.Manager that handles RPC calls relating to creating instances. It is responsible for building a disk image, launching it via the underlying virtualization driver, responding to calls to check its state, attaching persistent storage, and terminating it.
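
The manager is not normally constructed by hand; the nova-compute service loads it from configuration and wires it to the RPC layer. A rough sketch of that bootstrap, under the assumption that the service framework follows nova.cmd.compute (details vary between releases):

    # Hedged sketch: how the nova-compute binary brings up a
    # ComputeManager through the service framework.
    import sys

    from nova import config, service

    config.parse_args(sys.argv)                            # load nova.conf
    server = service.Service.create(binary='nova-compute')
    service.serve(server)     # calls init_host() and starts RPC listeners
    service.wait()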

class ComputeManager(compute_driver=None, *args, **kwargs)

Bases: nova.manager.Manager

Manages the running instances from creation to destruction.

ComputeManager.SHUTDOWN_RETRY_INTERVAL = 10
ComputeManager.add_aggregate_host(context, *args, **kwargs)

Notify hypervisor of change (for hypervisor pools).

ComputeManager.add_fixed_ip_to_instance(context, *args, **kwargs)

Calls network_api to add a new fixed_ip to the instance, then injects the new network info and resets instance networking.

ComputeManager.attach_interface(context, *args, **kwargs)

Use hotplug to add a network adapter to an instance.

ComputeManager.attach_volume(context, *args, **kwargs)

Attach a volume to an instance.

ComputeManager.backup_instance(context, *args, **kw)

Backup an instance on this host.

Parameters:
  • backup_type – daily | weekly
  • rotation – int representing how many backups to keep around
ComputeManager.build_and_run_instance(context, *args, **kw)
ComputeManager.change_instance_metadata(context, *args, **kwargs)

Update the metadata published to the instance.

ComputeManager.check_can_live_migrate_destination(context, *args, **kw)

Check if it is possible to execute live migration.

This runs checks on the destination host, and then calls back to the source host to check the results.

Parameters:
  • context – security context
  • instance – dict of instance data
  • block_migration – if true, prepare for block migration
  • disk_over_commit – if true, allow disk over commit
Returns:

a dict containing migration info

ComputeManager.check_can_live_migrate_source(context, *args, **kw)

Check if it is possible to execute live migration.

This checks if the live migration can succeed, based on the results from check_can_live_migrate_destination.

Parameters:
  • context – security context
  • instance – dict of instance data
  • dest_check_data – result of check_can_live_migrate_destination
Returns:

a dict containing migration info
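
The two checks operate as a pair: the caller invokes the destination-side check, which validates destination resources and then calls back to the source host so check_can_live_migrate_source can vet the result. A rough sketch of the sequence (dest_manager and source_manager stand in for the RPC proxies; the real orchestration is more involved):

    # Hedged sketch of the two-phase pre-live-migration check.
    dest_check_data = dest_manager.check_can_live_migrate_destination(
        context, instance=instance,
        block_migration=False, disk_over_commit=False)
    # Internally, the destination host calls back to the source:
    #   source_manager.check_can_live_migrate_source(
    #       context, instance=instance, dest_check_data=dest_check_data)
    # and the combined migration info dict is returned to the caller.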

ComputeManager.check_instance_shared_storage(context, *args, **kw)

Check if the instance files are shared.

Parameters:
  • context – security context
  • data – result of driver.check_instance_shared_storage_local

Returns True if the instance’s disks are located on shared storage, and False otherwise.

ComputeManager.cleanup_host()
ComputeManager.confirm_resize(context, *args, **kw)
ComputeManager.detach_interface(context, *args, **kwargs)

Detach a network adapter from an instance.

ComputeManager.detach_volume(context, *args, **kw)

Detach a volume from an instance.

ComputeManager.external_instance_event(context, *args, **kw)
ComputeManager.finish_resize(context, *args, **kw)

Completes the migration process.

Sets up the newly transferred disk and turns on the instance at its new host machine.

ComputeManager.finish_revert_resize(context, *args, **kw)

Finishes the second half of reverting a resize.

Bring the original source instance state back (active/shutoff) and revert the resized attributes in the database.

ComputeManager.get_console_output(*args, **kwargs)
ComputeManager.get_console_pool_info(context, console_type)
ComputeManager.get_console_topic(context)

Retrieves the console host for a project on this host.

Currently this is just set in the flags for each compute host.

ComputeManager.get_diagnostics(context, *args, **kwargs)

Retrieve diagnostics for an instance on this host.

ComputeManager.get_host_uptime(context, *args, **kw)

Returns the result of calling “uptime” on the target host.

ComputeManager.get_rdp_console(context, *args, **kwargs)
ComputeManager.get_spice_console(context, *args, **kwargs)
ComputeManager.get_vnc_console(*args, **kwargs)
ComputeManager.handle_events(event)
ComputeManager.handle_lifecycle_event(event)
ComputeManager.host_maintenance_mode(context, *args, **kw)

Start/Stop host maintenance window. On start, it triggers guest VM evacuation.

ComputeManager.host_power_action(context, *args, **kw)

Reboots, shuts down or powers up the host.

ComputeManager.init_host()

Initialization for a standalone compute service.

ComputeManager.init_virt_events()
ComputeManager.inject_file(context, *args, **kw)

Write a file to the specified path in an instance on this host.

ComputeManager.inject_network_info(context, *args, **kwargs)

Inject network info, but don’t return the info.

ComputeManager.live_migration(context, *args, **kw)

Execute live migration.

Parameters:
  • context – security context
  • instance – instance dict
  • dest – destination host
  • block_migration – if true, prepare for block migration
  • migrate_data – implementation specific params
ComputeManager.pause_instance(context, *args, **kw)

Pause an instance on this host.

ComputeManager.post_live_migration_at_destination(context, *args, **kwargs)

Post operations for live migration.

Parameters:
  • context – security context
  • instance – Instance dict
  • block_migration – if true, prepare for block migration
ComputeManager.pre_live_migration(context, *args, **kwargs)

Preparations for live migration at dest host.

Parameters:
  • context – security context
  • instance – dict of instance data
  • block_migration – if true, prepare for block migration
  • migrate_data – if not None, a dict which holds data required for live migration without shared storage

ComputeManager.pre_start_hook()

After the service is initialized, but before we fully bring the service up by listening on RPC queues, make sure to update our available resources (and indirectly our available nodes).

ComputeManager.prep_resize(context, *args, **kw)

Initiates the process of moving a running instance to another host.

Possibly changes the RAM and disk size in the process.

ComputeManager.reboot_instance(context, *args, **kw)

Reboot an instance on this host.

ComputeManager.rebuild_instance(context, *args, **kwargs)
ComputeManager.refresh_instance_security_rules(context, *args, **kw)

Tell the virtualization driver to refresh security rules for an instance.

Passes straight through to the virtualization driver.

Synchronise the call because we may still be in the middle of creating the instance.

ComputeManager.refresh_provider_fw_rules(context, *args, **kw)

This call passes straight through to the virtualization driver.

ComputeManager.refresh_security_group_members(context, *args, **kw)

Tell the virtualization driver to refresh security group members.

Passes straight through to the virtualization driver.

ComputeManager.refresh_security_group_rules(context, *args, **kw)

Tell the virtualization driver to refresh security group rules.

Passes straight through to the virtualization driver.

ComputeManager.remove_aggregate_host(context, *args, **kwargs)

Removes a host from a physical hypervisor pool.

ComputeManager.remove_fixed_ip_from_instance(context, *args, **kwargs)

Calls network_api to remove an existing fixed_ip from the instance by injecting the altered network info and resetting instance networking.

ComputeManager.remove_volume_connection(context, *args, **kw)

Remove a volume connection using the volume api.

ComputeManager.rescue_instance(context, *args, **kwargs)

Rescue an instance on this host.

Parameters:
  • rescue_password – password to set on rescue instance

ComputeManager.reserve_block_device_name(context, *args, **kwargs)
ComputeManager.reset_network(context, *args, **kwargs)

Reset networking on the given instance.

ComputeManager.resize_instance(context, *args, **kw)

Starts the migration of a running instance to another host.

ComputeManager.restore_instance(context, *args, **kwargs)

Restore a soft-deleted instance on this host.

ComputeManager.resume_instance(context, *args, **kw)

Resume the given suspended instance.

ComputeManager.revert_resize(context, *args, **kw)

Destroys the new instance on the destination machine.

Reverts the model changes, and powers on the old instance on the source machine.

ComputeManager.rollback_live_migration_at_destination(context, *args, **kw)

Clean up the image directory that is created by pre_live_migration.

Parameters:
  • context – security context
  • instance – a nova.objects.instance.Instance object sent over rpc
ComputeManager.run_instance(*args, **kwargs)
ComputeManager.set_admin_password(context, *args, **kwargs)

Set the root/admin password for an instance on this host.

This is generally only called by API password resets after an image has been built.

ComputeManager.set_host_enabled(context, *args, **kw)

Sets the specified host’s ability to accept new instances.

ComputeManager.shelve_instance(context, *args, **kw)

Shelve an instance.

This should be used when you want to take a snapshot of the instance. It also adds system_metadata that can be used by a periodic task to offload the shelved instance after a period of time.

Parameters:
  • context – request context
  • instance – an Instance object
  • image_id – an image id to snapshot to.
ComputeManager.shelve_offload_instance(context, *args, **kw)

Remove a shelved instance from the hypervisor.

This frees up those resources for use by other instances, but may lead to slower unshelve times for this instance. This method is used by volume-backed instances since restoring them doesn’t involve the potentially large download of an image.

Parameters:
  • context – request context
  • instance – nova.objects.instance.Instance
ComputeManager.snapshot_instance(context, *args, **kw)

Snapshot an instance on this host.

Parameters:
  • context – security context
  • instance – a nova.objects.instance.Instance object
  • image_id – glance.db.sqlalchemy.models.Image.Id
ComputeManager.soft_delete_instance(context, *args, **kw)

Soft delete an instance on this host.

ComputeManager.start_instance(context, *args, **kw)

Start an instance on this host.

ComputeManager.stop_instance(context, *args, **kw)

Stop an instance on this host.

ComputeManager.suspend_instance(context, *args, **kw)

Suspend the given instance.

ComputeManager.swap_volume(context, *args, **kw)

Swap volume for an instance.

ComputeManager.target = <Target version=3.23>
ComputeManager.terminate_instance(context, *args, **kw)

Terminate an instance on this host.

ComputeManager.unpause_instance(context, *args, **kw)

Unpause a paused instance on this host.

ComputeManager.unrescue_instance(context, *args, **kwargs)

Unrescue an instance on this host.

ComputeManager.unshelve_instance(context, *args, **kw)

Unshelve the instance.

Parameters:
  • context – request context
  • instance – a nova.objects.instance.Instance object
  • image – an image to build from. If None we assume a volume-backed instance.
  • filter_properties – dict containing limits, retry info etc.
  • node – target compute node
ComputeManager.update_available_resource(context)

See driver.get_available_resource()

Periodic process that keeps the compute host’s understanding of resource availability and usage in sync with the underlying hypervisor.

Parameters:
  • context – security context
ComputeManager.validate_console_port(*args, **kwargs)
ComputeManager.volume_snapshot_create(context, *args, **kwargs)
ComputeManager.volume_snapshot_delete(context, *args, **kwargs)
class ComputeVirtAPI(compute)

Bases: nova.virt.virtapi.VirtAPI

ComputeVirtAPI.agent_build_get_by_triple(context, hypervisor, os, architecture)
ComputeVirtAPI.instance_update(context, instance_uuid, updates)
ComputeVirtAPI.provider_fw_rule_get_all(context)
ComputeVirtAPI.wait_for_instance_event(*args, **kwds)

Plan to wait for some events, run some code, then wait.

This context manager will first create plans to wait for the provided event_names, yield, and then wait for all the scheduled events to complete.

Note that this uses an eventlet.timeout.Timeout to bound the operation, so callers should be prepared to catch that failure and handle that situation appropriately.

If the event is not received by the specified timeout deadline, eventlet.timeout.Timeout is raised.

If the event is received but did not have a ‘completed’ status, a NovaException is raised. If an error_callback is provided, instead of raising an exception as detailed above for the failure case, the callback will be called with the event_name and instance, and can return True to continue waiting for the rest of the events, False to stop processing, or raise an exception which will bubble up to the waiter.

Parameters:
  • instance – the instance for which an event is expected
  • event_names – a list of event names. Each element can be a string event name or a tuple of strings to indicate (name, tag).
  • deadline – maximum number of seconds we should wait for all of the specified events to arrive
  • error_callback – a function to be called if an event arrives
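
A typical use (sketch) wraps the action that will trigger the event, so the wait is registered before the trigger fires; vif_id and network_info here are illustrative:

    # Hedged usage sketch: wait for a network-vif-plugged event while
    # plugging VIFs, bounded by an eventlet timeout.
    with self.virtapi.wait_for_instance_event(
            instance, [('network-vif-plugged', vif_id)], deadline=300):
        self.plug_vifs(instance, network_info)  # action that emits the event
    # Leaving the with-block waits for the event; if it never arrives,
    # eventlet.timeout.Timeout is raised.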

class InstanceEvents

Bases: object

InstanceEvents.clear_events_for_instance(instance)

Remove all pending events for an instance.

This will remove all events currently pending for an instance and return them (indexed by event name).

Parameters:
  • instance – the instance for which events should be purged
Returns:

a dictionary of {event_name: eventlet.event.Event}

InstanceEvents.pop_instance_event(instance, event)

Remove a pending event from the wait list.

This will remove a pending event from the wait list so that it can be used to signal the waiters to wake up.

Parameters:
  • instance – the instance for which the event was generated
  • event – the nova.objects.external_event.InstanceExternalEvent that describes the event
Returns:

the eventlet.event.Event object on which the waiters are blocked

InstanceEvents.prepare_for_instance_event(instance, event_name)

Prepare to receive an event for an instance.

This will register an event for the given instance that we will wait on later. This should be called before initiating whatever action will trigger the event. The resulting eventlet.event.Event object should be wait()’d on to ensure completion.

Parameters:
  • instance – the instance for which the event will be generated
  • event_name – the name of the event we’re expecting
Returns:

an event object that should be wait()’d on
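
The prepare/pop pair is what wait_for_instance_event builds on. A condensed sketch of the flow (trigger_action is a hypothetical helper):

    # Hedged sketch: register interest, start the action, then block.
    events = InstanceEvents()
    event = events.prepare_for_instance_event(instance, 'network-vif-plugged')
    trigger_action(instance)  # whatever causes the external event
    event.wait()              # released once the event is popped and sent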

aggregate_object_compat(function)

Wraps a method that expects a new-world aggregate.

delete_image_on_error(f)

Used for snapshot-related methods to ensure that the image created in compute.api is deleted when an error occurs.

errors_out_migration(f)

Decorator to error out migration on failure.

object_compat(function)

Wraps a method that expects a new-world instance.

This provides compatibility for callers passing old-style dict instances.

reverts_task_state(f)

Decorator to revert task_state on failure.
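
These decorators share a common shape: call the wrapped manager method and react on failure. A minimal sketch of the pattern (not the actual Nova implementation, which does more bookkeeping):

    import functools

    def reverts_task_state(f):
        # Hedged sketch: clear task_state if the wrapped call fails.
        @functools.wraps(f)
        def wrapper(self, context, *args, **kwargs):
            try:
                return f(self, context, *args, **kwargs)
            except Exception:
                instance = kwargs.get('instance')
                if instance is not None:
                    instance.task_state = None
                    instance.save()
                raise
        return wrapper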

wrap_instance_event(f)

Wraps a method to log the event taken on the instance, and its result.

This decorator wraps a method to log the start and result of an event, as part of an action taken on an instance.

wrap_instance_fault(f)

Wraps a method to catch exceptions related to instances.

This decorator wraps a method to catch any instance-related exceptions that may be thrown. It then logs an instance fault in the db.

The nova.virt.connection Module

The nova.compute.disk Module

The nova.virt.images Module

Handling of VM disk images.

convert_image(source, dest, out_format, run_as_root=False)

Convert an image to another format.

fetch(context, image_href, path, _user_id, _project_id, max_size=0)
fetch_to_raw(context, image_href, path, user_id, project_id, max_size=0)
qemu_img_info(path)

Return an object containing the parsed output from qemu-img info.
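
A common pattern combining these helpers, in the spirit of fetch_to_raw, is to inspect a downloaded image and convert it if it is not already raw (paths are illustrative):

    # Hedged sketch: normalize a fetched image to raw format.
    path = '/var/lib/nova/instances/_base/image.part'
    info = qemu_img_info(path)
    if info.file_format != 'raw':
        convert_image(path, path + '.converted', 'raw')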

The nova.compute.flavors Module

Built-in instance properties.

add_flavor_access(flavorid, projectid, ctxt=None)

Add flavor access for project.

create(name, memory, vcpus, root_gb, ephemeral_gb=0, flavorid=None, swap=0, rxtx_factor=1.0, is_public=True)

Creates flavors.
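
For example, a small private flavor might be created and then granted to a single project (assuming create returns the new flavor’s dict; the project id is a placeholder):

    # Hedged sketch: create a non-public flavor and grant one project access.
    flavor = create('m1.tiny', memory=512, vcpus=1, root_gb=1,
                    is_public=False)
    add_flavor_access(flavor['flavorid'], 'my-project-id', ctxt=None)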

delete_flavor_info(metadata, *prefixes)

Delete flavor instance_type information from instance’s system_metadata by prefix.

destroy(name)

Marks flavor as deleted.

extract_flavor(instance, prefix='')

Create an InstanceType-like object from instance’s system_metadata information.

get_all_flavors(ctxt=None, inactive=False, filters=None)

Get all non-deleted flavors as a dict.

Pass inactive=True if you also want deleted flavors returned.

get_all_flavors_sorted_list(ctxt=None, inactive=False, filters=None, sort_key='flavorid', sort_dir='asc', limit=None, marker=None)

Get all non-deleted flavors as a sorted list.

Pass inactive=True if you also want deleted flavors returned.

get_default_flavor()

Get the default flavor.

get_flavor(instance_type_id, ctxt=None, inactive=False)

Retrieves a single flavor by id.

get_flavor_access_by_flavor_id(flavorid, ctxt=None)

Retrieve flavor access list by flavor id.

get_flavor_by_flavor_id(flavorid, ctxt=None, read_deleted='yes')

Retrieve flavor by flavorid.

Raises: FlavorNotFound
get_flavor_by_name(name, ctxt=None)

Retrieves a single flavor by name.

remove_flavor_access(flavorid, projectid, ctxt=None)

Remove flavor access for project.

save_flavor_info(metadata, instance_type, prefix='')

Save properties from instance_type into instance’s system_metadata, in the format of:

[prefix]instance_type_[key]

This can be used to update system_metadata in place from a type, as well as stash information about another instance_type for later use (such as during resize).
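
For instance, stashing the current flavor under an 'old_' prefix during a resize and recovering it later might look like this (sketch):

    # Hedged sketch: stash and recover flavor data in system_metadata.
    metadata = instance['system_metadata']
    save_flavor_info(metadata, instance_type, prefix='old_')
    # metadata now holds keys such as 'old_instance_type_memory_mb',
    # 'old_instance_type_vcpus', ...
    old_flavor = extract_flavor(instance, prefix='old_')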

validate_extra_spec_keys(key_names_list)

The nova.compute.power_state Module

Power state is the state we get by calling the virt driver on a particular domain. The hypervisor is always considered the authority on the status of a particular VM, and the power_state in the DB should be viewed as a snapshot of the VM’s state in the (recent) past. It can be periodically updated, and should also be updated at the end of a task if the task is supposed to affect power_state.
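
A periodic task following this model treats the driver as authoritative and refreshes the DB snapshot, roughly like this (sketch; assumes get_info returns a dict with a 'state' key, as in this era of the driver API):

    # Hedged sketch: reconcile the DB's power_state snapshot with the
    # hypervisor's authoritative view.
    vm_power_state = driver.get_info(instance)['state']
    if instance.power_state != vm_power_state:
        instance.power_state = vm_power_state
        instance.save()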

Drivers

The nova.virt.libvirt_conn Driver

The nova.virt.xenapi Driver

xenapi – Nova support for XenServer and XCP through XenAPI

The nova.virt.fake Driver

A fake (in-memory) hypervisor+api.

Allows Nova testing without a hypervisor. This module also documents the semantics of real hypervisor connections.
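
Because all state is kept in memory, the driver can be exercised directly in a test with no hypervisor setup, e.g.:

    # Hedged sketch: drive the fake hypervisor directly.
    from nova.virt.fake import FakeDriver, FakeVirtAPI

    driver = FakeDriver(FakeVirtAPI())
    driver.init_host('test-host')
    assert driver.list_instances() == []  # nothing spawned yet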

class FakeDriver(virtapi, read_only=False)

Bases: nova.virt.driver.ComputeDriver

Fake hypervisor driver.

FakeDriver.attach_interface(instance, image_meta, vif)
FakeDriver.attach_volume(context, connection_info, instance, mountpoint, disk_bus=None, device_type=None, encryption=None)

Attach the disk to the instance at mountpoint using info.

FakeDriver.block_stats(instance_name, disk_id)
FakeDriver.capabilities = {'supports_recreate': True, 'has_imagecache': True}

FakeDriver.check_can_live_migrate_destination(ctxt, instance_ref, src_compute_info, dst_compute_info, block_migration=False, disk_over_commit=False)
FakeDriver.check_can_live_migrate_destination_cleanup(ctxt, dest_check_data)
FakeDriver.check_can_live_migrate_source(ctxt, instance_ref, dest_check_data, block_device_info=None)
FakeDriver.cleanup(context, instance, network_info, block_device_info=None, destroy_disks=True)
FakeDriver.confirm_migration(migration, instance, network_info)
FakeDriver.destroy(context, instance, network_info, block_device_info=None, destroy_disks=True)
FakeDriver.detach_interface(instance, vif)
FakeDriver.detach_volume(connection_info, instance, mountpoint, encryption=None)

Detach the disk attached to the instance.

FakeDriver.ensure_filtering_rules_for_instance(instance_ref, network_info)
FakeDriver.finish_migration(context, migration, instance, disk_info, network_info, image_meta, resize_instance, block_device_info=None, power_on=True)
FakeDriver.finish_revert_migration(context, instance, network_info, block_device_info=None, power_on=True)
FakeDriver.get_all_bw_counters(instances)

Return bandwidth usage counters for each interface on each running VM.

FakeDriver.get_all_volume_usage(context, compute_host_bdms)

Return usage info for volumes attached to VMs on a given host.

FakeDriver.get_available_nodes(refresh=False)
FakeDriver.get_available_resource(nodename)

Updates compute manager resource info in the ComputeNode table.

Since we don’t have a real hypervisor, pretend we have lots of disk and ram.

FakeDriver.get_console_output(context, instance)
FakeDriver.get_console_pool_info(console_type)
FakeDriver.get_diagnostics(instance_name)
FakeDriver.get_disk_available_least()
FakeDriver.get_host_cpu_stats()
static FakeDriver.get_host_ip_addr()
FakeDriver.get_host_stats(refresh=False)

Return fake Host Status of ram, disk, network.

FakeDriver.get_info(instance)
FakeDriver.get_instance_disk_info(instance_name)
FakeDriver.get_rdp_console(context, instance)
FakeDriver.get_spice_console(context, instance)
FakeDriver.get_vnc_console(context, instance)
FakeDriver.get_volume_connector(instance)
FakeDriver.host_maintenance_mode(host, mode)

Start/Stop host maintenance window. On start, it triggers guest VM evacuation.

FakeDriver.host_power_action(host, action)

Reboots, shuts down or powers up the host.

FakeDriver.init_host(host)
FakeDriver.inject_file(instance, b64_path, b64_contents)
FakeDriver.instance_on_disk(instance)
FakeDriver.interface_stats(instance_name, iface_id)
FakeDriver.list_instance_uuids()
FakeDriver.list_instances()
FakeDriver.live_migration(context, instance_ref, dest, post_method, recover_method, block_migration=False, migrate_data=None)
FakeDriver.migrate_disk_and_power_off(context, instance, dest, flavor, network_info, block_device_info=None)
FakeDriver.pause(instance)
FakeDriver.plug_vifs(instance, network_info)

Plug VIFs into networks.

FakeDriver.poll_rebooting_instances(timeout, instances)
FakeDriver.post_live_migration_at_destination(context, instance, network_info, block_migration=False, block_device_info=None)
FakeDriver.power_off(instance, shutdown_timeout=0, shutdown_attempts=0)
FakeDriver.power_on(context, instance, network_info, block_device_info)
FakeDriver.pre_live_migration(context, instance_ref, block_device_info, network_info, disk, migrate_data=None)
FakeDriver.reboot(context, instance, network_info, reboot_type, block_device_info=None, bad_volumes_callback=None)
FakeDriver.refresh_instance_security_rules(instance)
FakeDriver.refresh_provider_fw_rules()
FakeDriver.refresh_security_group_members(security_group_id)
FakeDriver.refresh_security_group_rules(security_group_id)
FakeDriver.rescue(context, instance, network_info, image_meta, rescue_password)
FakeDriver.restore(instance)
FakeDriver.resume(context, instance, network_info, block_device_info=None)
FakeDriver.resume_state_on_host_boot(context, instance, network_info, block_device_info=None)
FakeDriver.set_admin_password(instance, new_pass)
FakeDriver.set_host_enabled(host, enabled)

Sets the specified host’s ability to accept new instances.

FakeDriver.snapshot(context, instance, name, update_task_state)
FakeDriver.soft_delete(instance)
FakeDriver.spawn(context, instance, image_meta, injected_files, admin_password, network_info=None, block_device_info=None)
FakeDriver.suspend(instance)
FakeDriver.swap_volume(old_connection_info, new_connection_info, instance, mountpoint)

Replace the disk attached to the instance.

FakeDriver.test_remove_vm(instance_name)

Removes the named VM, as if it crashed. For testing.

FakeDriver.unfilter_instance(instance_ref, network_info)
FakeDriver.unpause(instance)
FakeDriver.unplug_vifs(instance, network_info)

Unplug VIFs from networks.

FakeDriver.unrescue(instance, network_info)
class FakeInstance(name, state)

Bases: object

class FakeVirtAPI

Bases: nova.virt.virtapi.VirtAPI

FakeVirtAPI.agent_build_get_by_triple(context, hypervisor, os, architecture)
FakeVirtAPI.instance_update(context, instance_uuid, updates)
FakeVirtAPI.provider_fw_rule_get_all(context)
FakeVirtAPI.wait_for_instance_event(*args, **kwds)
restore_nodes()

Resets FakeDriver’s node list modified by set_nodes().

Usually called from tearDown().

set_nodes(nodes)

Sets FakeDriver’s node list.

It affects the following methods: get_available_nodes(), get_available_resource(), and get_host_stats().

To restore the change, call restore_nodes().
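
In a test this typically pairs with a cleanup hook (sketch):

    # Hedged usage sketch inside a test case.
    set_nodes(['node1', 'node2'])
    self.addCleanup(restore_nodes)  # or call restore_nodes() in tearDown()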

Tests

The compute_unittest Module

The virt_unittest Module