File kvm-supported.txt of Package kvm
KVM SUPPORT STATUS FOR SLES 11 SP1
SLES 11 GA included kvm as a technology preview. Kvm has matured during the
SLE 11 SP1 development time frame to the point of being supportable for select
guests and virtualization features. This document provides information about
kvm supportability for use by the customer support team, quality engineering,
end users, and other interested parties.
Kvm consists of two main components:
* A set of kernel modules (kvm.ko, kvm-intel.ko, and kvm-amd.ko) that provides the core
virtualization infrastructure and processor specific drivers.
* A userspace program (qemu-kvm) that provides emulation for virtual devices and
a control interface for managing virtual machines
The term kvm properly refers to the kernel-level virtualization functionality, but
in practice is more commonly used to refer to the userspace component.
Originally the kvm package also provided the kvm kernel modules, but these modules are
included with the kernel in SP1, and only userspace components are included
in the current kvm package.
KVM Host Status
The qemu-kvm version currently included in SLES 11 SP1 is 0.12.5. In addition to the
qemu-kvm program, the kvm package provides a monitoring utility, firmware components,
key-mapping files, scripts, and Windows drivers. These components, along with the kvm
kernel modules, are the focus of this support document.
Interoperability with other virtualization tools has been tested and is an essential
part of Novell's support stance. These tools include: virt-manager, vm-install,
qemu-img, virt-viewer, and the libvirt daemon and shell.
KVM Host Configuration
KVM supports a number of different architectures, but only x86_64 hosts are supported.
KVM is designed around the hardware virtualization features included in both AMD (AMD-V)
and Intel (VT-x) CPUs produced within the past few years, as well as other virtualization
features in even more recent PC chipsets and PCI devices, such as IOMMU and SR-IOV.
The following websites identify processors which support hardware virtualization:
The kvm kernel modules will not load if the basic hardware virtualization features are not
present or are not enabled in the BIOS. Qemu-kvm can run guests without the kvm kernel
modules loaded, but we do not support this mode of operation.
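Whether these prerequisites are met can be checked from the shell. The following is a
minimal sketch, assuming an Intel host (use kvm-amd instead of kvm-intel on AMD hardware):

```shell
# Check whether the CPU advertises hardware virtualization
# (vmx = Intel VT-x, svm = AMD-V); a count of 0 means the
# feature is absent or disabled in the BIOS.
egrep -c '(vmx|svm)' /proc/cpuinfo

# Load the processor-specific module; the generic kvm module
# is pulled in automatically as a dependency.
modprobe kvm-intel    # use kvm-amd on AMD hosts

# Verify that the modules are loaded.
lsmod | grep '^kvm'
```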
Kvm allows both memory and disk space overcommit. It is up to the user to understand
the implications of doing so, however, as hard errors resulting from actually exceeding
available resources will cause guest failures. Cpu overcommit is also supported,
but carries performance implications.
Guest Hardware Details
The following table lists guest operating systems tested, and the support status:
All guest OSs listed include both 32- and 64-bit x86 versions. For a supportable
configuration, the same minimum memory requirements as for a physical installation are assumed.
Most guests require some additional support for accurate timekeeping. Where available,
kvm-clock is to be used. NTP or similar network-based timekeeping protocols are also highly
recommended (in the host as well as the guest) to help maintain stable time. When using
kvm-clock, running NTP inside the guest is not recommended.
Be aware that guest NICs which don't have an explicit MAC address specified (on the
qemu-kvm command line) will be assigned a default MAC address, resulting in networking
problems if more than one such instance is visible on the same network segment.
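One way to avoid the default-address collision is to generate a unique address per guest
and pass it explicitly. This is a sketch; the 52:54:00 prefix is the locally administered
range conventionally used for KVM guests:

```shell
# Generate a locally administered, unicast MAC address so that
# each guest NIC is unique on the network segment.
MAC=$(printf '52:54:00:%02x:%02x:%02x' \
      $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256)))
echo "$MAC"

# The address is then given explicitly on the command line, e.g.:
# qemu-kvm ... -net nic,model=virtio,macaddr=$MAC -net tap ...
```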
Guest images created under pre-SLES 11 SP1 kvm are not assumed to be compatible.
Guest OS        Virt Type   PV Drivers Available    Support Status    Notes
SLES11 SP1      FV          kvm-clock               Fully Supported
SLES10 SP3      FV          kvm-clock               Fully Supported
SLES9 SP4       FV                                  Fully Supported   32-bit kernel: specify clock=pmtmr
                                                                      on the linux boot line;
                                                                      64-bit kernel: specify ignore_lost_ticks
                                                                      on the linux boot line
SLED11 SP1      FV          kvm-clock               Tech Preview
RHEL 4.x, 5.x   FV          Yes (see the Red Hat    Best Effort       See footnote
                            website for details)
Win XP SP3+     FV          virtio-net,             Best Effort
Win 2K3 SP2+                virtio-blk,
Win 2K8+                    virtio-balloon
Win Vista SP1+
Footnote: Refer to http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.4/html/Virtualization_Guide/chap-Virtualization-KVM_guest_timing_management.html
The following limits have been tested, and are supported:
Host RAM and CPU           Same with kvm modules loaded as without; refer to the
                           SLES Release Notes for specifics
Guest RAM size             512 GB
Virtual CPUs per guest     16
NICs per guest             8
Block devices per guest    4 emulated, 20 para-virtual (virtio-blk)
Maximum number of guests   Limited by the total number of vcpus in all guests being
                           no greater than 8 times the number of cpu cores in the host
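The guest-count guideline above can be checked mechanically. A small sketch (core count
is read from /proc/cpuinfo; on hyperthreaded hosts this counts logical processors):

```shell
# Guideline: total vcpus across all guests should not exceed
# 8 times the number of host cpu cores.
cores=$(grep -c '^processor' /proc/cpuinfo)
max_vcpus=$((cores * 8))
echo "vcpu budget for this host: $max_vcpus"
```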
General KVM Features
- vm-install interoperability
Define and Install guest via vm-install
This includes specifying RAM, disk type and location, video type, keyboard mapping,
NIC type, binding and mac address, and boot method
Restrictions: Raw disk format only, Realtek or virtio NICs only
- virt-manager interoperability
Manage guests via virt-manager
This includes autostart, start, stop, restart, pause, unpause, save, restore,
clone, migrate, special key sequence insertion, guest console viewers,
performance monitoring, cpu pinning, and static modification of cpu, RAM,
boot method, disk, nic, mouse, display, video and host PCI assignments
Restrictions: No sound devices, vmvga (vmware), xen video, or USB physical devices added
- virsh interoperability
Manage guests via virsh
Guest XML descriptions are as created by vm-install/virt-manager
Restrictions: Only "read only" and vm lifecycle functions supported
- Direct qemu-kvm invocation
Manage guests via direct invocation of qemu-kvm. Using virt-manager is generally
preferred, but qemu-kvm may be invoked directly for greater flexibility
Restrictions: See restrictions in Appendix A
- Live migration
Migration of guests between hosts
Restrictions: Source and target machines are essentially the same type,
guest storage is accessible from both machines (shared), guest timekeeping
is properly controlled, compatible guest "definition" on source and target,
no physical devices in guest.
- Kernel samepage merging
A host kernel with kernel-samepage-merging enabled allows automatic sharing of identical
memory pages between guests, freeing some host memory.
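KSM is controlled through sysfs on kernels of this vintage. A sketch, assuming root and a
kernel built with KSM support (as shipped with SLES 11 SP1):

```shell
# Enable kernel samepage merging; the KSM daemon then scans
# for identical pages and merges them copy-on-write.
echo 1 > /sys/kernel/mm/ksm/run

# Observe how many pages are currently being shared.
cat /sys/kernel/mm/ksm/pages_sharing
```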
- PCI Passthrough
An AMD IOMMU or Intel VT-d is required for this feature. The respective feature needs to be
enabled in the BIOS. For VT-d, passing the kernel parameter "intel_iommu=on" is mandatory.
Many PCIe cards from major vendors should be supportable. Refer to systems level
certifications for specific details, or contact the vendor in question for support statements.
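The assignment itself can be sketched as follows. The PCI address 00:19.0 and the
vendor:device ID 8086 10f5 are placeholders for illustration (obtain the real values
from lspci and lspci -n); the -pcidevice option is the device-assignment syntax used
by qemu-kvm of this version:

```shell
# 1. Confirm the IOMMU is active; on Intel this requires the
#    intel_iommu=on kernel parameter at host boot.
dmesg | grep -i -e DMAR -e IOMMU

# 2. Detach the device from its host driver and hand it to the
#    pci-stub driver (IDs and address are examples).
echo "8086 10f5" > /sys/bus/pci/drivers/pci-stub/new_id
echo 0000:00:19.0 > /sys/bus/pci/devices/0000:00:19.0/driver/unbind
echo 0000:00:19.0 > /sys/bus/pci/drivers/pci-stub/bind

# 3. Pass the device to the guest on the qemu-kvm command line:
# qemu-kvm ... -pcidevice host=00:19.0 ...
```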
- Memory ballooning
Dynamically changing the amount of memory allocated to a guest
Restrictions: This requires a balloon driver operating in the guest.
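Ballooning is driven through the qemu-kvm monitor. A sketch using a monitor socket
and socat (socat is one convenient client, not part of the kvm package; paths and
sizes are examples):

```shell
# Start a guest with a balloon device and a monitor socket.
qemu-kvm -m 1024 -device virtio-balloon-pci \
         -monitor unix:/tmp/guest-mon,server,nowait \
         -drive file=/var/lib/kvm/images/guest.raw,if=virtio,format=raw &

# Later, shrink the guest's memory allocation to 512 MB via
# the supported "balloon" monitor command.
echo "balloon 512" | socat - UNIX-CONNECT:/tmp/guest-mon
```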
Other KVM Features
- Hotplug cpu
Dynamically changing the number of vcpus assigned to the guest
This is not supported.
- Hotplug devices
Dynamically adding or removing devices in the guest.
This is not supported.
- user for kvm
Which users may invoke qemu-kvm or the management tools (which in turn invoke qemu-kvm)
The user must be root when using the SUSE-included management tools. Otherwise, the
user must be in the kvm group.
- Host Suspend/Hibernate
Suspending/hibernating host with KVM installed or with guests running
Suspending or hibernating the host with guests running is not supported.
Merely having kvm installed, however, is supported.
- Power Management
Changing power states in the host while guests are running
A properly functioning constant_tsc machine is required.
- Support for KVM on NUMA machines
NUMA machines are supported. Using numactl to pin qemu-kvm processes to specific nodes is recommended.
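Pinning as recommended above could look like the following sketch (node number and
image path are examples; inspect the topology first with numactl --hardware):

```shell
# Keep the guest's vcpus and memory allocations local to
# NUMA node 0.
numactl --cpunodebind=0 --membind=0 \
    qemu-kvm -m 2048 -smp 2 \
        -drive file=/var/lib/kvm/images/guest.raw,if=virtio,format=raw
```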
- Kvm module parameters
Specifying parameters for the kvm kernel modules
This is not supported unless done under the direction of Novell support personnel.
- Qemu only mode
Qemu-kvm can be used in non-kvm mode, where the guest cpu instructions are emulated instead of being executed
directly by the processor.
This mode is enabled by using the -no-kvm parameter.
This mode is not supported, but may be useful for problem resolution.
Our goal in providing kvm virtualization is to allow workloads designed for
physical installations to be virtualized and thus inherit the benefits of modern
virtualization techniques. Virtualizing a workload involves some trade-offs: a
slight to moderate performance impact, and the need to stage the workload to
verify its behavior in a virtualized environment (esoteric software and rare,
but possible, corner cases can behave differently). Although every reasonable
effort is made to provide a broad virtualization solution to meet disparate needs,
there will be cases where the workload itself is unsuited for kvm virtualization.
In these cases creating an L3 incident would be inappropriate.
We therefore propose the following performance expectations for guests, to be
used as a guideline in determining whether a reported performance issue should be
investigated as a bug (values given are rough approximations at this point - more
validation is required):
Category          Fully Virtualized                      Paravirtualized   Host-Passthrough
CPU, MMU          7%                                     not applicable    97%
                  (QEMU emulation (unsupported))                           (Hardware Virt. + EPT/NPT)
                  (Hardware Virt. + Shadow Pagetables)
Network I/O       20%                                    75%               95%
(1Gb LAN)         (Realtek emulated NIC)                 (Virtio NIC)
Disk I/O          40%                                    85%               95%
                  (IDE emulation)                        (Virtio block)
Graphics          50%                                    not applicable
(non-accelerated) (VGA or Cirrus)
Time accuracy     95% - 105%                             100%              not applicable
                  (worst case, using kvm-clock
                  and before ntp assist),
                  where 100% = accurate timekeeping,
                  150% = time runs fast by 50%, etc.
Percentage values are a comparison of performance achieved with the same workload
under non-virtualized conditions. Novell does not guarantee performance numbers.
To improve the performance of the guest OS, paravirtualized drivers are provided
where available. It is recommended that they be used, but they are not generally
required for support.
One of the more difficult aspects of virtualization is correct timekeeping, and we
are still evaluating proposed guidelines for the best configuration to achieve that
goal. As mentioned previously, if a pv timekeeping option is available (eg: kvm-clock),
it should be used.
The memory ballooning driver is provided to help manage memory resources among
competing demands. Management tools, for example, can take advantage of this feature.
The following qemu-kvm command line options are supported:
NOTE: only raw disk images (file), logical volumes, physical partitions or physical disks may be used.
(iSCSI and various virt-manager managed storage solutions are included.)
-cpu [?|qemu64]
-drive ... (only if if=[ide|floppy|virtio], format=raw, and snapshot=off are specified)
-device [isa-serial|isa-parallel|isa-fdc|ide-drive|VGA|cirrus-vga|rtl8139|virtio-net-pci|virtio-blk-pci|virtio-balloon-pci] ...
-net [nic|user|tap|none] ... (for model= only rtl8139 and virtio are supported)
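A guest invocation that stays within the supported option set above might look like
this sketch (image path and MAC address are examples; -m and -smp set the memory size
in MB and the vcpu count):

```shell
qemu-kvm -m 1024 -smp 2 \
    -drive file=/var/lib/kvm/images/sles11.raw,if=virtio,format=raw,snapshot=off \
    -net nic,model=virtio,macaddr=52:54:00:12:34:56 \
    -net tap
```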
The following qemu-kvm monitor commands are supported:
change device ...
balloon target ...
Conversely, the following qemu-kvm command line options are not supported:
-drive ... with if=[scsi|mtd|pflash], snapshot=on, or format=[anything besides raw]
-device driver ... (where driver is not one of the supported devices listed above)
-net socket ...
-net dump ...
The following qemu-kvm monitor commands are not supported: