In this session we will dive into KubeVirt and see how to create and manage virtual machines (VMs) in Kubernetes. We will talk about what KubeVirt is and how it works on a Kubernetes platform. KubeVirt allows users to create and manage virtual machines within a Kubernetes cluster.
This session will be covering the following topics:
KubeVirt Installation
Basic KubeVirt objects and components
How to deploy and manage virtual machines
KubeVirt Storage
KubeVirt Networking
Benefits:
Kubernetes is a well-established container platform, but migrating applications and services to containers is not always easy. In such situations KubeVirt makes it possible to move virtual-machine-based workloads to the same platform where the containers are already running, helping converge IT infrastructure onto one single platform: Kubernetes.
OSDC 2019 | KubeVirt: Converge IT infrastructure into one single Kubernetes platform by Kedar Bidarkar
1. KubeVirt: Converge IT Infra into one single k8s platform
Kedar Bidarkar
@kbidarka
Senior Quality Engineer @ Red Hat
2. Agenda
●Why KubeVirt?
●What is KubeVirt?
●Basic KubeVirt objects and components
●Deployment and management of Virtual Machines
●KubeVirt Storage
●KubeVirt Networking
●Q & A
3. Currently
●We have on-premises solutions like OpenStack and oVirt.
●We have public clouds like AWS, GCP and Azure.
●So why KubeVirt, and why VM management all over again?
4. Infrastructure Convergence
Old way... Multiple Workloads - Multiple Stacks
[Diagram: two parallel stacks. VM stack: VM Workload on a VM Platform on an Operating System on Bare Metal. Container stack: Container Workload on Kubernetes on an Operating System on Bare Metal. Each stack needs its own scheduling, storage and network; its own logging, metrics and monitoring; and its own operational knowledge: 2x of everything.]
5. Infrastructure Convergence
KubeVirt way… Multiple Workloads - Single stack
[Diagram: one stack. Container Workload and VM Workload both run on Kubernetes on an Operating System on Bare Metal, sharing scheduling, storage and network; logging, metrics and monitoring; and operational knowledge: 1x of everything.]
6. Infrastructure Convergence
●Environments will coexist over time
–Many new workloads will move to containers.
–But virtualization will remain for the foreseeable future.
●Business reasons (cost, time to market, apps nearing EOL)
●Technical reasons (custom kernels, hard-to-containerize apps)
●A unified infrastructure should be easier to maintain and operate, and should reduce costs.
●Migration path: workloads can move from VMs to containers on the same infrastructure.
●VMs can benefit from Kubernetes concepts (load balancing, rolling deployments, etc.)
7. What is KubeVirt?
KubeVirt is a Kubernetes add-on that enables scheduling of traditional VM workloads side by side with container workloads on Kubernetes.
–https://kubevirt.io/
●Makes use of Custom Resource Definitions (CRDs) and a set of controllers.
–A custom resource is an extension of the k8s API, not available by default with k8s.
●Extends existing k8s clusters by providing a set of virt APIs.
●Works by running libvirt (KVM) in a container.
9. Benefits with KubeVirt
●Drops directly into existing Kubernetes clusters
–No additional host setup required
–Manage VMs like pods
●Enables a transition path where VMs can make use of k8s
–Infra, tools and management
●Hard-to-containerize apps can be deployed on k8s as VMs.
●Lowers the entry barrier for migration: no need to containerize an app before migrating.
●Provides infra convergence and workflow convergence.
11. Components of KubeVirt
●virt-operator: handles install, removal and upgrade of the KubeVirt application.
●virt-api: API server (validation and defaulting of VMs; entry point for all virt flows).
●virt-controller: controller manager (where all the controllers and logic live).
●virt-handler: node daemon, analogous to the kubelet; manages the VMIs which run inside pods, which are in turn managed by the kubelet.
●virt-launcher: provides cgroups and namespaces; one pod is created for every VMI object, and it uses a local libvirt instance.
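Installation (listed in the agenda) is driven by virt-operator. A minimal sketch, assuming cluster-admin access and internet reachability; the VERSION value is a placeholder to substitute with an actual release tag:

```shell
# Deploy virt-operator, then create a KubeVirt custom resource that the
# operator reconciles into the remaining components.
# NOTE: VERSION is a placeholder -- substitute a real KubeVirt release tag.
export VERSION=v0.17.0
kubectl create -f "https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-operator.yaml"
kubectl create -f "https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-cr.yaml"
# Wait until virt-api, virt-controller and virt-handler are up.
kubectl -n kubevirt wait kv kubevirt --for condition=Available --timeout=10m
```

Once the KubeVirt CR reports Available, the cluster accepts VirtualMachine and VirtualMachineInstance objects.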
18. VM management with virtctl
●kubectl is still used for basic VMI operations; the virtctl binary is required for advanced features such as:
–Serial and graphical console access
–Start, stop and restart of VMs
●virtctl is deployed and used from the client side.
–Typical virtctl commands:
●virtctl stop testvm
●virtctl restart testvm
●virtctl console testvm
●virtctl vnc testvm
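The start/stop/restart verbs above operate on a VirtualMachine object, which wraps a VMI template together with a desired running state. A minimal sketch, assuming the v1alpha3 API shown elsewhere in these slides; the name testvm and the disk image are illustrative:

```yaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: testvm
spec:
  running: false            # virtctl start/stop toggles this desired state
  template:
    metadata:
      labels:
        kubevirt.io/vm: testvm
    spec:
      domain:
        resources:
          requests:
            memory: 64M
        devices:
          disks:
          - name: containerdisk
            disk: {}
      volumes:
      - name: containerdisk
        containerDisk:
          image: vmidisks/fedora25:latest   # illustrative image
```

After kubectl create -f on this manifest, virtctl start testvm causes the controller to create a matching VirtualMachineInstance.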
20. containerDisk
●Disks are pulled from a container registry and reside on the local node hosting the VMs.
●They are ephemeral storage devices.
●Push VM disks to a container registry using the KubeVirt base container image kubevirt/container-disk-v1alpha.
Example:
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachineInstance
metadata:
  name: testvmi-containerdisk
spec:
  domain:
    resources:
      requests:
        memory: 64M
    devices:
      disks:
      - name: containerdisk
        disk: {}
  volumes:
  - name: containerdisk
    containerDisk:
      image: vmidisks/fedora25:latest
cat << END > Dockerfile
FROM kubevirt/container-disk-v1alpha
ADD fedora25.qcow2 /disk
END
docker build -t vmidisks/fedora25:latest .
docker push vmidisks/fedora25:latest
21. Containerized Data Importer
●Persistent storage management add-on for k8s.
●Its primary goal is to build VM disks on PVCs for KubeVirt VMs.
●Use cases:
–Import a disk image from a URL into a PVC (HTTP/S3)
–Upload a local disk image to a PVC
–Clone an existing PVC
22. persistentVolumeClaim
●Used when the VMI's disk needs to persist after the VM terminates.
–Suitable whenever persistent storage is required.
●A PV can be in filesystem or block mode.
–Filesystem: the disk image must be named disk.img and placed in the root of the volume.
–Block: for consuming raw block devices (requires the BlockVolume feature gate).
Example:
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachineInstance
metadata:
  name: testvmi-pvc
spec:
  domain:
    resources:
      requests:
        memory: 64M
    devices:
      disks:
      - name: mypvcdisk      # disk name must match the volume name below
        disk: {}
  volumes:
  - name: mypvcdisk
    persistentVolumeClaim:
      claimName: fedora-standard-6g
23. DataVolume
●DataVolume is a custom resource provided by the Containerized Data Importer (CDI) project.
●DataVolumes provide the integration between KubeVirt and CDI: they automate both PVC creation and the import of a VM disk onto the PVC during the VM launch flow.
●The VM is NOT SCHEDULED until the DataVolume reaches its success state.
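A sketch of a DataVolume that imports a disk image over HTTP into a freshly created PVC. The URL, name and storage size are illustrative, and the apiVersion assumes the v1alpha1 CDI API of this era:

```yaml
apiVersion: cdi.kubevirt.io/v1alpha1
kind: DataVolume
metadata:
  name: fedora-dv
spec:
  source:
    http:
      url: http://example.com/images/fedora25.qcow2   # illustrative URL
  pvc:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 6Gi
```

A VMI can then consume the imported disk through a dataVolume volume that references fedora-dv by name.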
26. KubeVirt Networking
●Connecting a VM to networks consists of two parts:
●An interface defines a virtual network interface of a VM (the frontend).
●A network specifies the backend of an interface.
●Each interface must have a corresponding network with the same name.
Example:
kind: VirtualMachineInstance
spec:
  domain:
    devices:
      interfaces:
      - name: default
        bridge: {}
  networks:
  - name: default
    pod: {} # stock pod network
27. KubeVirt Networking
●Virtual machines are connected to the regular pod network.
●From the outside there is no difference between a VM and a pod.
●KubeVirt does not bring additional network plugins.
–But it allows you to utilize existing plugins.
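For example, secondary networks can be attached through the Multus CNI plugin. A hypothetical sketch, assuming Multus is installed and a NetworkAttachmentDefinition named ovs-vlan-100 already exists in the namespace (both names are illustrative):

```yaml
kind: VirtualMachineInstance
spec:
  domain:
    devices:
      interfaces:
      - name: default
        bridge: {}
      - name: vlan100
        bridge: {}
  networks:
  - name: default
    pod: {}                      # stock pod network
  - name: vlan100
    multus:
      networkName: ovs-vlan-100  # hypothetical NetworkAttachmentDefinition
```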
28. Network Interfaces (frontend)
●Describe the properties of virtual interfaces as seen inside the VM instance.
●Each interface should declare its type:
–bridge (default)
–masquerade
–sriov
–slirp (non-production)
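As a sketch, a masquerade interface gives the VM a private address and NATs its traffic behind the pod IP; the ports list below restricts which ports are forwarded, and port 80 is purely illustrative:

```yaml
kind: VirtualMachineInstance
spec:
  domain:
    devices:
      interfaces:
      - name: default
        masquerade: {}   # VM gets a private IP, NATed behind the pod IP
        ports:
        - port: 80       # illustrative forwarded port
  networks:
  - name: default
    pod: {}
```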
30. Other KubeVirt Features
●Live migration:
–Migration of VMs to other compute nodes.
●KubeVirt web UI:
–Extension of the OpenShift Console with a Virtualization view.
–https://github.com/kubevirt/web-ui-operator
●Foreman KubeVirt plugin:
–KubeVirt as a compute resource for Foreman
–https://github.com/theforeman/foreman_kubevirt