Saturday, September 6, 2025

OpenShift Virtualization with KVM on IBM Servers

Introduction

OpenShift Virtualization, built on the upstream KubeVirt project, enables the seamless integration of virtual machines (VMs) into Kubernetes-native environments. It allows organizations to run containerized and traditional VM-based workloads side by side, using the same OpenShift platform. This is especially powerful on IBM infrastructure, including IBM Power, IBM Z, and LinuxONE systems, where enterprise-grade virtualization is a key requirement.

OpenShift Virtualization allows you to run VMs inside a Kubernetes cluster. It uses KVM (Kernel-based Virtual Machine) as the hypervisor and wraps each VM process inside a Kubernetes Pod. The process is orchestrated by several components, including virt-controller, virt-handler, and virt-launcher.

Step 1: Create VM

  • A user or automation tool submits a VirtualMachine (VM) object to the OpenShift API server.
  • This is a Custom Resource Definition (CRD) that describes the desired VM configuration (CPU, memory, disk, etc.).
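
A minimal VirtualMachine manifest of this kind might look like the sketch below; the name, the containerdisk image, and the resource sizes are illustrative assumptions, not values from a specific environment:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: fedora-demo                    # illustrative name
spec:
  running: false                       # create the object now, start the VM later
  template:
    spec:
      domain:
        cpu:
          cores: 2
        memory:
          guest: 4Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest   # example boot image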

Step 2: Create VMI

  • The virt-controller watches for new VM objects.
  • It creates a VirtualMachineInstance (VMI) object, which represents the actual running VM.
  • This VMI is also a CRD and is used to track the VM’s runtime state.

Step 3: Create virt-launcher Pod

  • The virt-controller instructs Kubernetes to create a Pod called virt-launcher.
  • This Pod is responsible for running the VM process.
  • It contains the libvirtd and qemu-kvm binaries needed to start the VM.

Step 4: Signal to Start VM

  • On the node where the Pod is scheduled, the virt-handler (a DaemonSet running on every node) receives a signal.
  • It prepares the environment and communicates with libvirt inside the virt-launcher Pod.

Step 5: Start VM

  • Inside the virt-launcher Pod, libvirtd uses qemu-kvm to start the VM.
  • The VM runs as a process on the host node, isolated inside the Pod.
  • Other helper containers may run alongside the VM in the same Pod for tasks such as networking or monitoring.

Component              Role
---------------------  ------------------------------------------
API Server             Receives VM definitions
virt-controller        Creates the VMI and the virt-launcher Pod
virt-handler           Manages the VM lifecycle on each node
virt-launcher Pod      Runs the VM using libvirt and qemu
libvirtd + qemu-kvm    Actual VM execution
KVM kernel module      Hypervisor that runs the VM

Where the VM Actually Runs

  • The VM runs inside the virt-launcher Pod, but it’s not a container.
  • It’s a process managed by libvirt and qemu, using the KVM hypervisor on the host Linux kernel.

Key Terminologies and Components

  • VirtualMachine (VM) CRD: Defines the VM object.
  • VirtualMachineInstance (VMI) CRD: Represents a running VM instance.
  • virt-controller: Runs on the control plane (master) nodes; watches for new VMIs and creates the corresponding virt-launcher pods.
  • virt-handler: Runs as a DaemonSet on each worker node, manages VM lifecycle.
  • virt-launcher Pod: Encapsulates the VM process using libvirt and qemu-kvm.
  • libvirtd: Embedded in virt-launcher, interfaces with KVM for VM operations.
  • CDI (Containerized Data Importer): Handles disk image imports into PVCs.
  • Multus CNI: Enables multiple network interfaces for VMs.
  • HyperConverged CR: Central configuration point for OpenShift Virtualization.
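
To ground that last item, the HyperConverged CR is a singleton that the operator watches in the openshift-cnv namespace; a minimal sketch, where an empty spec simply accepts the shipped defaults:

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged       # the operator expects this singleton name
  namespace: openshift-cnv
spec: {}                              # empty spec accepts the default configuration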

VM Lifecycle Flow

  1. A user defines a VM via YAML or the web console.
  2. virt-controller creates the VMI and a virt-launcher pod for it.
  3. virt-handler on the target node configures the VM using libvirt.
  4. virt-launcher runs the VM inside the pod.
  5. KVM executes the VM as a process on the host.

Networking in OpenShift Virtualization

OpenShift uses Multus to attach VMs to multiple networks. This is crucial for legacy workloads that require direct Layer 2 access or static IPs.

  • VMs can bypass SDN and connect directly to external networks.
  • SR-IOV, MACVLAN, and bridge interfaces are supported.
  • nmstate operator helps configure physical NICs on worker nodes.
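
As an example, a Linux bridge secondary network is typically exposed to VMs through a Multus NetworkAttachmentDefinition along the lines of the sketch below; the definition name and the bridge name br1 are assumptions about the node setup, and the cnv-bridge type shown is the plugin shipped with OpenShift Virtualization's network addons (upstream clusters may use the plain bridge plugin instead):

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vm-bridge-net                 # illustrative name, referenced from the VM's networks list
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "vm-bridge-net",
      "type": "cnv-bridge",
      "bridge": "br1"
    }

A VM attaches to this network by listing the definition under spec.template.spec.networks (as a multus network) and adding a matching bridge interface under domain.devices.interfaces.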

Storage Integration

VM disks are managed using Kubernetes-native storage constructs:

  • PersistentVolumeClaims (PVCs) and StorageClasses.
  • CDI allows importing disk images via annotations.
  • Integration with OpenShift Data Foundation (Ceph) enables RWX and block storage.

Example PVC with CDI:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fedora-disk0
  annotations:
    cdi.kubevirt.io/storage.import.endpoint: "http://10.0.0.1/images/Fedora.qcow2"
spec:
  storageClassName: ocs-gold
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
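
The same import can also be expressed with a CDI DataVolume, which bundles the source and the target PVC in a single object; a hedged sketch, reusing the endpoint and storage class from the PVC above:

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: fedora-disk0-dv               # illustrative name
spec:
  source:
    http:
      url: "http://10.0.0.1/images/Fedora.qcow2"
  pvc:
    storageClassName: ocs-gold
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 20Gi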
-------------------------

IBM Server Support

OpenShift Virtualization is supported on IBM infrastructure including:

  • IBM Power Virtual Server
  • IBM Z and LinuxONE
  • IBM x86 platforms

Deployment Highlights

  • Uses RHEL KVM as the hypervisor.
  • VMs are configured with static IPs using macvtap bridges.
  • OpenShift clusters run on RHEL CoreOS nodes.
  • Supports live migration, high availability, and secure networking.
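
For example, a live migration of a running VM can be requested declaratively with a VirtualMachineInstanceMigration object; a minimal sketch, where the VMI name is an assumption:

apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: migrate-fedora-demo           # illustrative name
spec:
  vmiName: fedora-demo                # running VMI to move to another schedulable node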

Operator Lifecycle and Control Plane

OpenShift Virtualization is managed via several operators:

  • virt-operator: Deploys and upgrades virtualization components.
  • cdi-operator: Manages disk imports.
  • ssp-operator: Handles VM templates and validation.
  • tekton-tasks-operator: Enables VM creation via pipelines.
  • cluster-network-addons-operator: Manages extended networking.
  • hostpath-provisioner-operator: Provides CSI-based storage provisioning.

Each operator creates resources like DaemonSets, ConfigMaps, and CRDs to manage the virtualization lifecycle.

Best Practices for VM Workloads

  • Use security-hardened images.
  • Monitor resource usage with Prometheus/Grafana.
  • Apply affinity/anti-affinity rules for VM placement (see the sketch after this list).
  • Enable live migration for high availability.
  • Use templates for consistent VM provisioning.
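
To illustrate the affinity/anti-affinity point above, KubeVirt accepts standard Kubernetes affinity rules inside the VM template; the fragment below keeps VMs labelled app: db on separate nodes (the label and its value are illustrative assumptions):

# Fragment of a VirtualMachine spec (under spec.template); labels are illustrative
spec:
  template:
    metadata:
      labels:
        app: db                        # label the anti-affinity rule matches on
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: db
              topologyKey: kubernetes.io/hostname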

-----------------------------------------------------------------------------------------

KubeVirt is a tool that lets you run virtual machines (VMs) inside a Kubernetes cluster. This means you can manage both containers and VMs using the same platform.

KubeVirt is designed using a service-oriented architecture, which means different parts of the system handle different tasks. It also follows a choreography pattern, meaning each component knows its role and works independently without needing a central controller to tell it what to do.

  1. User Request
    A user (or automation tool) sends a request to create a VM using the KubeVirt Virtualization API.

  2. API Talks to Kubernetes
    The Virtualization API communicates with the Kubernetes API Server to schedule the VM.

  3. Kubernetes Handles the Basics
    Kubernetes takes care of:

    • Scheduling: Deciding which node the VM should run on.
    • Networking: Connecting the VM to the network.
    • Storage: Attaching disks or volumes to the VM.
  4. KubeVirt Adds Virtualization
    While Kubernetes handles the infrastructure, KubeVirt provides the virtualization layer. It uses components like:

    • virt-controller: Watches for VM requests and creates VM instances.
    • virt-handler: Manages VM lifecycle on each node.
    • virt-launcher: Runs the actual VM inside a pod.
    • virt-api: Exposes the virtualization API.
  • Kubernetes handles scheduling, networking, and storage.
  • KubeVirt adds the ability to run VMs.
  • Together, they let you run containers and VMs side by side in a cloud-native way.
----------------------------------------------------------------------------
A note on basic terminology if you are a beginner:

 Containers → Pods → ReplicaSets

1. Container
A container is a lightweight, standalone executable package that includes everything needed to run an application (code, runtime, libraries). Examples: Nginx container, Python app container.
2. Pod
A Pod wraps one or more containers: for example, a Pod with a single container running a web server, or a Pod with two containers, one running the app and another running a helper process (like logging or monitoring).
All containers in a Pod share the same network IP, the same storage volumes, and the same lifecycle.
Most Pods contain just one container, but you can have multiple if they need to work closely together.
3. ReplicaSet
A ReplicaSet ensures that a specified number of identical Pods are running at all times.
If a Pod crashes or is deleted, the ReplicaSet creates a new one to maintain the desired count.
It’s usually managed by a Deployment, which adds features like rolling updates.
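
As a concrete (illustrative) example, a minimal ReplicaSet that keeps three Nginx Pods running might look like this; the names and image tag are just examples:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-replicaset                 # illustrative name
spec:
  replicas: 3                          # desired number of identical Pods
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web                       # must match the selector above
    spec:
      containers:
        - name: nginx
          image: nginx:1.25            # example image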
-----------------------------------------------------------------------------

Conclusion

OpenShift Virtualization bridges the gap between traditional VMs and cloud-native containers. On IBM servers, it offers a robust, scalable, and secure platform for hybrid workloads. With KVM as the backbone and Kubernetes as the orchestrator, enterprises can modernize without abandoning legacy applications.
