Getting started
vNode is a multi-tenancy container runtime that provides strong isolation between workloads using Linux user namespaces and seccomp filters. It runs in Kubernetes environments and supports privileged workloads such as Docker-in-Docker and Kubernetes-in-Kubernetes. vNode ensures every container runs as a non-root user inside a sandbox. For more details, see the vNode architecture.
vNode uses Linux kernel features to isolate containers. It does not rely on virtualization technologies such as KVM or Hyper-V, and it runs containers at native bare-metal speed with minimal overhead.
Benefits of using vNode include:
- Strong isolation between workloads: vNode uses Linux user namespaces and seccomp filters to separate workloads. It enforces boundaries without relying on a hypervisor.
- Rootless by default: Containers run as non-root users inside a sandbox. This reduces the risk of privilege escalation and meets Kubernetes security best practices. No extra setup is required.
- Safe use of privileged features: vNode supports features like hostPID, hostNetwork, hostPaths, and Docker-in-Docker. These features run securely without exposing the host or other containers.
- Fast performance with low overhead: vNode does not use virtualization technologies like KVM or Hyper-V. Containers run directly on the host kernel at near-native speeds.
- Works across Kubernetes platforms: vNode runs on EKS, GKE, AKS, and self-managed clusters. It integrates using the standard RuntimeClass resource.
- Simple and scalable: vNode avoids the complexity of virtual machines. It scales easily and works with GitOps tools like Helm and Argo CD.
Before you begin
To deploy vNode, ensure you have the following:
- An existing vCluster platform installation.
- A license plan that includes vNode. To check your plan, navigate to Admin → License and Billing in the platform. If your plan doesn't include vNode, contact support@loft.sh.
- A valid platform access key.
- A release version from vNode releases.
Set your environment variables
# Platform host
PLATFORM_HOST=https://platform-your-domain.com
# Platform access key
PLATFORM_ACCESS_KEY=your-access-key
Install vNode
You can install and deploy vNode on one of several Kubernetes distributions. Before starting the installation process, ensure that:
- Helm is installed on your local machine.
- Your cluster uses containerd as its container runtime.
Kubernetes in Docker (kind)
Kubernetes in Docker (kind) enables you to run vNode locally by creating Kubernetes clusters inside Docker containers, which works well for development and testing environments.
Create a kind cluster
Create a local kind cluster:
kind create cluster
This command creates a single-node Kubernetes cluster with default configurations. For more complex setups, consider using a configuration file with the --config flag.
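For example, a minimal configuration file (saved here as kind-config.yaml, an illustrative name) that adds a worker node alongside the default control-plane node might look like this:
# kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
You can then create the cluster from it with kind create cluster --config kind-config.yaml.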
Deploy vNode
After your kind cluster is running, deploy vNode using Helm:
# Deploy the Helm chart
helm upgrade --install vnode-runtime vnode-runtime -n vnode-runtime \
--repo https://charts.loft.sh --create-namespace \
--set "config.platform.host=$PLATFORM_HOST" \
--set "config.platform.accessKey=$PLATFORM_ACCESS_KEY"
# Optionally, set the following only if your platform uses a self-signed certificate
# --set "config.platform.insecure=true"
This installs vNode in its own namespace and connects it to your platform using the provided host and access key.
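To verify the deployment, you can check that the runtime pods are running and, once the runtime has registered itself, that the vnode RuntimeClass used later in this guide exists:
# Check the vNode runtime pods
kubectl get pods -n vnode-runtime
# Check that the vnode RuntimeClass is registered
kubectl get runtimeclass vnode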
Amazon Elastic Kubernetes Service (EKS)
Amazon Elastic Kubernetes Service (EKS) provides a managed Kubernetes service on AWS, which allows you to deploy and manage vNode without installing or operating your own control plane or nodes.
Configure and create an EKS cluster
Use the following to set your cluster configuration variables:
Set EKS_AMI_FAMILY=AmazonLinux2023 to ensure your cluster runs kernel 6.1 or later, which vNode requires. Standard AmazonLinux AMIs use older, incompatible kernels.
# EKS settings
EKS_CLUSTER_NAME=vnode-runtime-test
EKS_REGION=eu-west-1
EKS_NUM_NODES=1
EKS_VERSION=1.30
EKS_MACHINE_TYPE=t3.xlarge
EKS_AMI_FAMILY=AmazonLinux2023
Create the EKS cluster with the eksctl command:
eksctl create cluster \
--name $EKS_CLUSTER_NAME \
--version $EKS_VERSION \
--region $EKS_REGION \
--node-type $EKS_MACHINE_TYPE \
--node-ami-family $EKS_AMI_FAMILY \
--nodes $EKS_NUM_NODES \
--managed
This process might take 15-20 minutes to complete as AWS provisions the required resources.
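Once the nodes are ready, you can confirm that they run kernel 6.1 or later by checking the KERNEL-VERSION column in the node listing:
# Verify the node kernel version (should be 6.1 or later)
kubectl get nodes -o wide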
Deploy vNode
After the cluster is ready and your kubectl context is set to the new cluster, deploy vNode:
# Deploy the Helm chart
helm upgrade vnode-runtime vnode-runtime -n vnode-runtime \
--repo https://charts.loft.sh --install --create-namespace \
--set "config.platform.host=$PLATFORM_HOST" \
--set "config.platform.accessKey=$PLATFORM_ACCESS_KEY"
# Optionally, set the following only if your platform uses a self-signed certificate
# --set "config.platform.insecure=true"
The Helm chart creates all necessary resources including deployments, services, and RBAC permissions for vNode to function properly in your EKS environment.
Google Kubernetes Engine (GKE)
Google Kubernetes Engine (GKE) is a managed Kubernetes platform on Google Cloud that helps you deploy and scale vNode with integrated Google services.
Configure and create a GKE cluster
Define your GKE cluster configuration:
# GKE settings
GKE_CLUSTER_NAME=vnode-runtime-test
GKE_ZONE=europe-west2-a
GKE_MACHINE_TYPE=n4-standard-4
GKE_NUM_NODES=1
These settings define a basic single-node cluster. The n4-standard-4 machine type provides sufficient resources for most vNode runtime workloads. You can adjust these settings based on your specific requirements.
Create the GKE cluster using the gcloud CLI:
# Create the GKE cluster
gcloud container clusters create $GKE_CLUSTER_NAME \
--zone $GKE_ZONE \
--machine-type $GKE_MACHINE_TYPE \
--num-nodes $GKE_NUM_NODES \
--release-channel "regular"
The "regular" release channel ensures you get an up-to-date Kubernetes version with tested stability.
After cluster creation, configure kubectl to use your new cluster:
# Fetch cluster credentials for kubectl
gcloud container clusters get-credentials $GKE_CLUSTER_NAME --zone $GKE_ZONE
Deploy vNode
After your GKE cluster is ready and kubectl is configured, deploy vNode:
# Deploy the Helm chart
helm upgrade vnode-runtime vnode-runtime -n vnode-runtime \
--repo https://charts.loft.sh --install --create-namespace \
--set "config.platform.host=$PLATFORM_HOST" \
--set "config.platform.accessKey=$PLATFORM_ACCESS_KEY"
# Optionally, set the following only if your platform uses a self-signed certificate
# --set "config.platform.insecure=true"
This deploys all necessary components for vNode runtime operation on your GKE cluster.
Azure Kubernetes Service (AKS)
Azure Kubernetes Service (AKS) provides a managed Kubernetes environment on Microsoft Azure for deploying vNode and connecting to Azure services.
Configure and create an AKS Cluster
Define your AKS cluster parameters:
vNode requires kernel 6.1 or later. On AKS, use Kubernetes version 1.32.0 or later with AzureLinux to get this kernel version.
AKS_RESOURCE_GROUP=my-resource-group
AKS_CLUSTER_NAME=my-cluster
AKS_NODE_COUNT=1
AKS_MACHINE_TYPE=Standard_D4s_v6
AKS_LOCATION=westeurope
AKS_KUBERNETES_VERSION=1.32.0
There are some important considerations for these settings:
- The resource group must exist before creating the cluster.
- Standard_D4s_v6 provides a good balance of CPU and memory for most workloads.
- Kubernetes version 1.32.0 or later is required to get access to Azure Linux with kernel 6.1 or later.
- The --os-sku AzureLinux flag is critical for vNode compatibility.
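If the resource group does not exist yet, you can create it with the Azure CLI before creating the cluster:
# Create the resource group (skip if it already exists)
az group create --name $AKS_RESOURCE_GROUP --location $AKS_LOCATION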
Create the AKS cluster with the Azure CLI:
# Create the AKS cluster
az aks create --yes \
--resource-group $AKS_RESOURCE_GROUP \
--name $AKS_CLUSTER_NAME \
--node-count $AKS_NODE_COUNT \
--node-vm-size $AKS_MACHINE_TYPE \
--location $AKS_LOCATION \
--kubernetes-version $AKS_KUBERNETES_VERSION \
--os-sku AzureLinux \
--generate-ssh-keys
After creating the cluster, configure kubectl to access your new cluster:
# Get the credentials for the AKS cluster
az aks get-credentials --overwrite-existing --resource-group $AKS_RESOURCE_GROUP --name $AKS_CLUSTER_NAME
Deploy vNode
After your AKS cluster is ready and kubectl configured, deploy vNode:
# Deploy the Helm chart
helm upgrade vnode-runtime vnode-runtime -n vnode-runtime \
--repo https://charts.loft.sh --install --create-namespace \
--set "config.platform.host=$PLATFORM_HOST" \
--set "config.platform.accessKey=$PLATFORM_ACCESS_KEY"
# Optionally, set the following only if your platform uses a self-signed certificate
# --set "config.platform.insecure=true"
The deployment creates all necessary components and configures them to work with the Azure networking, storage, and security infrastructure.
Other distributions
Many other Kubernetes distributions can run vNode successfully. The following distributions typically work with vNode:
- K3s - Lightweight Kubernetes for resource-constrained environments
- K3d - K3s in Docker, useful for local development
- Rancher RKE - Rancher Kubernetes Engine
- Custom self-managed Kubernetes installations
Not all Kubernetes distributions are compatible with vNode. The vNode runtime supports only containerd as the container runtime, so Docker Desktop and Orbstack Kubernetes, which use cri-dockerd, do not work. OpenShift with cri-o is also currently not supported.
Before installing vNode on any distribution not explicitly mentioned in this documentation, verify that:
- The distribution uses containerd as its container runtime.
- The kernel version is 6.1 or later.
- The Kubernetes version is 1.24 or later.
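You can check the runtime and kernel requirements directly with kubectl. For example, the following prints the kernel version and container runtime reported by each node; the runtime should start with containerd://:
# Print node name, kernel version, and container runtime
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.kernelVersion}{"\t"}{.status.nodeInfo.containerRuntimeVersion}{"\n"}{end}'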
Install other distributions
Deploy the vNode Helm chart to your existing cluster:
# Deploy the Helm chart
helm upgrade vnode-runtime vnode-runtime -n vnode-runtime \
--repo https://charts.loft.sh --install --create-namespace \
--set "config.platform.host=$PLATFORM_HOST" \
--set "config.platform.accessKey=$PLATFORM_ACCESS_KEY"
# Optionally, set the following only if your platform uses a self-signed certificate
# --set "config.platform.insecure=true"
Unsupported distributions
While vNode works with many Kubernetes distributions, some environments are explicitly unsupported due to technical limitations.
Incompatible distributions
The following Kubernetes environments do not work with vNode runtime:
- Docker Desktop - Uses cri-dockerd instead of containerd
- Orbstack - Uses cri-dockerd instead of containerd
- OpenShift - Uses cri-o instead of containerd
- Any cluster with kernel version 6.0 or earlier
- Any cluster using a container runtime other than containerd
vNode has some limitations due to its reliance on specific kernel features and container runtime capabilities available only in newer Linux kernels and the containerd runtime. For more details, see the Limitations documentation.
Custom path configuration
If your Kubernetes cluster uses non-standard paths for containerd, kubelet, or CNI components, you can customize the vNode configuration. This is typically needed for:
- Custom Kubernetes distributions
- Specialized enterprise environments
- Kubernetes clusters with modified directory structures
Add the following configuration to your Helm install command with your specified values:
config:
# The root directory of containerd. Typically this is /var/lib/containerd
containerdRoot: ""
# The state directory of containerd. Typically this is /run/containerd
containerdState: ""
# The config path for containerd. Typically this is /etc/containerd/config.toml
containerdConfig: ""
# The directory where to copy the shims to. Typically this is /usr/local/bin
containerdShimDir: ""
# The root path for the kubelet. Typically this is /var/lib/kubelet
kubeletRoot: ""
# The root path for the kubelet pod logs. Typically this is /var/log
kubeletLogRoot: ""
# The directory where the cni configuration is stored. Typically this is /etc/cni/net.d
cniConfDir: ""
# The directory where the cni binaries are stored. Typically this is /opt/cni/bin
cniBinDir: ""
To use these custom settings, add them to your Helm command with the --values or --set options, or add them to your existing values file.
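For example, a sketch of overriding the containerd paths through a values file (the file name and paths below are placeholders for illustration, not recommendations):
# values.custom.yaml
config:
  containerdRoot: "/data/containerd"
  containerdState: "/run/containerd"
# Pass the file to the Helm command
helm upgrade vnode-runtime vnode-runtime -n vnode-runtime \
  --repo https://charts.loft.sh --install --create-namespace \
  --set "config.platform.host=$PLATFORM_HOST" \
  --set "config.platform.accessKey=$PLATFORM_ACCESS_KEY" \
  --values values.custom.yaml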
Use vNode
Use vCluster
You can use vNode to securely run privileged workloads inside a vCluster, which adds an extra layer of isolation and control. To configure vNode as the runtime for vCluster workloads, apply the following configuration:
sync:
toHost:
pods:
runtimeClassName: vnode
Requires vCluster version 0.23 or later.
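A minimal sketch of applying this configuration, assuming you save it as vcluster.yaml and deploy with the vCluster CLI (the virtual cluster name is illustrative):
# Create a virtual cluster whose synced pods use the vNode runtime class
vcluster create my-vcluster --values vcluster.yaml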
Use Nvidia GPU Operator
vNode is compatible with the Nvidia GPU Operator and can be used to run GPU workloads. The only requirement is that CDI is enabled in the GPU Operator.
You can do this during the GPU Operator installation:
helm upgrade gpu-operator nvidia/gpu-operator --install \
-n gpu-operator --create-namespace \
--set cdi.enabled=true # Enable CDI
Or after the GPU Operator is already installed:
kubectl patch clusterpolicies.nvidia.com/cluster-policy --type='json' \
-p='[{"op": "replace", "path": "/spec/cdi/enabled", "value":true}]'
For more information on how to enable CDI, refer to the Nvidia GPU Operator documentation.
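As a sketch of what a GPU workload on the vNode runtime can look like, the following test pod runs nvidia-smi and requests a single GPU; the image tag and resource request are illustrative and should be adjusted to your environment:
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test
spec:
  runtimeClassName: vnode # Run the pod with the vNode runtime
  restartPolicy: Never
  containers:
    - name: cuda
      image: nvidia/cuda:12.4.1-base-ubuntu22.04
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1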
Example using vNode
Create a new privileged workload pod that uses the vNode runtime:
# Create a new privileged pod that uses shared host pid
echo "apiVersion: v1
kind: Pod
metadata:
name: bad-boy
spec:
runtimeClassName: vnode # This is the runtime class name for vNode
hostPID: true
terminationGracePeriodSeconds: 1
containers:
- image: ubuntu:jammy
name: bad-boy
command: ['tail', '-f', '/dev/null']
securityContext:
privileged: true" | kubectl apply -f -
# Wait for the pod to start
kubectl wait --for=condition=ready pod bad-boy
# Get a shell into the bad-boy
kubectl exec -it bad-boy -- bash
Within the privileged pod, running a process listing command displays only the processes of the current container and its shim — not those of other containers or pods on the host.
# ps -ef --forest
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 10:20 ? 00:00:00 /var/lib/vnode/bin/vnode-init
root 53 1 0 10:20 ? 00:00:00 /var/lib/vnode/bin/vnode-containerd-shim-runc-v
65535 75 53 0 10:20 ? 00:00:00 \_ /pause
root 185 53 0 10:20 ? 00:00:00 \_ tail -f /dev/null
root 248 53 0 10:20 pts/0 00:00:00 \_ bash
root 256 248 0 10:20 pts/0 00:00:00 \_ ps -ef --forest
Running the container without the vNode runtime exposes all host processes to the container.
# ps -ef --forest
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 10:19 ? 00:00:00 /sbin/init
root 88 1 0 10:19 ? 00:00:00 /lib/systemd/systemd-journald
root 308 1 0 10:19 ? 00:00:00 /usr/local/bin/containerd-shim-runc-v2 -namespa
65535 389 308 0 10:19 ? 00:00:00 \_ /pause
root 663 308 2 10:19 ? 00:00:04 \_ etcd --advertise-client-urls=https://192.16
root 309 1 0 10:19 ? 00:00:00 /usr/local/bin/containerd-shim-runc-v2 -namespa
65535 404 309 0 10:19 ? 00:00:00 \_ /pause
root 540 309 0 10:19 ? 00:00:00 \_ kube-scheduler --authentication-kubeconfig=
root 318 1 0 10:19 ? 00:00:00 /usr/local/bin/containerd-shim-runc-v2 -namespa
65535 411 318 0 10:19 ? 00:00:00 \_ /pause
...
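When you are done experimenting, remove the test pod:
# Clean up the example pod
kubectl delete pod bad-boy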