Getting started

vNode is a multi-tenancy container runtime that provides strong isolation between workloads using Linux user namespaces and seccomp filters. It runs in Kubernetes environments and supports privileged workloads such as Docker-in-Docker and Kubernetes-in-Kubernetes. vNode ensures every container runs as a non-root user inside a sandbox. For more details, see the vNode architecture.

info

vNode uses Linux kernel features to isolate containers. It does not rely on virtualization technologies such as KVM or Hyper-V, and it runs containers at native bare-metal speed with minimal overhead.

Benefits of using vNode include:

  • Strong isolation between workloads
    vNode uses Linux user namespaces and seccomp filters to separate workloads. It enforces boundaries without relying on a hypervisor.

  • Rootless by default
    Containers run as non-root users inside a sandbox. This reduces the risk of privilege escalation and meets Kubernetes security best practices. No extra setup is required.

  • Safe use of privileged features
vNode supports features like hostPID, hostNetwork, hostPath volumes, and Docker-in-Docker. These features run securely without exposing the host or other containers.

  • Fast performance with low overhead
    vNode does not use virtualization technologies like KVM or Hyper-V. Containers run directly on the host kernel at near-native speeds.

  • Works across Kubernetes platforms
vNode runs on EKS, GKE, AKS, and self-managed clusters. It integrates using the standard RuntimeClass resource (see the sketch after this list).

  • Simple and scalable
    vNode avoids the complexity of virtual machines. It scales easily and works with GitOps tools like Helm and Argo CD.
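
After installation, workloads opt into vNode through a RuntimeClass. As a minimal sketch, assuming the handler registered by the Helm chart is named vnode, the resource looks like this:

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: vnode
handler: vnode # assumed handler name; the chart defines the actual value

Pods then select the runtime by setting spec.runtimeClassName: vnode, as the examples later in this guide show.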

Before you begin

To deploy vNode, you need the host address of your platform and a platform access key.

Set your environment variables

# Platform host
PLATFORM_HOST=https://platform-your-domain.com

# Platform access key
PLATFORM_ACCESS_KEY=your-access-key

Install vNode

You can install and deploy vNode on one of several Kubernetes distributions. Before starting the installation, ensure that:

  • Helm is installed on your local machine.
  • Your cluster uses containerd as its container runtime.

Kubernetes in Docker (kind)

Kubernetes in Docker (kind) enables you to run vNode locally by creating Kubernetes clusters inside Docker containers, which works well for development and testing environments.



Create a kind cluster

Create a local kind cluster:

kind create cluster

This command creates a single-node Kubernetes cluster with default configurations. For more complex setups, consider using a configuration file with --config.
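
As a sketch, a simple multi-node configuration might look like the following; the file name kind-config.yaml is illustrative:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker # add more worker entries for larger test clusters

kind create cluster --config kind-config.yaml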



Deploy vNode

After your kind cluster is running, deploy vNode using Helm:

# Deploy the Helm chart
helm upgrade --install vnode-runtime vnode-runtime -n vnode-runtime \
--repo https://charts.loft.sh --create-namespace \
--set "config.platform.host=$PLATFORM_HOST" \
--set "config.platform.accessKey=$PLATFORM_ACCESS_KEY"

# Optionally, add the following flag if your platform uses a self-signed certificate
# --set "config.platform.insecure=true"

This installs vNode in its own namespace and connects it to your platform using the provided host and access key.
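
To confirm the deployment, check that the release's pods are running and that the vnode RuntimeClass exists; the exact pod names depend on the chart:

kubectl get pods -n vnode-runtime
kubectl get runtimeclass vnode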

Use vNode

Use vCluster

You can use vNode to securely run privileged workloads inside a vCluster, which adds an extra layer of isolation and control. To configure vNode as the runtime for vCluster workloads, apply the following configuration:

sync:
  toHost:
    pods:
      runtimeClassName: vnode
info

Requires vCluster version 0.23 or later.
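
For example, assuming the configuration above is saved as vcluster.yaml, you can create a virtual cluster with it (the name my-vcluster is illustrative):

vcluster create my-vcluster -f vcluster.yaml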

Use Nvidia GPU Operator

vNode is compatible with the Nvidia GPU Operator and can be used to run GPU workloads. The only requirement is that CDI is enabled in the GPU Operator.

You can do this during the GPU Operator installation:

helm upgrade gpu-operator nvidia/gpu-operator --install \
-n gpu-operator --create-namespace \
--set cdi.enabled=true # Enable CDI

Or after the GPU Operator is already installed:

kubectl patch clusterpolicies.nvidia.com/cluster-policy --type='json' \
-p='[{"op": "replace", "path": "/spec/cdi/enabled", "value":true}]'

For more information on how to enable CDI, refer to the Nvidia GPU Operator documentation.
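
With CDI enabled, a GPU workload opts into vNode like any other pod. The following is a sketch; the pod name, image tag, and GPU count are illustrative:

echo "apiVersion: v1
kind: Pod
metadata:
  name: gpu-test
spec:
  runtimeClassName: vnode # Use the vNode runtime
  restartPolicy: Never
  containers:
  - name: cuda
    image: nvidia/cuda:12.4.1-base-ubuntu22.04
    command: ['nvidia-smi']
    resources:
      limits:
        nvidia.com/gpu: 1" | kubectl apply -f -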

Example using vNode

Create a new privileged workload pod that uses the vNode runtime:

# Create a new privileged pod that uses shared host pid
echo "apiVersion: v1
kind: Pod
metadata:
name: bad-boy
spec:
runtimeClassName: vnode # This is the runtime class name for vNode
hostPID: true
terminationGracePeriodSeconds: 1
containers:
- image: ubuntu:jammy
name: bad-boy
command: ['tail', '-f', '/dev/null']
securityContext:
privileged: true" | kubectl apply -f -

# Wait for the pod to start
kubectl wait --for=condition=ready pod bad-boy

# Get a shell into the bad-boy
kubectl exec -it bad-boy -- bash

Within the privileged pod, a process listing shows only the processes of the current container and its shim, not those of other containers or pods on the host.

# ps -ef --forest
UID          PID    PPID  C STIME TTY          TIME CMD
root           1       0  0 10:20 ?        00:00:00 /var/lib/vnode/bin/vnode-init
root          53       1  0 10:20 ?        00:00:00 /var/lib/vnode/bin/vnode-containerd-shim-runc-v
65535         75      53  0 10:20 ?        00:00:00  \_ /pause
root         185      53  0 10:20 ?        00:00:00  \_ tail -f /dev/null
root         248      53  0 10:20 pts/0    00:00:00  \_ bash
root         256     248  0 10:20 pts/0    00:00:00      \_ ps -ef --forest

Running the container without the vNode runtime exposes all host processes to the container.

# ps -ef --forest
UID          PID    PPID  C STIME TTY          TIME CMD
root           1       0  0 10:19 ?        00:00:00 /sbin/init
root          88       1  0 10:19 ?        00:00:00 /lib/systemd/systemd-journald
root         308       1  0 10:19 ?        00:00:00 /usr/local/bin/containerd-shim-runc-v2 -namespa
65535        389     308  0 10:19 ?        00:00:00  \_ /pause
root         663     308  2 10:19 ?        00:00:04  \_ etcd --advertise-client-urls=https://192.16
root         309       1  0 10:19 ?        00:00:00 /usr/local/bin/containerd-shim-runc-v2 -namespa
65535        404     309  0 10:19 ?        00:00:00  \_ /pause
root         540     309  0 10:19 ?        00:00:00  \_ kube-scheduler --authentication-kubeconfig=
root         318       1  0 10:19 ?        00:00:00 /usr/local/bin/containerd-shim-runc-v2 -namespa
65535        411     318  0 10:19 ?        00:00:00  \_ /pause
...
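
When you are done experimenting, remove the test pod:

kubectl delete pod bad-boy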