1 - Running Kubelet in Standalone Mode

This tutorial shows you how to run a standalone kubelet instance.

You may have different motivations for running a standalone kubelet. This tutorial is aimed at introducing you to Kubernetes, even if you don't have much experience with it. You can follow this tutorial and learn about node setup, basic (static) Pods, and how Kubernetes manages containers.

Once you have followed this tutorial, you could try using a cluster that has a control plane to manage pods and nodes, and other types of objects. For example, Hello, minikube.

You can also run the kubelet in standalone mode to suit production use cases, such as to run the control plane for a highly available, resiliently deployed cluster. This tutorial does not cover the details you need for running a resilient control plane.

Objectives

  • Install CRI-O and the kubelet on a Linux system and run them as systemd services.
  • Launch a Pod running nginx that listens to requests on TCP port 80 on the Pod's IP address.
  • Learn how the different components of the solution interact with each other.

Before you begin

  • Admin (root) access to a Linux system that uses systemd and iptables (or nftables with iptables emulation).
  • Access to the Internet to download the components needed for the tutorial, such as the CRI-O container runtime, CNI plugins, the kubelet binary, and container images.

Prepare the system

Swap configuration

By default, kubelet fails to start if swap memory is detected on a node. This means that swap should either be disabled or tolerated by kubelet.

If you have swap memory enabled, either disable it or add failSwapOn: false to the kubelet configuration file.
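
For reference, here is a minimal sketch of the relevant fragment of the kubelet configuration file (this tutorial creates that file later as /etc/kubernetes/kubelet.yaml; adjust the path to your setup):

# KubeletConfiguration fragment (sketch): lets the kubelet start even though swap is enabled
failSwapOn: false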

To check if swap is enabled:

sudo swapon --show 

If there is no output from the command, then swap memory is already disabled.

To disable swap temporarily:

sudo swapoff -a 

To make this change persistent across reboots:

Make sure swap is disabled in either /etc/fstab or systemd.swap, depending on how it was configured on your system.
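
For example, if swap was configured through /etc/fstab, one common approach (a sketch, not an official step of this tutorial) is to comment out the swap entry, assuming such a line exists:

# Comment out any line that mounts swap, so it is not activated on the next boot
sudo sed -i '/ swap / s/^/#/' /etc/fstab

If swap is managed by a systemd.swap unit instead, disable or mask that unit with systemctl.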

Enable IPv4 packet forwarding

To check if IPv4 packet forwarding is enabled:

cat /proc/sys/net/ipv4/ip_forward 

If the output is 1, IPv4 packet forwarding is already enabled. If the output is 0, then follow the next steps.

To enable IPv4 packet forwarding, create a configuration file that sets the net.ipv4.ip_forward parameter to 1:

sudo tee /etc/sysctl.d/k8s.conf <<EOF
net.ipv4.ip_forward = 1
EOF

Apply the changes to the system:

sudo sysctl --system 

The output is similar to:

...
* Applying /etc/sysctl.d/k8s.conf ...
net.ipv4.ip_forward = 1
* Applying /etc/sysctl.conf ...

Download, install, and configure the components

Install a container runtime

Download the latest available versions of the required packages (recommended).

This tutorial suggests installing the CRI-O container runtime (external link).

There are several ways to install the CRI-O container runtime, depending on your particular Linux distribution. Although CRI-O recommends using either deb or rpm packages, this tutorial uses the static binary bundle script of the CRI-O Packaging project, both to streamline the overall process, and to remain distribution agnostic.

The script installs and configures additional required software, such as cni-plugins for container networking, and crun and runc for running containers.

The script will automatically detect your system's processor architecture (amd64 or arm64) and select and install the latest versions of the software packages.

Set up CRI-O

Visit the releases page (external link).

Download the static binary bundle script:

curl https://raw.githubusercontent.com/cri-o/packaging/main/get > crio-install 

Run the installer script:

sudo bash crio-install 

Enable and start the crio service:

sudo systemctl daemon-reload
sudo systemctl enable --now crio.service

Quick test:

sudo systemctl is-active crio.service 

The output is similar to:

active 

Detailed service check:

sudo journalctl -f -u crio.service 

Install network plugins

The cri-o installer installs and configures the cni-plugins package. You can verify the installation by running the following command:

/opt/cni/bin/bridge --version 

The output is similar to:

CNI bridge plugin v1.5.1
CNI protocol versions supported: 0.1.0, 0.2.0, 0.3.0, 0.3.1, 0.4.0, 1.0.0

To check the default configuration:

cat /etc/cni/net.d/11-crio-ipv4-bridge.conflist 

The output is similar to:

{
  "cniVersion": "1.0.0",
  "name": "crio",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "routes": [
          { "dst": "0.0.0.0/0" }
        ],
        "ranges": [
          [{ "subnet": "10.85.0.0/16" }]
        ]
      }
    }
  ]
}

Download and set up the kubelet

Download the latest stable release of the kubelet.

 curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubelet" 

 curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubelet" 

Configure:

sudo mkdir -p /etc/kubernetes/manifests 
sudo tee /etc/kubernetes/kubelet.yaml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  webhook:
    enabled: false # Do NOT use in production clusters!
authorization:
  mode: AlwaysAllow # Do NOT use in production clusters!
enableServer: false
logging:
  format: text
address: 127.0.0.1 # Restrict access to localhost
readOnlyPort: 10255 # Do NOT use in production clusters!
staticPodPath: /etc/kubernetes/manifests
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
EOF

Install:

chmod +x kubelet
sudo cp kubelet /usr/bin/

Create a systemd service unit file:

sudo tee /etc/systemd/system/kubelet.service <<EOF
[Unit]
Description=Kubelet

[Service]
ExecStart=/usr/bin/kubelet \
 --config=/etc/kubernetes/kubelet.yaml
Restart=always

[Install]
WantedBy=multi-user.target
EOF

The command line argument --kubeconfig has been intentionally omitted in the service configuration file. This argument sets the path to a kubeconfig file that specifies how to connect to the API server, enabling API server mode. Omitting it enables standalone mode.
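
For comparison only (not part of this standalone setup), a kubelet that joins a cluster would typically add a --kubeconfig flag to the same ExecStart line; the path below is purely illustrative:

# Hypothetical API server mode (do not add this for the standalone tutorial)
ExecStart=/usr/bin/kubelet \
 --config=/etc/kubernetes/kubelet.yaml \
 --kubeconfig=/etc/kubernetes/kubelet.conf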

Enable and start the kubelet service:

sudo systemctl daemon-reload
sudo systemctl enable --now kubelet.service

Quick test:

sudo systemctl is-active kubelet.service 

The output is similar to:

active 

Detailed service check:

sudo journalctl -u kubelet.service 

Check the kubelet's API /healthz endpoint:

curl http://localhost:10255/healthz?verbose 

The output is similar to:

[+]ping ok
[+]log ok
[+]syncloop ok
healthz check passed

Query the kubelet's API /pods endpoint:

curl http://localhost:10255/pods | jq '.' 

The output is similar to:

{
  "kind": "PodList",
  "apiVersion": "v1",
  "metadata": {},
  "items": null
}

Run a Pod in the kubelet

In standalone mode, you can run Pods using Pod manifests. The manifests can either be on the local filesystem, or fetched via HTTP from a configuration source.
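
For example, instead of (or in addition to) staticPodPath, the kubelet configuration can point at an HTTP source of Pod manifests. The following fragment is a sketch only, with an illustrative URL:

# KubeletConfiguration fragment (sketch): fetch Pod manifests over HTTP
staticPodURL: "http://example.com/manifests/static-web.yaml"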

Create a manifest for a Pod:

cat <<EOF > static-web.yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-web
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - name: web
          containerPort: 80
          protocol: TCP
EOF

Copy the static-web.yaml manifest file to the /etc/kubernetes/manifests directory.

sudo cp static-web.yaml /etc/kubernetes/manifests/ 

Find out information about the kubelet and the Pod

The Pod networking plugin creates a network bridge (cni0) and a pair of veth interfaces for each Pod (one of the pair is inside the newly made Pod, and the other is at the host level).
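
If you want to see this from the host (these commands are not part of the original steps and assume the iproute2 tools are available), inspect the bridge and the veth interfaces:

# Show the CNI bridge created by the bridge plugin
ip addr show cni0

# List host-side veth interfaces in brief format
ip -br link show type veth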

Query the kubelet's API endpoint at http://localhost:10255/pods:

curl http://localhost:10255/pods | jq '.' 

To obtain the IP address of the static-web Pod:

curl http://localhost:10255/pods | jq '.items[].status.podIP' 

The output is similar to:

"10.85.0.4" 

Connect to the nginx server Pod at http://<IP>:<Port> (port 80 is the default); in this case:

curl http://10.85.0.4 

The output is similar to:

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...

Where to look for more details

If you need to diagnose a problem getting this tutorial to work, you can look within the following directories for monitoring and troubleshooting:

/var/lib/cni
/var/lib/containers
/var/lib/kubelet
/var/log/containers
/var/log/pods
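
As a starting point (a sketch rather than a prescribed procedure), you could locate the log files that the kubelet writes for the static Pod, and follow the kubelet's own logs:

# Locate container log files for Pods managed by this kubelet
sudo find /var/log/pods -name '*.log'

# Follow the kubelet service logs
sudo journalctl -f -u kubelet.service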

Clean up

kubelet

sudo systemctl disable --now kubelet.service
sudo systemctl daemon-reload
sudo rm /etc/systemd/system/kubelet.service
sudo rm /usr/bin/kubelet
sudo rm -rf /etc/kubernetes
sudo rm -rf /var/lib/kubelet
sudo rm -rf /var/log/containers
sudo rm -rf /var/log/pods

Container Runtime

sudo systemctl disable --now crio.service
sudo systemctl daemon-reload
sudo rm -rf /usr/local/bin
sudo rm -rf /usr/local/lib
sudo rm -rf /usr/local/share
sudo rm -rf /usr/libexec/crio
sudo rm -rf /etc/crio
sudo rm -rf /etc/containers

Network Plugins

sudo rm -rf /opt/cni
sudo rm -rf /etc/cni
sudo rm -rf /var/lib/cni

Conclusion

This page covered the basic aspects of deploying a kubelet in standalone mode. You are now ready to deploy Pods and test additional functionality.

Notice that in standalone mode the kubelet does not support fetching Pod configurations from the control plane (because there is no control plane connection).

You also cannot use a ConfigMap or a Secret to configure the containers in a static Pod.

What's next

2 - Configuring swap memory on Kubernetes nodes

This page provides an example of how to provision and configure swap memory on a Kubernetes node using kubeadm.

Objectives

  • Provision swap memory on a Kubernetes node using kubeadm.
  • Learn to configure both encrypted and unencrypted swap.
  • Learn to enable swap on boot.

Before you begin

You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube or you can use one of these Kubernetes playgrounds:

Your Kubernetes server must be at or later than version 1.33.

To check the version, enter kubectl version.

You need at least one worker node in your cluster that runs a Linux operating system. This demo also requires the kubeadm tool to be installed, following the steps outlined in the kubeadm installation guide.

On each worker node where you will configure swap use, you need the following (a quick check for these tools is sketched after this list):

  • fallocate

  • mkswap

  • swapon

  • For encrypted swap space (recommended), you also need:

  • cryptsetup
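
A quick way to confirm these tools are present (a sketch, not part of the original steps):

# Prints the path of each tool that is installed; anything missing is simply not listed
command -v fallocate mkswap swapon cryptsetup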

Install a swap-enabled cluster with kubeadm

Create a swap file and turn swap on

If swap is not already enabled, you need to provision swap on the node. The following sections demonstrate creating 4GiB of swap, in both the encrypted and unencrypted cases.

An encrypted swap file can be set up as follows. Bear in mind that this example uses the cryptsetup binary (which is available on most Linux distributions).

# Allocate storage and restrict access
fallocate --length 4GiB /swapfile
chmod 600 /swapfile

# Create an encrypted device backed by the allocated storage
cryptsetup --type plain --cipher aes-xts-plain64 --key-size 256 -d /dev/urandom open /swapfile cryptswap

# Format the swap space
mkswap /dev/mapper/cryptswap

# Activate the swap space for paging
swapon /dev/mapper/cryptswap

An unencrypted swap file can be set up as follows.

# Allocate storage and restrict access
fallocate --length 4GiB /swapfile
chmod 600 /swapfile

# Format the swap space
mkswap /swapfile

# Activate the swap space for paging
swapon /swapfile

Verify that swap is enabled

You can verify that swap is enabled with either the swapon -s command or the free command.

Using swapon -s:

Filename    Type       Size     Used  Priority
/dev/dm-0   partition  4194300  0     -2

Using free -h:

       total  used   free   shared  buff/cache  available
Mem:   3.8Gi  1.3Gi  249Mi  25Mi    2.5Gi       2.5Gi
Swap:  4.0Gi  0B     4.0Gi

Enable swap on boot

After setting up swap, to start the swap file at boot time, you typically either set up a systemd unit to activate (encrypted) swap, or you add a line similar to /swapfile swap swap defaults 0 0 into /etc/fstab.

Using systemd for swap activation allows the system to delay kubelet start until swap is available, if that is something you want to ensure. In a similar way, using systemd allows your server to leave swap active until kubelet (and, typically, your container runtime) have shut down.
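
For the unencrypted swap file created above, a minimal systemd swap unit might look like the following sketch. Note that systemd expects the unit name to match the escaped device path, so /swapfile becomes swapfile.swap; encrypted swap additionally requires the crypt device to be set up first (typically via /etc/crypttab or a dedicated service), which is not shown here:

# /etc/systemd/system/swapfile.swap (sketch for the unencrypted case)
[Swap]
What=/swapfile

[Install]
WantedBy=swap.target

You would then enable it with systemctl enable --now swapfile.swap.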

Set up kubelet configuration

After enabling swap on the node, kubelet needs to be configured to use it. You need to select a swap behavior for this node. You'll configure LimitedSwap behavior for this tutorial.

Find and edit the kubelet configuration file, and:

  • set failSwapOn to false
  • set memorySwap.swapBehavior to LimitedSwap
# this fragment goes into the kubelet's configuration file
failSwapOn: false
memorySwap:
  swapBehavior: LimitedSwap

In order for these configurations to take effect, kubelet needs to be restarted. Typically you do that by running:

systemctl restart kubelet.service 

You should find that the kubelet is now healthy, and that you can run Pods that use swap memory as needed.
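
One quick way to confirm this from a machine with cluster access (not an explicit step on this page) is to check that the node reports Ready again after the kubelet restart:

kubectl get nodes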

3 - Install Drivers and Allocate Devices with DRA

FEATURE STATE: Kubernetes v1.34 [stable] (enabled by default)

This tutorial shows you how to install Dynamic Resource Allocation (DRA) drivers in your cluster and how to use them in conjunction with the DRA APIs to allocate devices to Pods. This page is intended for cluster administrators.

Dynamic Resource Allocation (DRA) lets a cluster manage availability and allocation of hardware resources to satisfy Pod-based claims for hardware requirements and preferences. To support this, a mixture of Kubernetes built-in components (like the Kubernetes scheduler, kubelet, and kube-controller-manager) and third-party drivers from device owners (called DRA drivers) share the responsibility to advertise, allocate, prepare, mount, healthcheck, unprepare, and clean up resources throughout the Pod lifecycle. These components share information via a series of DRA-specific APIs in the resource.k8s.io API group, including DeviceClasses, ResourceSlices, and ResourceClaims, as well as new fields in the Pod spec itself.

Objectives

  • Deploy an example DRA driver
  • Deploy a Pod requesting a hardware claim using DRA APIs
  • Delete a Pod that has a claim

Before you begin

Your cluster should support RBAC. You can try this tutorial with a cluster using a different authorization mechanism, but in that case you will have to adapt the steps around defining roles and permissions.

You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube or you can use one of these Kubernetes playgrounds:

This tutorial has been tested with Linux nodes, though it may also work with other types of nodes.

Your Kubernetes server must be version v1.34.

To check the version, enter kubectl version.

If your cluster is not currently running Kubernetes 1.34 then please check the documentation for the version of Kubernetes that you plan to use.

Explore the initial cluster state

You can spend some time observing the initial state of a cluster with DRA enabled, especially if you have not used these APIs extensively before. If you set up a new cluster for this tutorial, with no driver installed and no Pod claims to satisfy yet, the output of these commands won't show any resources.

  1. Get a list of DeviceClasses:

    kubectl get deviceclasses 

    The output is similar to this:

    No resources found 
  2. Get a list of ResourceSlices:

    kubectl get resourceslices 

    The output is similar to this:

    No resources found 
  3. Get a list of ResourceClaims and ResourceClaimTemplates

    kubectl get resourceclaims -A
    kubectl get resourceclaimtemplates -A

    The output is similar to this:

    No resources found
    No resources found

At this point, you have confirmed that DRA is enabled and configured properly in the cluster, and that no DRA drivers have advertised any resources to the DRA APIs yet.

Install an example DRA driver

DRA drivers are third-party applications that run on each node of your cluster to interface with the hardware of that node and Kubernetes' built-in DRA components. The installation procedure depends on the driver you choose, but the driver is likely deployed as a DaemonSet to all nodes, or to a selection of nodes (using selectors or similar mechanisms), in your cluster.

Check your driver's documentation for specific installation instructions, which might include a Helm chart, a set of manifests, or other deployment tooling.

This tutorial uses an example driver which can be found in the kubernetes-sigs/dra-example-driver repository to demonstrate driver installation. This example driver advertises simulated GPUs to Kubernetes for your Pods to interact with.

Prepare your cluster for driver installation

To simplify cleanup, create a namespace named dra-tutorial:

  1. Create the namespace:

    kubectl create namespace dra-tutorial 

In a production environment, you would likely be using a previously released or qualified image from the driver vendor or your own organization, and your nodes would need to have access to the image registry where the driver image is hosted. In this tutorial, you will use a publicly released image of the dra-example-driver to simulate access to a DRA driver image.

  1. Confirm your nodes have access to the image by running the following from within one of your cluster's nodes:

    docker pull registry.k8s.io/dra-example-driver/dra-example-driver:v0.2.0 

Deploy the DRA driver components

For this tutorial, you will install the critical example resource driver components individually with kubectl.

  1. Create the DeviceClass representing the device types this DRA driver supports:

    apiVersion: resource.k8s.io/v1
    kind: DeviceClass
    metadata:
      name: gpu.example.com
    spec:
      selectors:
      - cel:
          expression: "device.driver == 'gpu.example.com'"

    kubectl apply --server-side -f https://k8s.io/examples/dra/driver-install/deviceclass.yaml
  2. Create the ServiceAccount, ClusterRole and ClusterRoleBinding that will be used by the driver to gain permissions to interact with the Kubernetes API on this cluster:

    1. Create the Service Account:

      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: dra-example-driver-service-account
        namespace: dra-tutorial
        labels:
          app.kubernetes.io/name: dra-example-driver
          app.kubernetes.io/instance: dra-example-driver

      kubectl apply --server-side -f https://k8s.io/examples/dra/driver-install/serviceaccount.yaml
    2. Create the ClusterRole:

      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRole
      metadata:
        name: dra-example-driver-role
      rules:
      - apiGroups: ["resource.k8s.io"]
        resources: ["resourceclaims"]
        verbs: ["get"]
      - apiGroups: [""]
        resources: ["nodes"]
        verbs: ["get"]
      - apiGroups: ["resource.k8s.io"]
        resources: ["resourceslices"]
        verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]

      kubectl apply --server-side -f https://k8s.io/examples/dra/driver-install/clusterrole.yaml
    3. Create the ClusterRoleBinding:

      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: dra-example-driver-role-binding
      subjects:
      - kind: ServiceAccount
        name: dra-example-driver-service-account
        namespace: dra-tutorial
      roleRef:
        kind: ClusterRole
        name: dra-example-driver-role
        apiGroup: rbac.authorization.k8s.io

      kubectl apply --server-side -f https://k8s.io/examples/dra/driver-install/clusterrolebinding.yaml
  3. Create a PriorityClass for the DRA driver. The PriorityClass prevents preemption of the DRA driver component, which is responsible for important lifecycle operations for Pods with claims. Learn more about pod priority and preemption here.

    apiVersion: scheduling.k8s.io/v1
    kind: PriorityClass
    metadata:
      name: dra-driver-high-priority
    value: 1000000
    globalDefault: false
    description: "This priority class should be used for DRA driver pods only."

    kubectl apply --server-side -f https://k8s.io/examples/dra/driver-install/priorityclass.yaml
  4. Deploy the actual DRA driver as a DaemonSet configured to run the example driver binary with the permissions provisioned above. The DaemonSet has the permissions that you granted to the ServiceAccount in the previous steps.

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: dra-example-driver-kubeletplugin
      namespace: dra-tutorial
      labels:
        app.kubernetes.io/name: dra-example-driver
    spec:
      selector:
        matchLabels:
          app.kubernetes.io/name: dra-example-driver
      updateStrategy:
        type: RollingUpdate
      template:
        metadata:
          labels:
            app.kubernetes.io/name: dra-example-driver
        spec:
          priorityClassName: dra-driver-high-priority
          serviceAccountName: dra-example-driver-service-account
          securityContext:
            {}
          containers:
          - name: plugin
            securityContext:
              privileged: true
            image: registry.k8s.io/dra-example-driver/dra-example-driver:v0.2.0
            imagePullPolicy: IfNotPresent
            command: ["dra-example-kubeletplugin"]
            resources:
              {}
            # Production drivers should always implement a liveness probe
            # For the tutorial we simply omit it
            # livenessProbe:
            #   grpc:
            #     port: 51515
            #     service: liveness
            #   failureThreshold: 3
            #   periodSeconds: 10
            env:
            - name: CDI_ROOT
              value: /var/run/cdi
            - name: KUBELET_REGISTRAR_DIRECTORY_PATH
              value: "/var/lib/kubelet/plugins_registry"
            - name: KUBELET_PLUGINS_DIRECTORY_PATH
              value: "/var/lib/kubelet/plugins"
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            # Simulated number of devices the example driver will pretend to have.
            - name: NUM_DEVICES
              value: "9"
            - name: HEALTHCHECK_PORT
              value: "51515"
            volumeMounts:
            - name: plugins-registry
              mountPath: "/var/lib/kubelet/plugins_registry"
            - name: plugins
              mountPath: "/var/lib/kubelet/plugins"
            - name: cdi
              mountPath: /var/run/cdi
          volumes:
          - name: plugins-registry
            hostPath:
              path: "/var/lib/kubelet/plugins_registry"
          - name: plugins
            hostPath:
              path: "/var/lib/kubelet/plugins"
          - name: cdi
            hostPath:
              path: /var/run/cdi

    kubectl apply --server-side -f https://k8s.io/examples/dra/driver-install/daemonset.yaml

    The DaemonSet is configured with the volume mounts necessary to interact with the underlying Container Device Interface (CDI) directory, and to expose its socket to kubelet via the kubelet/plugins directory.

Verify the DRA driver installation

  1. Get a list of the Pods of the DRA driver DaemonSet across all worker nodes:

    kubectl get pod -l app.kubernetes.io/name=dra-example-driver -n dra-tutorial 

    The output is similar to this:

    NAME                                     READY   STATUS    RESTARTS   AGE
    dra-example-driver-kubeletplugin-4sk2x   1/1     Running   0          13s
    dra-example-driver-kubeletplugin-cttr2   1/1     Running   0          13s
  2. The initial responsibility of each node's local DRA driver is to update the cluster with what devices are available to Pods on that node, by publishing its metadata to the ResourceSlices API. You can check that API to see that each node with a driver is advertising the device class it represents.

    Check for available ResourceSlices:

    kubectl get resourceslices 

    The output is similar to this:

    NAME                                 NODE           DRIVER            POOL           AGE
    kind-worker-gpu.example.com-k69gd    kind-worker    gpu.example.com   kind-worker    19s
    kind-worker2-gpu.example.com-qdgpn   kind-worker2   gpu.example.com   kind-worker2   19s

At this point, you have successfully installed the example DRA driver, and confirmed its initial configuration. You're now ready to use DRA to schedule Pods.

Claim resources and deploy a Pod

To request resources using DRA, you create ResourceClaims or ResourceClaimTemplates that define the resources that your Pods need. In the example driver, a memory capacity attribute is exposed for mock GPU devices. This section shows you how to use Common Expression Language to express your requirements in a ResourceClaim, select that ResourceClaim in a Pod specification, and observe the resource allocation.

This tutorial showcases only one basic example of a DRA ResourceClaim. Read Dynamic Resource Allocation to learn more about ResourceClaims.

Create the ResourceClaim

In this section, you create a ResourceClaim and reference it in a Pod. Whatever the claim, deviceClassName is a required field that narrows the scope of the request down to a specific device class. The request itself can include a Common Expression Language (CEL) expression that references attributes that may be advertised by the driver managing that device class.

In this example, you will create a request for any GPU advertising at least 10Gi of memory capacity. The attribute exposing capacity from the example driver takes the form device.capacity['gpu.example.com'].memory. Note also that the name of the claim is set to some-gpu.

apiVersion: resource.k8s.io/v1
kind: ResourceClaim
metadata:
  name: some-gpu
  namespace: dra-tutorial
spec:
  devices:
    requests:
    - name: some-gpu
      exactly:
        deviceClassName: gpu.example.com
        selectors:
        - cel:
            expression: "device.capacity['gpu.example.com'].memory.compareTo(quantity('10Gi')) >= 0"

kubectl apply --server-side -f https://k8s.io/examples/dra/driver-install/example/resourceclaim.yaml

Create the Pod that references that ResourceClaim

Below is the Pod manifest referencing the ResourceClaim you just made, some-gpu, in the spec.resourceClaims.resourceClaimName field. The local name for that claim, gpu, is then used in the spec.containers.resources.claims.name field to allocate the claim to the Pod's underlying container.

apiVersion: v1
kind: Pod
metadata:
  name: pod0
  namespace: dra-tutorial
  labels:
    app: pod
spec:
  containers:
  - name: ctr0
    image: ubuntu:24.04
    command: ["bash", "-c"]
    args: ["export; trap 'exit 0' TERM; sleep 9999 & wait"]
    resources:
      claims:
      - name: gpu
  resourceClaims:
  - name: gpu
    resourceClaimName: some-gpu

kubectl apply --server-side -f https://k8s.io/examples/dra/driver-install/example/pod.yaml
  1. Confirm the pod has deployed:

    kubectl get pod pod0 -n dra-tutorial 

    The output is similar to this:

    NAME   READY   STATUS    RESTARTS   AGE
    pod0   1/1     Running   0          9s

Explore the DRA state

After you create the Pod, the cluster tries to schedule that Pod to a node where Kubernetes can satisfy the ResourceClaim. In this tutorial, the DRA driver is deployed on all nodes, and is advertising mock GPUs on all nodes, all of which have enough capacity advertised to satisfy the Pod's claim, so Kubernetes can schedule this Pod on any node and can allocate any of the mock GPUs on that node.

When Kubernetes allocates a mock GPU to a Pod, the example driver sets environment variables in each container the device is allocated to, indicating which GPUs a real resource driver would have injected and how it would have configured them. You can check those environment variables to see how the Pods have been handled by the system.

  1. Check the Pod logs, which report the name of the mock GPU that was allocated:

    kubectl logs pod0 -c ctr0 -n dra-tutorial | grep -E "GPU_DEVICE_[0-9]+=" | grep -v "RESOURCE_CLAIM" 

    The output is similar to this:

    declare -x GPU_DEVICE_0="gpu-0" 
  2. Check the state of the ResourceClaim object:

    kubectl get resourceclaims -n dra-tutorial 

    The output is similar to this:

    NAME       STATE                AGE
    some-gpu   allocated,reserved   34s

    In this output, the STATE column shows that the ResourceClaim is allocated and reserved.

  3. Check the details of the some-gpu ResourceClaim. The status stanza of the ResourceClaim has information about the allocated device and the Pod it has been reserved for:

    kubectl get resourceclaim some-gpu -n dra-tutorial -o yaml 

    The output is similar to this:

    apiVersion: resource.k8s.io/v1
    kind: ResourceClaim
    metadata:
      creationTimestamp: "2025-08-20T18:17:31Z"
      finalizers:
      - resource.kubernetes.io/delete-protection
      name: some-gpu
      namespace: dra-tutorial
      resourceVersion: "2326"
      uid: d3e48dbf-40da-47c3-a7b9-f7d54d1051c3
    spec:
      devices:
        requests:
        - exactly:
            allocationMode: ExactCount
            count: 1
            deviceClassName: gpu.example.com
            selectors:
            - cel:
                expression: device.capacity['gpu.example.com'].memory.compareTo(quantity('10Gi')) >= 0
          name: some-gpu
    status:
      allocation:
        devices:
          results:
          - device: gpu-0
            driver: gpu.example.com
            pool: kind-worker
            request: some-gpu
        nodeSelector:
          nodeSelectorTerms:
          - matchFields:
            - key: metadata.name
              operator: In
              values:
              - kind-worker
      reservedFor:
      - name: pod0
        resource: pods
        uid: c4dadf20-392a-474d-a47b-ab82080c8bd7

  4. To check how the driver handled device allocation, get the logs for the driver DaemonSet Pods:

    kubectl logs -l app.kubernetes.io/name=dra-example-driver -n dra-tutorial 

    The output is similar to this:

    I0820 18:17:44.131324 1 driver.go:106] PrepareResourceClaims is called: number of claims: 1
    I0820 18:17:44.135056 1 driver.go:133] Returning newly prepared devices for claim 'd3e48dbf-40da-47c3-a7b9-f7d54d1051c3': [{[some-gpu] kind-worker gpu-0 [k8s.gpu.example.com/gpu=common k8s.gpu.example.com/gpu=d3e48dbf-40da-47c3-a7b9-f7d54d1051c3-gpu-0]}]

You have now successfully deployed a Pod that claims devices using DRA, verified that the Pod was scheduled to an appropriate node, and seen that the associated DRA API kinds were updated with the allocation status.

Delete a Pod that has a claim

When a Pod with a claim is deleted, the DRA driver deallocates the resource so it can be available for future scheduling. To validate this behavior, delete the Pod that you created in the previous steps and watch the corresponding changes to the ResourceClaim and driver.

  1. Delete the pod0 Pod:

    kubectl delete pod pod0 -n dra-tutorial 

    The output is similar to this:

    pod "pod0" deleted 

Observe the DRA state

When the Pod is deleted, the driver deallocates the device from the ResourceClaim and updates the ResourceClaim resource in the Kubernetes API. The ResourceClaim has a pending state until it's referenced in a new Pod.

  1. Check the state of the some-gpu ResourceClaim:

    kubectl get resourceclaims -n dra-tutorial 

    The output is similar to this:

    NAME       STATE     AGE
    some-gpu   pending   76s
  2. Verify that the driver has unprepared the device for this claim by checking the driver logs:

    kubectl logs -l app.kubernetes.io/name=dra-example-driver -n dra-tutorial 

    The output is similar to this:

    I0820 18:22:15.629376 1 driver.go:138] UnprepareResourceClaims is called: number of claims: 1 

You have now deleted a Pod that had a claim, and observed that the driver took action to unprepare the underlying hardware resource and update the DRA APIs to reflect that the resource is available again for future scheduling.

Cleaning up

To clean up the resources that you created in this tutorial, follow these steps:

kubectl delete namespace dra-tutorial
kubectl delete deviceclass gpu.example.com
kubectl delete clusterrole dra-example-driver-role
kubectl delete clusterrolebinding dra-example-driver-role-binding
kubectl delete priorityclass dra-driver-high-priority

What's next

4 - Namespaces Walkthrough

Kubernetes namespaces help different projects, teams, or customers to share a Kubernetes cluster.

They do this by providing the following:

  1. A scope for Names.
  2. A mechanism to attach authorization and policy to a subsection of the cluster.

Use of multiple namespaces is optional.

This example demonstrates how to use Kubernetes namespaces to subdivide your cluster.

Before you begin

You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube or you can use one of these Kubernetes playgrounds:

To check the version, enter kubectl version.

Prerequisites

This example assumes the following:

  1. You have an existing Kubernetes cluster.
  2. You have a basic understanding of Kubernetes Pods, Services, and Deployments.

Understand the default namespace

By default, a Kubernetes cluster will instantiate a default namespace when provisioning the cluster to hold the default set of Pods, Services, and Deployments used by the cluster.

Assuming you have a fresh cluster, you can inspect the available namespaces by doing the following:

kubectl get namespaces 
NAME      STATUS   AGE
default   Active   13m

Create new namespaces

For this exercise, we will create two additional Kubernetes namespaces to hold our content.

Let's imagine a scenario where an organization is using a shared Kubernetes cluster for development and production use cases.

The development team would like to maintain a space in the cluster where they can get a view on the list of Pods, Services, and Deployments they use to build and run their application. In this space, Kubernetes resources come and go, and the restrictions on who can or cannot modify resources are relaxed to enable agile development.

The operations team would like to maintain a space in the cluster where they can enforce strict procedures on who can or cannot manipulate the set of Pods, Services, and Deployments that run the production site.

One pattern this organization could follow is to partition the Kubernetes cluster into two namespaces: development and production.

Let's create two new namespaces to hold our work.

Use the file namespace-dev.yaml which describes a development namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: development
  labels:
    name: development

Create the development namespace using kubectl.

kubectl create -f https://k8s.io/examples/admin/namespace-dev.yaml 

Save the following contents into a file named namespace-prod.yaml, which describes a production namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    name: production

And then let's create the production namespace using kubectl.

kubectl create -f https://k8s.io/examples/admin/namespace-prod.yaml 

To be sure things are right, let's list all of the namespaces in our cluster.

kubectl get namespaces --show-labels 
NAME          STATUS   AGE   LABELS
default       Active   32m   <none>
development   Active   29s   name=development
production    Active   23s   name=production

Create pods in each namespace

A Kubernetes namespace provides the scope for Pods, Services, and Deployments in the cluster.

Users interacting with one namespace do not see the content in another namespace.

To demonstrate this, let's spin up a simple Deployment and Pods in the development namespace.

First, check the current context:

kubectl config view 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://130.211.122.180
  name: lithe-cocoa-92103_kubernetes
contexts:
- context:
    cluster: lithe-cocoa-92103_kubernetes
    user: lithe-cocoa-92103_kubernetes
  name: lithe-cocoa-92103_kubernetes
current-context: lithe-cocoa-92103_kubernetes
kind: Config
preferences: {}
users:
- name: lithe-cocoa-92103_kubernetes
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    token: 65rZW78y8HbwXXtSXuUw9DbP4FLjHi4b
- name: lithe-cocoa-92103_kubernetes-basic-auth
  user:
    password: h5M0FtUUIflBSdI7
    username: admin
kubectl config current-context 
lithe-cocoa-92103_kubernetes 

The next step is to define a context for the kubectl client to work in each namespace. The values of the "cluster" and "user" fields are copied from the current context.

kubectl config set-context dev --namespace=development \
  --cluster=lithe-cocoa-92103_kubernetes \
  --user=lithe-cocoa-92103_kubernetes

kubectl config set-context prod --namespace=production \
  --cluster=lithe-cocoa-92103_kubernetes \
  --user=lithe-cocoa-92103_kubernetes

By default, the above commands add two contexts that are saved into the file .kube/config. You can now view the contexts and switch between the two new contexts, depending on which namespace you wish to work in.

To view the new contexts:

kubectl config view 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://130.211.122.180
  name: lithe-cocoa-92103_kubernetes
contexts:
- context:
    cluster: lithe-cocoa-92103_kubernetes
    user: lithe-cocoa-92103_kubernetes
  name: lithe-cocoa-92103_kubernetes
- context:
    cluster: lithe-cocoa-92103_kubernetes
    namespace: development
    user: lithe-cocoa-92103_kubernetes
  name: dev
- context:
    cluster: lithe-cocoa-92103_kubernetes
    namespace: production
    user: lithe-cocoa-92103_kubernetes
  name: prod
current-context: lithe-cocoa-92103_kubernetes
kind: Config
preferences: {}
users:
- name: lithe-cocoa-92103_kubernetes
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    token: 65rZW78y8HbwXXtSXuUw9DbP4FLjHi4b
- name: lithe-cocoa-92103_kubernetes-basic-auth
  user:
    password: h5M0FtUUIflBSdI7
    username: admin

Let's switch to operate in the development namespace.

kubectl config use-context dev 

You can verify your current context by doing the following:

kubectl config current-context 
dev 

At this point, all requests we make to the Kubernetes cluster from the command line are scoped to the development namespace.

Let's create some contents.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: snowflake
  name: snowflake
spec:
  replicas: 2
  selector:
    matchLabels:
      app: snowflake
  template:
    metadata:
      labels:
        app: snowflake
    spec:
      containers:
      - image: registry.k8s.io/serve_hostname
        imagePullPolicy: Always
        name: snowflake

Apply the manifest to create a Deployment:

kubectl apply -f https://k8s.io/examples/admin/snowflake-deployment.yaml 

We have created a Deployment with 2 replicas that runs a Pod called snowflake, with a basic container that serves the hostname.

kubectl get deployment 
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
snowflake   2/2     2            2           2m
kubectl get pods -l app=snowflake 
NAME                         READY   STATUS    RESTARTS   AGE
snowflake-3968820950-9dgr8   1/1     Running   0          2m
snowflake-3968820950-vgc4n   1/1     Running   0          2m

This is great: developers can do what they want without worrying about affecting content in the production namespace.

Let's switch to the production namespace and show how resources in one namespace are hidden from the other.

kubectl config use-context prod 

The production namespace should be empty, and the following commands should return nothing.

kubectl get deployment
kubectl get pods

Production likes to run cattle, so let's create some cattle pods.

kubectl create deployment cattle --image=registry.k8s.io/serve_hostname --replicas=5

kubectl get deployment
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
cattle   5/5     5            5           10s
kubectl get pods -l app=cattle 
NAME                      READY   STATUS    RESTARTS   AGE
cattle-2263376956-41xy6   1/1     Running   0          34s
cattle-2263376956-kw466   1/1     Running   0          34s
cattle-2263376956-n4v97   1/1     Running   0          34s
cattle-2263376956-p5p3i   1/1     Running   0          34s
cattle-2263376956-sxpth   1/1     Running   0          34s

At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.
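
As an extra check beyond the original walkthrough, you can still query the other namespace explicitly from the prod context; the snowflake Pods are alive, just out of scope for the current context:

kubectl get pods --namespace=development -l app=snowflake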

As the policy support in Kubernetes evolves, we will extend this scenario to show how you can provide different authorization rules for each namespace.