Memory bandwidth benchmarking

This guide walks you through benchmarking memory bandwidth in Edera zones versus standard containers. You’ll run single-threaded and multi-threaded memory tests and measure overhead.

By the end, you’ll have concrete numbers showing memory bandwidth under Edera compared to baseline.

Prerequisites

Before you begin:

  • Edera is installed and running on your cluster (installation guide)
  • kubectl is configured and can access your cluster
  • You have at least one node with the Edera runtime
  • You have a baseline node running standard containers (no Edera)

Why thread count matters

Memory bandwidth benchmarks are sensitive to thread count. Tools like sysbench and STREAM use multiple threads to saturate memory buses, and results scale with the number of available CPUs.

⚠️
Always match --threads to the vCPU count on both runtimes. If your Edera zone has 8 vCPUs but the benchmark runs 16 threads, the extra threads compete for CPUs and artificially reduce throughput. This produces misleading overhead numbers that don’t reflect actual memory performance.
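
Rather than hard-coding the thread count, you can read it from the pod itself. A small sketch, assuming the membench-edera pod from Step 1 is running (it falls back to the local machine's nproc when the pod isn't reachable):

```shell
# Query the vCPU count visible inside the pod and reuse it as --threads.
# nproc is available in the Alpine image via busybox; the fallback to the
# local nproc is only so the snippet runs without a cluster.
THREADS=$(kubectl exec membench-edera -- nproc 2>/dev/null || nproc)
echo "Using --threads=$THREADS"
```

Feed `$THREADS` to every sysbench invocation on both runtimes so the counts can never drift apart.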

Key annotations

As with CPU benchmarking, set dev.edera/cpu to match your pod's CPU request:

metadata:
  annotations:
    dev.edera/cpu: "8"     # Match this to your CPU request

Step 1: Deploy benchmark pods

Deploy identical pods on both runtimes. This example uses 8 CPUs:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: membench-edera
  labels:
    app: mem-bench
  annotations:
    dev.edera/cpu: "8"
spec:
  runtimeClassName: edera
  restartPolicy: Never
  containers:
    - name: bench
      image: alpine:3.21
      command: ["sh", "-c", "apk add --no-cache sysbench > /dev/null 2>&1 && sleep infinity"]
      resources:
        requests:
          cpu: "8"
          memory: "2Gi"
        limits:
          cpu: "8"
          memory: "2Gi"
---
apiVersion: v1
kind: Pod
metadata:
  name: membench-baseline
  labels:
    app: mem-bench
spec:
  restartPolicy: Never
  containers:
    - name: bench
      image: alpine:3.21
      command: ["sh", "-c", "apk add --no-cache sysbench > /dev/null 2>&1 && sleep infinity"]
      resources:
        requests:
          cpu: "8"
          memory: "2Gi"
        limits:
          cpu: "8"
          memory: "2Gi"
EOF

Wait for both pods:

kubectl wait --for=condition=Ready pod/membench-edera --timeout=120s
kubectl wait --for=condition=Ready pod/membench-baseline --timeout=120s

Step 2: Run single-threaded benchmark

Start with a single-threaded test. This isolates raw memory bandwidth from any threading effects:

# Edera zone
kubectl exec membench-edera -- \
  sysbench memory --memory-block-size=1M --memory-total-size=10G --threads=1 run

# Baseline
kubectl exec membench-baseline -- \
  sysbench memory --memory-block-size=1M --memory-total-size=10G --threads=1 run

Reading the results

Look for the transferred line:

102400.00 MiB transferred (17458.31 MiB/sec)

The MiB/sec value is your primary metric. Higher is better.

Expected result: Single-threaded memory bandwidth should be within 1-2% of baseline.
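
The MiB/sec figure can be pulled out of sysbench's output with a short awk pipeline. A sketch using the sample line above; in practice, pipe the `kubectl exec ... run` output into the same awk instead of echoing a literal:

```shell
# Split the "transferred" line on parentheses, then take the first field
# inside them, which is the bandwidth figure.
line='102400.00 MiB transferred (17458.31 MiB/sec)'
bw=$(echo "$line" | awk -F'[()]' '{print $2}' | awk '{print $1}')
echo "$bw"   # 17458.31
```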

Step 3: Run multi-threaded benchmark

Run with threads matching your vCPU count. Use the same thread count on both runtimes:

# Edera zone — 8 threads for 8 vCPUs
kubectl exec membench-edera -- \
  sysbench memory --memory-block-size=1M --memory-total-size=10G --threads=8 run

# Baseline — same 8 threads
kubectl exec membench-baseline -- \
  sysbench memory --memory-block-size=1M --memory-total-size=10G --threads=8 run

Expected result: Multi-threaded memory bandwidth should be within 2-3% of baseline.

Step 4: Run multiple iterations

For reliable results, run each configuration 5 times:

echo "=== Edera (single-threaded) ==="
for i in 1 2 3 4 5; do
  echo -n "Run $i: "
  kubectl exec membench-edera -- \
    sysbench memory --memory-block-size=1M --memory-total-size=10G --threads=1 run \
    | grep "transferred"
done

echo "=== Baseline (single-threaded) ==="
for i in 1 2 3 4 5; do
  echo -n "Run $i: "
  kubectl exec membench-baseline -- \
    sysbench memory --memory-block-size=1M --memory-total-size=10G --threads=1 run \
    | grep "transferred"
done

echo "=== Edera (multi-threaded) ==="
for i in 1 2 3 4 5; do
  echo -n "Run $i: "
  kubectl exec membench-edera -- \
    sysbench memory --memory-block-size=1M --memory-total-size=10G --threads=8 run \
    | grep "transferred"
done

echo "=== Baseline (multi-threaded) ==="
for i in 1 2 3 4 5; do
  echo -n "Run $i: "
  kubectl exec membench-baseline -- \
    sysbench memory --memory-block-size=1M --memory-total-size=10G --threads=8 run \
    | grep "transferred"
done
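
To summarize the five runs, average the MiB/sec values with awk. A sketch with sample figures; substitute the numbers your loops printed:

```shell
# Sum the per-run bandwidth figures and divide by the run count.
avg=$(printf '%s\n' 17458.31 17390.02 17512.88 17401.17 17466.54 \
  | awk '{ sum += $1; n++ } END { printf "%.2f", sum / n }')
echo "$avg MiB/sec"   # 17445.78 MiB/sec
```

Compute one average per configuration (Edera vs. baseline, single- vs. multi-threaded) before comparing them.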

Calculate the overhead from the averaged MiB/sec values:

Overhead = (baseline - edera) / baseline × 100
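
The same formula in shell, using awk for the floating-point math. The two bandwidth figures below are sample values, not measurements:

```shell
# Overhead = (baseline - edera) / baseline * 100, as a percentage.
baseline=17458.31
edera=17201.55
overhead=$(awk -v b="$baseline" -v e="$edera" \
  'BEGIN { printf "%.2f", (b - e) / b * 100 }')
echo "${overhead}%"   # 1.47%
```

A negative result simply means the Edera run was faster on that pass, which can happen within normal run-to-run variance.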

Cleanup

kubectl delete pod membench-edera membench-baseline

Scaling the test

Adjust thread count for your environment. The critical rule: threads must equal vCPU count on both runtimes.

Your workload     CPU request   dev.edera/cpu   --threads
Small service     "2"           "2"             2
Medium app        "4"           "4"             4
CPU-heavy app     "8"           "8"             8
Large compute     "15"          "15"            15

Troubleshooting

Unexpectedly high overhead

If Edera shows significantly higher overhead than expected:

  1. Check thread count. Running more threads than vCPUs is the most common cause of inflated overhead numbers. Verify --threads matches dev.edera/cpu.

  2. Check the annotation. Verify dev.edera/cpu is set:

    kubectl get pod membench-edera -o jsonpath='{.metadata.annotations.dev\.edera/cpu}'

  3. Check for outliers. Occasional runs may show ~50% of expected throughput due to scheduling contention. Run 5+ iterations and discard statistical outliers.
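
One simple way to damp such outliers is to report the median of the runs instead of the mean. A sketch with sample values, one of which simulates a contended run:

```shell
# Sort the five bandwidth figures numerically and pick the middle one.
# 9012.44 stands in for an outlier run hit by scheduling contention.
median=$(printf '%s\n' 17458.31 9012.44 17512.88 17401.17 17466.54 \
  | sort -n | awk '{ v[NR] = $1 } END { print v[int(NR / 2) + 1] }')
echo "$median"   # 17458.31
```

With five runs, the median ignores a single bad pass entirely, while the mean would drag the result down by roughly 10%.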

Pod stuck in ContainerCreating

Edera zones take longer to start than standard containers. Wait up to 120 seconds before investigating.
