Operations integration suite

This suite validates that Edera integrates with existing operational tools and workflows. The tests verify observability, automation, and workflow compatibility.

ℹ️ Before you begin: Complete the prerequisites, including cloning the learn repo and running make setup.

All manifests are available in the learn repository.

Test 1: Observability with Prometheus and Grafana

Edera Protect exposes Prometheus metrics at port 3035 on each Edera node. This test validates that you can scrape and visualize zone-level metrics.

Why this matters

Edera provides detailed metrics about each zone including CPU usage, memory allocation, and zone lifecycle timestamps. These metrics include Kubernetes labels (k8s_namespace, k8s_pod) so you can correlate zone metrics with your workloads.
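For example, once these metrics are flowing into Prometheus, a query along these lines (entered in Grafana or the Prometheus UI) groups zone memory by workload using those labels; the metric and label names are taken from the example output and reference table later in this test:

sum by (k8s_namespace, k8s_pod) (zone_memory_used_bytes)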

Install monitoring stack

make grafana-install

Configure Edera metrics scraping

Get your Edera node’s internal IP:

NODE_IP=$(kubectl get nodes -l runtime=edera -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
echo $NODE_IP

Create the Service, Endpoints, and ServiceMonitor:

sed "s/NODE_IP/${NODE_IP}/g" operations/edera-servicemonitor.yaml | kubectl apply -f -

Verify metrics are being scraped

Check that Prometheus can reach the Edera metrics endpoint:

kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl \
  -- curl -s http://$NODE_IP:3035/metrics | grep zone_memory

Example output:

zone_memory_total_bytes{k8s_namespace="default",k8s_pod="welcome-to-edera",zone_id="..."} 215666688
zone_memory_used_bytes{k8s_namespace="default",k8s_pod="welcome-to-edera",zone_id="..."} 89153536
zone_memory_free_bytes{k8s_namespace="default",k8s_pod="welcome-to-edera",zone_id="..."} 122712064

Access Grafana

Port-forward to access Grafana (uses port 3001 to avoid conflicts with local dev servers):

kubectl port-forward -n monitoring svc/prometheus-grafana 3001:80

If accessing from a bastion host, use an SSH tunnel with your bastion SSH key:

ssh -L 3001:localhost:3001 -i <your-bastion-key> ivy@<bastion-ip> "kubectl port-forward -n monitoring svc/prometheus-grafana 3001:80"

Get the admin password:

kubectl get secret -n monitoring prometheus-grafana -o jsonpath='{.data.admin-password}' | base64 -d && echo

Open http://localhost:3001 and log in with username admin and the password from the command above.

Edera metrics reference

Metric | Description
zone_cpu_usage_percent | CPU usage per zone (per vCPU)
zone_memory_total_bytes | Total memory allocated to zone
zone_memory_used_bytes | Memory used by zone
zone_memory_free_bytes | Free memory in zone
zone_create_timestamp_milliseconds | Zone creation time
zone_ready_timestamp_milliseconds | Zone ready time
hypervisor_cpu_usage_seconds_total | Hypervisor CPU time per zone
hypervisor_memory_max_bytes | Max memory limit per zone
host_cpu_usage_percent | Host CPU usage (dom0)
host_memory_used_bytes | Host memory usage
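
As one example of combining these, the per-zone memory utilization ratio can be graphed in Grafana with a query like the following (metric names from the table above):

zone_memory_used_bytes / zone_memory_total_bytes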

Success criteria:

Criteria | Expected result
Metrics endpoint accessible | curl returns Prometheus metrics
Zone metrics available | zone_* metrics show per-pod data
Kubernetes labels present | Metrics include k8s_namespace, k8s_pod

Cleanup:

kubectl delete -f operations/edera-servicemonitor.yaml
helm uninstall prometheus -n monitoring
kubectl delete namespace monitoring

Test 2: RuntimeClass automation with Kyverno

Kyverno is a policy engine for Kubernetes that can validate, mutate, and generate resources. This test demonstrates using Kyverno to automatically assign the Edera RuntimeClass to pods in designated namespaces—reducing adoption friction by eliminating the need to modify existing deployment manifests.

What this test validates

This test shows that platform teams can enforce Edera isolation through policy rather than requiring application teams to modify their deployments. Pods deployed to secured namespaces automatically run in Edera zones without any manifest changes.

Install Kyverno

make kyverno-install

Or manually:

helm repo add kyverno https://kyverno.github.io/kyverno/
helm repo update
helm upgrade --install kyverno kyverno/kyverno --namespace kyverno --create-namespace

Create auto-assignment policy

Apply the Kyverno policy:

kubectl apply -f operations/kyverno-edera-policy.yaml

The policy mutates pods in production and secure-workloads namespaces to add runtimeClassName: edera:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: assign-edera-runtime
spec:
  rules:
  - name: assign-runtime-class
    match:
      resources:
        kinds:
        - Pod
        namespaces:
        - production
        - secure-workloads
    mutate:
      patchStrategicMerge:
        spec:
          runtimeClassName: edera

Test the policy

Create the namespace and deploy a test pod without any RuntimeClass specified:

kubectl create namespace secure-workloads
kubectl apply -f operations/auto-edera-test.yaml
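
The auto-edera-test.yaml manifest lives in the learn repository; the important detail is that it does not set runtimeClassName. A minimal pod of roughly this shape (the nginx image is illustrative) exercises the same path:

apiVersion: v1
kind: Pod
metadata:
  name: auto-edera-test
  namespace: secure-workloads
spec:
  # Note: no runtimeClassName here; Kyverno adds it at admission time
  containers:
  - name: app
    image: nginx:alpine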

Verify RuntimeClass was assigned

kubectl get pod auto-edera-test -n secure-workloads -o jsonpath='{.spec.runtimeClassName}' && echo

Expected output:

edera

The pod’s manifest didn’t specify a RuntimeClass, but Kyverno automatically assigned edera because the pod was created in the secure-workloads namespace.
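
You can also confirm the pod was scheduled to an Edera node, as listed in the success criteria below:

kubectl get pod auto-edera-test -n secure-workloads -o wide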

Success criteria:

Criteria | Expected result
Policy applies to target namespaces | Pods in secure-workloads get RuntimeClass
No manifest changes required | Test pod has no runtimeClassName in its spec
Pod runs in Edera zone | kubectl get pod -o wide shows Edera node

Cleanup:

kubectl delete pod auto-edera-test -n secure-workloads
kubectl delete namespace secure-workloads
kubectl delete clusterpolicy assign-edera-runtime

Test 3: Workflow compatibility validation

Edera zones are transparent to Kubernetes tooling. This test validates that standard kubectl operations, Helm deployments, and GitOps workflows work unchanged with Edera workloads.

Why workflow compatibility matters

Operations teams need confidence that adopting Edera won’t break existing workflows. This test confirms that all standard Kubernetes operations work identically whether a pod runs in a standard container or an Edera zone.

kubectl operations

Deploy a test pod (or use any running Edera pod):

make welcome

Test kubectl exec:

kubectl exec welcome-to-edera -- uname -a

Expected output shows the Edera zone kernel:

Linux ... 6.12.62 #1 SMP PREEMPT_DYNAMIC ... x86_64 GNU/Linux

Test kubectl logs:

kubectl logs welcome-to-edera

Expected output shows container logs normally.

Test kubectl cp:

kubectl cp welcome-to-edera:/etc/hostname ./hostname-from-pod
cat ./hostname-from-pod

Expected output shows the copied file contents.

Test kubectl port-forward:

kubectl port-forward welcome-to-edera 8080:80 &
curl localhost:8080

Expected output shows the nginx welcome page.
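
When you're done, stop the background port-forward job:

kill %1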

Helm chart deployment

If you have existing Helm charts, add Edera by setting the RuntimeClass value (if your chart supports it):

helm install my-app <my-chart> --set podSpec.runtimeClassName=edera

Or patch an existing deployment to add Edera:

kubectl patch deployment <my-app> \
  -p '{"spec":{"template":{"spec":{"runtimeClassName":"edera"}}}}'

The specific values path depends on your chart’s values schema.
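
As an illustration of what "supports it" means (the podSpec.runtimeClassName path here is hypothetical, matching the --set flag above), a chart typically exposes the value in values.yaml and threads it into the pod template:

# values.yaml (hypothetical layout)
podSpec:
  runtimeClassName: ""

# templates/deployment.yaml (excerpt)
spec:
  template:
    spec:
      {{- with .Values.podSpec.runtimeClassName }}
      runtimeClassName: {{ . }}
      {{- end }}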

GitOps compatibility

Edera works with ArgoCD, Flux, and other GitOps tools. Add RuntimeClass to your manifests:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    spec:
      runtimeClassName: edera  # Only change needed
      containers:
      - name: my-app
        image: my-app:latest

Success criteria:

Operation | Expected result
kubectl exec | Shell access works, shows Edera kernel
kubectl logs | Container logs visible
kubectl cp | File copy works
kubectl port-forward | Port forwarding works
Helm deploy | Charts deploy with RuntimeClass
GitOps sync | ArgoCD/Flux sync normally

Summary

Test | What it validates | Success criteria
Grafana observability | Metrics visibility | Dashboard shows zone metrics
Kyverno automation | RuntimeClass auto-assignment | Policy applies RuntimeClass
Workflow compatibility | Existing tools work | No workflow changes required

Cleanup

Remove all operations test resources:

make clean-operations
