Operations integration suite
This suite validates that Edera integrates with existing operational tools and workflows. The tests verify observability, automation, and workflow compatibility.
Set up the test environment:
make setup
All manifests are available in the learn repository.
Test 1: Observability with Prometheus and Grafana
Edera Protect exposes Prometheus metrics at port 3035 on each Edera node. This test validates that you can scrape and visualize zone-level metrics.
Why this matters
Edera provides detailed metrics about each zone including CPU usage, memory allocation, and zone lifecycle timestamps. These metrics include Kubernetes labels (k8s_namespace, k8s_pod) so you can correlate zone metrics with your workloads.
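These labels also make ad hoc queries straightforward once scraping is set up. For example, a PromQL expression like the following (using the metric and label names from the reference table later in this test) charts per-pod memory utilization as a percentage:
100 * zone_memory_used_bytes / zone_memory_total_bytes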
Install monitoring stack
make grafana-install
Configure Edera metrics scraping
Get your Edera node’s internal IP:
NODE_IP=$(kubectl get nodes -l runtime=edera -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
echo $NODE_IP
Create the Service, Endpoints, and ServiceMonitor.
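The repository manifest uses a NODE_IP placeholder; a minimal sketch of the shape such a manifest typically takes (resource names, labels, and the release: prometheus selector label are illustrative assumptions, not the repository's actual values):
apiVersion: v1
kind: Service
metadata:
  name: edera-metrics              # illustrative name
  namespace: monitoring
  labels:
    app: edera-metrics
spec:
  clusterIP: None                  # headless; endpoints are defined manually below
  ports:
    - name: metrics
      port: 3035
---
apiVersion: v1
kind: Endpoints
metadata:
  name: edera-metrics              # must match the Service name
  namespace: monitoring
subsets:
  - addresses:
      - ip: NODE_IP                # placeholder replaced by the sed command below
    ports:
      - name: metrics
        port: 3035
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: edera-metrics
  namespace: monitoring
  labels:
    release: prometheus            # assumed so the kube-prometheus-stack operator discovers it
spec:
  selector:
    matchLabels:
      app: edera-metrics
  endpoints:
    - port: metrics
Apply the manifest, substituting your node's IP: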
sed "s/NODE_IP/${NODE_IP}/g" operations/edera-servicemonitor.yaml | kubectl apply -f -Verify metrics are being scraped
Check that Prometheus can reach the Edera metrics endpoint:
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl \
  -- curl -s http://$NODE_IP:3035/metrics | grep zone_memory
Example output:
zone_memory_total_bytes{k8s_namespace="default",k8s_pod="welcome-to-edera",zone_id="..."} 215666688
zone_memory_used_bytes{k8s_namespace="default",k8s_pod="welcome-to-edera",zone_id="..."} 89153536
zone_memory_free_bytes{k8s_namespace="default",k8s_pod="welcome-to-edera",zone_id="..."} 122712064
Access Grafana
Port-forward to access Grafana (uses port 3001 to avoid conflicts with local dev servers):
kubectl port-forward -n monitoring svc/prometheus-grafana 3001:80
If accessing from a bastion host, use an SSH tunnel with your bastion SSH key:
ssh -L 3001:localhost:3001 -i <your-bastion-key> ivy@<bastion-ip> "kubectl port-forward -n monitoring svc/prometheus-grafana 3001:80"
Get the admin password:
kubectl get secret -n monitoring prometheus-grafana -o jsonpath='{.data.admin-password}' | base64 -d && echo
Open http://localhost:3001 and log in with username admin and the password from the command above.
Edera metrics reference
| Metric | Description |
|---|---|
| zone_cpu_usage_percent | CPU usage per zone (per vCPU) |
| zone_memory_total_bytes | Total memory allocated to zone |
| zone_memory_used_bytes | Memory used by zone |
| zone_memory_free_bytes | Free memory in zone |
| zone_create_timestamp_milliseconds | Zone creation time |
| zone_ready_timestamp_milliseconds | Zone ready time |
| hypervisor_cpu_usage_seconds_total | Hypervisor CPU time per zone |
| hypervisor_memory_max_bytes | Max memory limit per zone |
| host_cpu_usage_percent | Host CPU usage (dom0) |
| host_memory_used_bytes | Host memory usage |
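The two timestamp metrics can be combined to estimate zone startup latency. For example, this PromQL expression (simple arithmetic over the gauges above) reports the milliseconds each zone took to go from created to ready:
zone_ready_timestamp_milliseconds - zone_create_timestamp_milliseconds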
Success criteria:
| Criteria | Expected result |
|---|---|
| Metrics endpoint accessible | curl returns Prometheus metrics |
| Zone metrics available | zone_* metrics show per-pod data |
| Kubernetes labels present | Metrics include k8s_namespace, k8s_pod |
Cleanup:
kubectl delete -f operations/edera-servicemonitor.yaml
helm uninstall prometheus -n monitoring
kubectl delete namespace monitoring
Test 2: RuntimeClass automation with Kyverno
Kyverno is a policy engine for Kubernetes that can validate, mutate, and generate resources. This test demonstrates using Kyverno to automatically assign the Edera RuntimeClass to pods in designated namespaces—reducing adoption friction by eliminating the need to modify existing deployment manifests.
What this test validates
This test shows that platform teams can enforce Edera isolation through policy rather than requiring application teams to modify their deployments. Pods deployed to secured namespaces automatically run in Edera zones without any manifest changes.
Install Kyverno
make kyverno-install
Or manually:
helm repo add kyverno https://kyverno.github.io/kyverno/
helm repo update
helm upgrade --install kyverno kyverno/kyverno --namespace kyverno --create-namespace
Create auto-assignment policy
Apply the Kyverno policy:
kubectl apply -f operations/kyverno-edera-policy.yaml
The policy mutates pods in the production and secure-workloads namespaces to add runtimeClassName: edera:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: assign-edera-runtime
spec:
  rules:
    - name: assign-runtime-class
      match:
        resources:
          kinds:
            - Pod
          namespaces:
            - production
            - secure-workloads
      mutate:
        patchStrategicMerge:
          spec:
            runtimeClassName: edera
Test the policy
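Before deploying anything, you can confirm Kyverno admitted the policy (in recent Kyverno versions, the READY column should report true):
kubectl get clusterpolicy assign-edera-runtime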
Now deploy a test pod without any RuntimeClass specified.
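The manifest in operations/auto-edera-test.yaml isn't reproduced here; presumably it is just a pod with no runtimeClassName, along these lines (the pod name matches the commands below; the image is illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: auto-edera-test
  namespace: secure-workloads
spec:
  containers:
    - name: test
      image: nginx:alpine   # illustrative; note there is no runtimeClassName in this spec
Create the namespace and apply the manifest: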
kubectl create namespace secure-workloads
kubectl apply -f operations/auto-edera-test.yaml
Verify RuntimeClass was assigned
kubectl get pod auto-edera-test -n secure-workloads -o jsonpath='{.spec.runtimeClassName}' && echo
Expected output:
edera
The pod’s manifest didn’t specify a RuntimeClass, but Kyverno automatically assigned edera because the pod was created in the secure-workloads namespace.
Success criteria:
| Criteria | Expected result |
|---|---|
| Policy applies to target namespaces | Pods in secure-workloads get RuntimeClass |
| No manifest changes required | The applied manifest contains no runtimeClassName; Kyverno injects it |
| Pod runs in Edera zone | kubectl get pod -o wide shows Edera node |
Cleanup:
kubectl delete pod auto-edera-test -n secure-workloads
kubectl delete namespace secure-workloads
kubectl delete clusterpolicy assign-edera-runtime
Test 3: Workflow compatibility validation
Edera zones are transparent to Kubernetes tooling. This test validates that standard kubectl operations, Helm deployments, and GitOps workflows work unchanged with Edera workloads.
Why workflow compatibility matters
Operations teams need confidence that adopting Edera won’t break existing workflows. This test confirms that all standard Kubernetes operations work identically whether a pod runs in a standard container or an Edera zone.
kubectl operations
Deploy a test pod (or use any running Edera pod):
make welcome
Test kubectl exec:
kubectl exec welcome-to-edera -- uname -a
Expected output shows the Edera zone kernel:
Linux ... 6.12.62 #1 SMP PREEMPT_DYNAMIC ... x86_64 GNU/Linux
Test kubectl logs:
kubectl logs welcome-to-edera
Expected output shows container logs normally.
Test kubectl cp:
kubectl cp welcome-to-edera:/etc/hostname ./hostname-from-pod
cat ./hostname-from-pod
Expected output shows the copied file contents.
Test kubectl port-forward:
kubectl port-forward welcome-to-edera 8080:80 &
curl localhost:8080
Expected output shows the nginx welcome page.
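The & backgrounds the port-forward; when you're done, stop it with standard shell job control:
kill %1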
Helm chart deployment
If you have existing Helm charts, add Edera by setting the RuntimeClass value, provided your chart exposes one:
helm install my-app <my-chart> --set podSpec.runtimeClassName=edera
Or patch an existing deployment to add Edera:
kubectl patch deployment <my-app> \
  -p '{"spec":{"template":{"spec":{"runtimeClassName":"edera"}}}}'
The specific values path depends on your chart’s values schema.
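If your chart doesn't expose a RuntimeClass value yet, wiring one in is a small template change. A sketch, assuming a values key named podSpec.runtimeClassName to match the helm install example above (your chart's actual key may differ):
# values.yaml
podSpec:
  runtimeClassName: ""

# templates/deployment.yaml, inside spec.template.spec
{{- with .Values.podSpec.runtimeClassName }}
runtimeClassName: {{ . }}
{{- end }}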
GitOps compatibility
Edera works with ArgoCD, Flux, and other GitOps tools. Add RuntimeClass to your manifests:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    spec:
      runtimeClassName: edera  # Only change needed
      containers:
        - name: my-app
          image: my-app:latest
Success criteria:
| Operation | Expected result |
|---|---|
| kubectl exec | Shell access works, shows Edera kernel |
| kubectl logs | Container logs visible |
| kubectl cp | File copy works |
| kubectl port-forward | Port forwarding works |
| Helm deploy | Charts deploy with RuntimeClass |
| GitOps sync | ArgoCD/Flux sync normally |
Summary
| Test | What it validates | Success criteria |
|---|---|---|
| Grafana observability | Metrics visibility | Dashboard shows zone metrics |
| Kyverno automation | RuntimeClass auto-assignment | Policy applies RuntimeClass |
| Workflow compatibility | Existing tools work | No workflow changes required |
Cleanup
Remove all operations test resources:
make clean-operations
Next steps
- Security demonstration suite - Validate container isolation
- Performance validation suite - Benchmark performance