Securing Edera
This guide walks through the steps required to harden an Edera deployment for production. It covers Kubernetes admission policies, network policies, RBAC, secrets management, and Edera-specific configuration.
All manifests are available in the learn repository.
For design rationale behind these requirements, see the Security reference architecture.
Test on EKS
If you want to run the full end-to-end hardening test on a real cluster before applying to production, the learn repository includes a complete EKS setup with hardening wired in:
```
git clone https://github.com/edera-dev/learn.git

# Deploy EKS cluster with Edera nodes (~15 min)
cd learn/getting-started/eks-terraform
cp terraform.tfvars.example terraform.tfvars
# edit terraform.tfvars with your Edera account ID
make deploy
make test

# Apply and verify hardening policies
make harden

# When done
make harden-clean
make destroy
```

`make harden` installs Kyverno, applies the admission policies and network policy, and runs the verification tests against the cluster. See the EKS install guide for prerequisites and configuration options.
Prerequisites
- Edera installed on your cluster nodes
- `kubectl` access with cluster-admin permissions
- Kyverno installed for admission policies (or OPA Gatekeeper, or PodSecurity admission)
- Familiarity with Kubernetes NetworkPolicy resources
Step 1: Enforce pod security admission
Apply admission policies to deny configurations that weaken zone isolation. These must be in place before tenant workloads are deployed.
Using Kyverno
Clone the learn repository and apply the hardening policy:
```
git clone https://github.com/edera-dev/learn.git
cd learn/harden
make apply-policies
```

Or apply directly:

```
kubectl apply -f https://raw.githubusercontent.com/edera-dev/learn/main/harden/policies/kyverno-edera-hardening.yaml
```

The policy enforces these controls:
| Control | Setting |
|---|---|
| hostPath volumes | Deny |
| hostNetwork | Deny |
| hostPID | Deny |
| hostIPC | Deny |
| Privileged mode | Deny by default |
| Capabilities | Drop ALL, add only what workloads require |
| Volume types | Allow configMap, secret, emptyDir, projected, downwardAPI, PVC. Deny all host-local types. |
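The repository policy implements these controls as Kyverno validation rules. As a rough sketch of the pattern (the rule name and namespace scoping shown here are illustrative, not the repository's exact manifest):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: edera-hardening-sketch
spec:
  validationFailureAction: Enforce
  rules:
    - name: deny-host-namespaces-and-hostpath
      match:
        any:
          - resources:
              kinds: [Pod]
              # Scope to Edera tenant namespaces if you run mixed node pools
              namespaces: ["tenant-*"]
      validate:
        message: "Host namespaces and hostPath volumes are not allowed."
        pattern:
          spec:
            =(hostNetwork): false
            =(hostPID): false
            =(hostIPC): false
            =(volumes):
              # Kyverno negation anchor: the hostPath field must be absent
              - X(hostPath): "null"
```

The `=()` anchors validate a field only if it is present, and the `X()` negation anchor rejects pods that set `hostPath` at all.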
ClusterPolicy applies cluster-wide by default. If you run Edera as a node pool alongside standard runc nodes, scope the policy to Edera tenant namespaces using `match.resources.namespaces` in the Kyverno policy to avoid blocking workloads that legitimately need host access on non-Edera nodes.

Dropping ALL capabilities breaks workloads that need specific capabilities (for example, `NET_BIND_SERVICE` for services binding to ports below 1024). Audit your workloads before enforcing this rule cluster-wide – check current capability usage with `kubectl exec <pod> -- capsh --print`. Add back only the specific capabilities each workload requires.

Using Kubernetes PodSecurity admission
As an alternative to Kyverno, label tenant namespaces to enforce the built-in restricted profile:
```
kubectl label namespace <tenant-namespace> \
  pod-security.kubernetes.io/enforce=restricted \
  pod-security.kubernetes.io/enforce-version=latest
```

Verify enforcement
```
make verify
```

This runs dry-run tests to confirm hostPath and hostNetwork are denied.
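You can also spot-check manually with a server-side dry run. Save a pod that should be rejected (the pod name here is illustrative), then apply it with `kubectl apply --dry-run=server -f denied-pod.yaml -n <tenant-namespace>` and confirm admission denies it:

```yaml
# denied-pod.yaml – should be rejected by the hardening policy
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-probe
spec:
  hostNetwork: true              # denied control
  containers:
    - name: probe
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: host-root
          mountPath: /host
  volumes:
    - name: host-root
      hostPath:                  # denied control
        path: /
```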
Step 2: Apply network policies
Zones can reach other pods and node services by default. Apply a default-deny egress policy to each tenant namespace, then add explicit allow rules for required destinations only.
```
make apply-network-policy NAMESPACE=<tenant-namespace>
```

Or apply directly:

```
kubectl apply -f https://raw.githubusercontent.com/edera-dev/learn/main/harden/policies/network-policy-default-deny-egress.yaml
```

Ports that must not be reachable from zones:
| Port | Service | Risk |
|---|---|---|
| 10250 | Kubelet API | Pod listing, exec, log access |
| 10255 | Kubelet read-only | Pod and node info |
| 22 | SSH | Direct host access |
| 10249 | kube-proxy metrics | Service topology disclosure |
| 6443 | kube-apiserver | Full cluster API access |
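Default-deny egress follows the standard NetworkPolicy pattern. A sketch (not necessarily identical to the repository manifest) that denies all egress, paired with an allow rule for DNS so workloads can still resolve names:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: tenant-a          # illustrative namespace
spec:
  podSelector: {}              # selects all pods in the namespace
  policyTypes: [Egress]        # no egress rules listed = all egress denied
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: tenant-a
spec:
  podSelector: {}
  policyTypes: [Egress]
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```

Add further allow rules per destination as workloads need them; anything without a matching rule stays blocked.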
Step 3: Lock down RBAC
Disable default ServiceAccount token mounting
```
kubectl patch serviceaccount default \
  -n <tenant-namespace> \
  -p '{"automountServiceAccountToken": false}'
```

Create workload-specific ServiceAccounts
Do not reuse the default ServiceAccount. Create one per workload with minimum required permissions:
```
kubectl create serviceaccount <workload-name> -n <tenant-namespace>
```

Audit for privilege escalation paths
Check whether any ServiceAccount can create pods with elevated permissions:
```
kubectl auth can-i create pods \
  --as=system:serviceaccount:<namespace>:<serviceaccount>
```

Step 4: Harden secrets management
Audit mounted credentials
Find existing workloads with mounted secrets:
```
kubectl get pods -n <tenant-namespace> -o json | \
  jq '.items[].spec.volumes[] | select(.secret != null) | .secret.secretName'
```

Replace mounted credentials with runtime-fetched secrets:
| Credential type | Recommended approach |
|---|---|
| Cloud IAM (AWS) | IRSA – injected via OIDC, no credentials mounted |
| Cloud IAM (GCP) | Workload Identity |
| Database passwords | External Secrets Operator with short-lived tokens |
| API keys | HashiCorp Vault with AppRole auth and TTL-limited tokens |
| TLS certificates | cert-manager with short rotation periods |
| Encryption keys | Cloud KMS – never in-zone |
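For the External Secrets Operator row, the pattern looks roughly like the following sketch (the `vault-backend` SecretStore and the Vault path are illustrative assumptions): the operator fetches the credential at runtime and refreshes it on an interval, so no long-lived secret ships in the pod spec.

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
  namespace: tenant-a            # illustrative namespace
spec:
  refreshInterval: 1h            # re-fetch so short-lived tokens stay fresh
  secretStoreRef:
    name: vault-backend          # assumes a pre-configured SecretStore
    kind: SecretStore
  target:
    name: db-credentials         # Kubernetes Secret the operator manages
  data:
    - secretKey: password
      remoteRef:
        key: database/creds/app  # illustrative Vault path
        property: password
```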
Mounted secrets persist on the host filesystem under `/var/lib/edera/protect/state/<UUID>/mounts/`. Use runtime fetch patterns to limit exposure to the token’s TTL.

Scope ServiceAccount tokens
For workloads that need Kubernetes API access, use audience-scoped tokens:
```
kubectl create token <serviceaccount-name> \
  --audience=<your-service> \
  --duration=1h \
  -n <namespace>
```

Step 5: Configure Edera-specific settings
Pin zone kernels
Set the zone kernel annotation on workload pods to track the latest patched kernel:
```
kubectl annotate pod <pod-name> dev.edera/kernel=ghcr.io/edera-dev/zone-kernel:latest
```

Or add it directly to the pod spec:
```
metadata:
  annotations:
    dev.edera/kernel: "ghcr.io/edera-dev/zone-kernel:latest"
```

Using `:latest` means the zone kernel may update when the pod restarts. This is usually desirable for security patching, but workloads with tight kernel ABI dependencies may be affected. If you need stability guarantees, pin to a specific version tag and update it deliberately as part of your node patching process.

AI agent workloads
Deploy the hardened AI agent example from the learn repository:
```
make apply-ai-agent
```

The example manifest configures:

- `automountServiceAccountToken: false` – agent does not need Kubernetes API access
- `privileged: false` and `capabilities: drop: ALL` – minimal in-zone attack surface
- `readOnlyRootFilesystem: true` – prevents in-zone persistence
- `restartPolicy: Never` – prevents automatic restart of a potentially compromised agent
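Those settings correspond to a pod spec along these lines (the pod name and image are illustrative, not the repository's exact manifest):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ai-agent
spec:
  restartPolicy: Never                   # no automatic restart if compromised
  automountServiceAccountToken: false    # no Kubernetes API token in the zone
  containers:
    - name: agent
      image: ghcr.io/example/agent:1.0   # illustrative image
      securityContext:
        privileged: false
        readOnlyRootFilesystem: true     # no in-zone persistence
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]
```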
Step 6: Deploy monitoring
Install Falco with the Edera plugin
Standard Falco cannot see inside zones because each zone has its own kernel. The Edera plugin bridges this.
See the Falco integration guide for setup instructions.
Key alerts to configure
| Signal | What it indicates |
|---|---|
| `/dev/xen/*` device opens | Hypercall or grant table probing |
| Xenbus protocol activity | Xenstore manipulation attempt |
| `mount` syscall with 9p type | Unexpected 9p filesystem mount |
| Connections to ports 10250, 22, 10249 | Attempt to reach kubelet or SSH |
| Zone crash or restart cycling | DoS or active exploit attempt |
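As a sketch, the first signal could be expressed as a Falco rule like the following (the rule name, condition fields, and priority are illustrative; the Edera plugin's shipped rules may differ):

```yaml
- rule: Xen Device Opened In Zone
  desc: Detect a process opening a /dev/xen device from inside a zone
  condition: >
    evt.type in (open, openat) and evt.dir = <
    and fd.name startswith /dev/xen/
  output: >
    Xen device opened (file=%fd.name process=%proc.name
    container=%container.name)
  priority: WARNING
  tags: [edera, xen]
```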
Step 7: Verify your configuration
Work through this checklist before moving tenant workloads to production.
Admission policies
- hostPath volumes denied in all tenant namespaces
- hostNetwork denied
- hostPID denied
- hostIPC denied
- Privileged pods denied by default
- ALL capabilities dropped by default
- `automountServiceAccountToken: false` on tenant namespace default ServiceAccounts
Network policies
- Default-deny egress applied to tenant namespaces
- Node ports (10250, 10255, 22, 10249, 6443) not reachable from zones
- Cross-namespace traffic denied by default
- Internet egress restricted to required destinations only
Secrets management
- No long-lived credentials mounted as volumes
- Cloud IAM using IRSA or Workload Identity
- Database and API credentials fetched at runtime
- ServiceAccount tokens scoped and short-lived
Edera configuration
- Zone kernels set to `:latest` or an actively managed version
- Falco deployed with Edera zone-aware plugin
- Dom0 access restricted to infrastructure operators only
Monitoring
- Falco rules deployed for Xen device access
- Zone crash and restart frequency monitored
- Network connections to node ports logged and alerted
- Xenstore write pattern alerts configured
Next steps
- Security reference architecture – design rationale and multi-tenancy architecture details
- Incident response procedures – zone compromise and XSA response playbooks
- Falco integration – detailed Falco setup and rule configuration
- Security validation suite – hands-on tests to verify isolation is working