Security reference architecture
Prescriptive configuration, policy, and operational requirements for deploying Edera in enterprise environments.
Design principles
The zone boundary is the security boundary. Everything inside a zone is untrusted. Do not rely on in-zone security controls (seccomp, AppArmor, capabilities) for multi-tenancy—they add defense-in-depth but the zone itself is the isolation primitive.
Kubernetes configuration is the operator’s responsibility. Edera isolates the workload, not the configuration. Pod specs that weaken isolation (hostPath, hostNetwork) must be prevented by policy, not trusted to good intentions.
Network access is not isolated by default. Zones can reach other pods, node services, and the internet. Network policy is mandatory, not optional.
Secrets must not traverse the zone boundary unnecessarily. Anything mounted into a zone is accessible to the workload. Mount only the secrets the workload actually needs.
Mandatory Kubernetes policies
These policies must be enforced for any multi-tenant or AI-agent deployment.
Pod security
Use PodSecurity admission (Restricted profile baseline) or OPA Gatekeeper/Kyverno with the following rules:
| Control | Setting | Rationale |
|---|---|---|
| hostPath volumes | Deny | hostPath exposes the host filesystem, including the hypervisor binaries; there is no safe way to allow it for untrusted workloads |
| hostNetwork | Deny | Edera rejects it, but prevent at admission too |
| hostPID | Deny | Edera ignores it, but prevent at admission to be explicit |
| hostIPC | Deny | Same as hostPID |
| privileged | Deny (default), Allow (per-workload exception) | Privileged is safe in zones but creates a wider in-zone attack surface for future escapes |
| Capabilities | Drop ALL, add only what’s needed | Reduces in-zone attack surface |
| Volume types | Allow: configMap, secret, emptyDir, projected, downwardAPI, PVC. Deny: hostPath, nfs, iscsi, fc, any host-local type | Prevent host filesystem access |
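The admission rules above can be enforced with PodSecurity admission labels or a policy engine. A minimal Kyverno sketch for the hostPath rule (policy and rule names are illustrative):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: deny-hostpath          # illustrative name
spec:
  validationFailureAction: Enforce
  background: true
  rules:
    - name: no-hostpath-volumes
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "hostPath volumes are not permitted"
        pattern:
          spec:
            # =(volumes) applies only when volumes is present;
            # X(hostPath) denies any entry that sets hostPath
            =(volumes):
              - X(hostPath): "null"
```

Equivalent rules for hostNetwork, hostPID, hostIPC, privileged, and the volume-type allowlist follow the same pattern.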
Zone configuration applied outside Kubernetes, such as through the protect CLI command, is outside Edera’s security boundary. Operators are responsible for ensuring that zone configurations do not undermine isolation. There is no safe way to use hostPath with untrusted workloads.

Privileged pods retain raw device access, `/dev/mem` access, and Xen device access inside the zone. These dramatically increase the attack surface for future exploits (hypercall access, grant table probing, etc.). Non-privileged pods lack these interfaces.

Network policies
Apply a default-deny egress policy that allows only DNS and intra-namespace traffic:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-node-access
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    # Allow DNS
    - to:
        - namespaceSelector: {}
      ports:
        - port: 53
          protocol: UDP
        - port: 53
          protocol: TCP
    # Allow pod-to-pod within namespace
    - to:
        - podSelector: {}
    # Deny: node kubelet (10250), SSH (22), kube-proxy (10249)
```

Specific ports to block from zones:
| Port | Service | Risk |
|---|---|---|
| 10250 | Kubelet API | Pod listing, exec, log access |
| 10255 | Kubelet read-only | Pod and node info |
| 22 | SSH | Direct host access |
| 10249 | kube-proxy metrics | Service topology disclosure |
| 6443 | kube-apiserver (if on node) | Full cluster API |
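Where internet egress is required, it can be allowed on top of the default-deny baseline while still excluding node and link-local addresses. A sketch, assuming illustrative CIDRs that must be adjusted to the cluster’s actual node and pod networks:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-internet-not-nodes   # illustrative name
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 10.0.0.0/8       # node/pod network (adjust to your CIDRs)
              - 169.254.0.0/16   # link-local, incl. cloud metadata endpoints
```

With a default-deny baseline in place, traffic to the `except` ranges is simply never allowed, which covers the node ports in the table above.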
Kubernetes RBAC
- Default ServiceAccount must have no permissions (enforce with `automountServiceAccountToken: false` on each namespace’s default ServiceAccount)
- Workload ServiceAccounts must have minimum required permissions
- No ServiceAccount should be able to create pods with hostPath volumes (this is the RBAC equivalent of the hostPath escape)
- Audit log all pod creation events with hostPath or privileged settings
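The default-ServiceAccount rule can be enforced directly on the account object; the namespace name here is illustrative:

```yaml
# Disable token automount for every pod using the namespace default ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: tenant-a   # illustrative tenant namespace
automountServiceAccountToken: false
```

Individual pods that genuinely need API access can still opt back in with an explicit pod-level setting.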
Secrets management
What not to do
Do not mount long-lived credentials into zones. Everything mounted from the host is stored in plaintext on the host filesystem at /var/lib/edera/protect/state/<UUID>/mounts/. If the host is compromised through any means (unrelated Edera bug, host-level vulnerability, physical access), all secrets mounted into all active zones are readable from a single directory.
Do not use Kubernetes Secrets projected into pods for high-value credentials (database passwords, API keys, cloud IAM credentials). The ServiceAccount token is always projected and is an acceptable risk (it can be scoped and rotated), but additional secrets increase exposure.
What to do instead
| Secret type | Recommended approach |
|---|---|
| Cloud IAM credentials | IRSA/Pod Identity (never mounted, injected via OIDC) |
| Database passwords | External secrets operator + short-lived tokens |
| API keys | Vault with AppRole auth, TTL-limited tokens |
| TLS certificates | cert-manager with short rotation periods |
| Application config | ConfigMaps for non-sensitive config, external KMS for sensitive values |
| Encryption keys | KMS (AWS KMS, GCP KMS, Azure Key Vault)—never in-zone |
The key principle: secrets should be fetched by the workload at runtime, not mounted into the pod spec. This limits exposure to the duration of the token’s TTL and prevents secrets from sitting on the host filesystem.
ServiceAccount token handling
The default projected ServiceAccount token is mounted at /run/secrets/kubernetes.io/serviceaccount/. This is acceptable for workloads that need Kubernetes API access, but:
- Set `automountServiceAccountToken: false` on ServiceAccounts and pods that don’t need K8s API access
- Use bound ServiceAccount tokens (audience-scoped, short-lived)
- Monitor token usage and alert on unexpected API calls
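When a workload does need a token, a bound token can be projected explicitly; the audience, TTL, and volume name below are illustrative:

```yaml
# Pod spec fragment: audience-scoped, short-lived token projection
volumes:
  - name: bound-token
    projected:
      sources:
        - serviceAccountToken:
            audience: vault          # accepted audience -- illustrative
            expirationSeconds: 600   # 10-minute TTL; kubelet rotates before expiry
            path: token
```

A stolen bound token is only usable against the named audience and only until its TTL expires.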
Multi-tenancy architecture
What Edera isolates (hardware level)
- Kernel: each zone has its own kernel instance
- Memory: hypervisor-partitioned, not accessible across zones
- CPU: hypervisor-scheduled vCPUs
- Block storage: separate virtual disks per zone
- Xenstore: ACL-enforced per-domain isolation
- PID namespace: always isolated regardless of pod spec
- IPC namespace: always isolated regardless of pod spec
What operators must isolate (policy level)
- Network: NetworkPolicies between tenant namespaces
- Kubernetes API: RBAC per-tenant, no cross-tenant resource access
- Secrets: separate secret stores per tenant, never shared volumes
- Node access: no hostPath, no hostNetwork (admission policy)
- Resource limits: ResourceQuotas per namespace to prevent noisy-neighbor
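The noisy-neighbor rule above maps to a per-namespace ResourceQuota; names and limits here are illustrative:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-quota      # illustrative
  namespace: tenant-a     # illustrative tenant namespace
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
    pods: "50"
```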
Residual cross-tenant risks
Even with Edera zones, the following are shared between tenants on the same node and represent residual risk:
| Shared resource | Risk | Mitigation |
|---|---|---|
| Dom0 kernel | Dom0 compromise affects all zones | Keep Edera updated |
| Physical hardware | Side-channel attacks (Spectre, etc.) | Hardware mitigations, dedicated node pools |
| CNI / network fabric | Network-level attacks between pods | NetworkPolicies, encryption (WireGuard/mTLS) |
| Node kubelet | Kubelet compromise affects all pods | Restrict kubelet anonymous auth, use webhook authn |
AI agent isolation
For deploying production AI agents that execute arbitrary code:
Zone configuration
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ai-agent-workload
  annotations:
    # Pin zone kernel to latest for auto-patching
    dev.edera/kernel: "ghcr.io/edera-dev/zone-kernel:latest"
spec:
  runtimeClassName: edera
  automountServiceAccountToken: false  # Agent doesn't need K8s API
  containers:
    - name: agent
      image: <agent-image>
      securityContext:
        privileged: false              # Minimize in-zone attack surface
        capabilities:
          drop: ["ALL"]                # No extra caps
        readOnlyRootFilesystem: true   # Prevent in-zone persistence
        runAsNonRoot: true
  restartPolicy: Never                 # Don't auto-restart compromised agents
```

Agent-specific risks
| Risk | Description | Mitigation |
|---|---|---|
| Prompt injection code execution | Agent runs attacker-supplied code | Zone isolates blast radius |
| Tool abuse | Agent calls dangerous tools (file write, network) | Application-layer tool restrictions + zone isolation |
| Credential theft | Agent accesses mounted secrets | Don’t mount secrets; use runtime fetch with short TTLs |
| Persistence | Agent writes backdoor to disk | readOnlyRootFilesystem + ephemeral zones |
| Exfiltration | Agent sends data to external servers | NetworkPolicy egress restrictions |
| Resource exhaustion | Agent consumes excessive CPU/memory | Resource limits enforced by hypervisor |
| Lateral movement | Agent accesses other services | NetworkPolicy + no SA token |
Ephemeral zone pattern
For maximum isolation of AI agent workloads, use ephemeral zones that are destroyed after each task:
- Create zone—mount only the task-specific input data
- Run agent—agent processes task
- Extract output—copy results to persistent storage
- Destroy zone—all in-zone state is lost
This pattern ensures:
- No persistence across tasks (malware cannot survive zone destruction)
- No accumulated state (each task starts clean)
- Minimal secret exposure (only task-specific data mounted)
- Clean forensic boundary (zone destruction is atomic)
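The pattern above maps naturally onto a Kubernetes Job; the names, image, and TTL below are illustrative:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: agent-task             # illustrative per-task name
spec:
  backoffLimit: 0              # never retry a possibly compromised task
  ttlSecondsAfterFinished: 60  # garbage-collect the pod, destroying its zone
  template:
    spec:
      runtimeClassName: edera
      restartPolicy: Never
      automountServiceAccountToken: false
      containers:
        - name: agent
          image: agent-image:pinned     # illustrative
          securityContext:
            readOnlyRootFilesystem: true
          volumeMounts:
            - name: task-input
              mountPath: /input
              readOnly: true
      volumes:
        - name: task-input
          configMap:
            name: task-input-data       # illustrative task-specific input
```

Outputs should be written to an external sink (object storage, queue) before completion, since all in-zone state is lost when the Job is garbage-collected.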
Monitoring and detection
What to monitor
| Signal | Source | Indicates |
|---|---|---|
| Xenstore write patterns | Dom0 xenstore audit | Unusual device config modification |
| 9p operations on sensitive files | Edera daemon logs | Attempted path traversal or unusual file access |
| Hypercall frequency | Xen hypercall tracing | Automated exploit probing |
| Grant table operations | Xen grant table audit | Memory mapping attempts |
| Zone crash/restart frequency | Kubernetes events | Rapid cycling (DoS or exploit attempts) |
| Network connections to node ports | NetworkPolicy audit logs | Attempts to reach kubelet/SSH |
| Unexpected processes in zone | Zone-level monitoring (Falco) | Exploit payload execution |
Falco integration
Falco with the Edera plugin provides kernel-level syscall monitoring inside zones. Deploy Falco on Edera nodes with the zone-aware plugin.
Key rules for AI agent monitoring:
- Alert on `/dev/xen/*` device opens (hypercall/grant probing)
- Alert on xenbus protocol activity (xenstore manipulation)
- Alert on `/dev/mem` reads beyond expected ranges (memory scanning)
- Alert on `mount` syscalls with `9p` filesystem type (additional 9p mounts)
- Alert on network connections to node IP on ports 10250, 22, 10249
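The first of these rules could be expressed roughly as follows; the rule name, output format, and tags are illustrative, and Falco’s stock `open_read`/`open_write` macros are assumed to be loaded:

```yaml
- rule: Xen Device Opened In Zone
  desc: A process opened a /dev/xen interface (possible hypercall or grant-table probing)
  condition: (open_read or open_write) and fd.name startswith /dev/xen/
  output: "Xen device opened in zone (proc=%proc.name file=%fd.name container=%container.name)"
  priority: WARNING
  tags: [edera, xen]
```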
Incident response
Zone compromise (suspected escape attempt)
- Do not delete the pod immediately. The zone’s state is forensic evidence. The 9p mounts directory on Dom0 (`/var/lib/edera/protect/state/<UUID>/mounts/`) contains any files the workload created on shared volumes.
- Capture zone xenstore state: `protect host idm-snoop` and a xenstore dump for the domain.
- Capture zone console logs: `protect zone logs <zone>` for kernel dmesg and workload output.
- Check Dom0 integrity: verify no unexpected files in `/var/lib/edera/protect/state/` outside the zone’s UUID directory.
- Rotate node: if escape is confirmed or suspected, drain and terminate the node. Do not reuse it.
Hypervisor vulnerability (Xen XSA)
- Assess zone exposure: check if the XSA affects PV guests (Edera’s mode). Many XSAs are HVM-only.
- Emergency patching path: Edera’s hypervisor is bundled with the host image. Patching requires a node image update and node replacement.
- Zone kernel as compensating control: if the XSA requires a specific guest kernel capability, updating the zone kernel annotation to a patched version can mitigate without node replacement.
Hardening checklist
Validate these for every Edera enterprise deployment.
Kubernetes admission policies
- hostPath volumes denied for all tenant namespaces
- hostNetwork denied (Edera also rejects)
- hostPID denied (Edera ignores)
- hostIPC denied (Edera ignores)
- Privileged pods denied by default (exception list maintained)
- All capabilities dropped by default
- `automountServiceAccountToken: false` on tenant default ServiceAccounts
Network policies
- Default deny egress in tenant namespaces
- Node port access (10250, 22, 10249, 6443) explicitly denied
- Cross-namespace traffic denied by default
- Egress to internet restricted to required destinations
Secrets management
- No long-lived credentials mounted as volumes
- IRSA/Pod Identity for cloud IAM
- External secrets operator for database/API credentials
- ServiceAccount tokens scoped and short-lived
Edera configuration
- Zone kernel set to `:latest` or an actively managed version
- Falco deployed on Edera nodes with zone-aware plugin
- Dom0 access restricted to infrastructure operators only
Monitoring
- Xenstore write alerts configured
- Zone crash/restart frequency monitored
- Network connection to node ports logged
- Falco rules for Xen device access deployed