Securing Edera

6 min read · Advanced


This guide walks through the steps required to harden an Edera deployment for production. It covers Kubernetes admission policies, network policies, RBAC, secrets management, and Edera-specific configuration.

All manifests are available in the learn repository.

For design rationale behind these requirements, see the Security reference architecture.

Test on EKS

If you want to run the full end-to-end hardening test on a real cluster before applying to production, the learn repository includes a complete EKS setup with hardening wired in:

git clone https://github.com/edera-dev/learn.git

# Deploy EKS cluster with Edera nodes (~15 min)
cd learn/getting-started/eks-terraform
cp terraform.tfvars.example terraform.tfvars
# edit terraform.tfvars with your Edera account ID
make deploy
make test

# Apply and verify hardening policies
make harden

# When done
make harden-clean
make destroy

make harden installs Kyverno, applies the admission policies and network policy, and runs the verification tests against the cluster. See the EKS install guide for prerequisites and configuration options.

Prerequisites

  • Edera installed on your cluster nodes
  • kubectl access with cluster-admin permissions
  • Kyverno installed for admission policies (or OPA Gatekeeper, or PodSecurity admission)
  • Familiarity with Kubernetes NetworkPolicy resources

Step 1: Enforce pod security admission

Apply admission policies to deny configurations that weaken zone isolation. These must be in place before tenant workloads are deployed.

Using Kyverno

Clone the learn repository and apply the hardening policy:

git clone https://github.com/edera-dev/learn.git
cd learn/harden
make apply-policies

Or apply directly:

kubectl apply -f https://raw.githubusercontent.com/edera-dev/learn/main/harden/policies/kyverno-edera-hardening.yaml

The policy enforces these controls:

| Control | Setting |
| --- | --- |
| hostPath volumes | Deny |
| hostNetwork | Deny |
| hostPID | Deny |
| hostIPC | Deny |
| Privileged mode | Deny by default |
| Capabilities | Drop ALL, add only what workloads require |
| Volume types | Allow configMap, secret, emptyDir, projected, downwardAPI, PVC. Deny all host-local types. |
⚠️
hostPath is the critical rule: a hostPath volume gives a workload access to hypervisor binaries on the host. There is no safe way to use hostPath with untrusted workloads.
ℹ️
Mixed Edera/runc clusters: ClusterPolicy applies cluster-wide by default. If you run Edera as a node pool alongside standard runc nodes, scope the policy to Edera tenant namespaces using match.resources.namespaces in the Kyverno policy to avoid blocking workloads that legitimately need host access on non-Edera nodes.
ℹ️
Capabilities: “Drop ALL” is the right hardened default, but real workloads may depend on specific capabilities (for example, NET_BIND_SERVICE for services binding to ports below 1024). Audit your workloads before enforcing this rule cluster-wide – check current capability usage with kubectl exec <pod> -- capsh --print. Add back only the specific capabilities each workload requires.
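For orientation, the hostPath rule in that policy looks roughly like the following Kyverno ClusterPolicy. This is a sketch: the policy name and namespace selector are illustrative, and the published kyverno-edera-hardening.yaml in the learn repository is authoritative.

```yaml
# Illustrative sketch of the hostPath rule only; the published
# manifest in the learn repo is the authoritative policy.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: deny-hostpath
spec:
  validationFailureAction: Enforce
  rules:
    - name: no-hostpath-volumes
      match:
        any:
          - resources:
              kinds:
                - Pod
              # In mixed Edera/runc clusters, scope to tenant namespaces
              # (selector shown here is an example, not the shipped default):
              namespaces:
                - "tenant-*"
      validate:
        message: "hostPath volumes are not allowed on Edera nodes."
        pattern:
          spec:
            =(volumes):
              - X(hostPath): "null"
```

The `=(volumes)` / `X(hostPath)` anchors mean: if a volumes list is present, no entry may contain a hostPath field.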

Using Kubernetes PodSecurity admission

As an alternative to Kyverno, label tenant namespaces to enforce the built-in restricted profile:

kubectl label namespace <tenant-namespace> \
  pod-security.kubernetes.io/enforce=restricted \
  pod-security.kubernetes.io/enforce-version=latest

Verify enforcement

make verify

This runs dry-run tests to confirm hostPath and hostNetwork are denied.
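You can also spot-check enforcement manually by submitting a pod that should be rejected, for example with kubectl apply --dry-run=server. The pod below is a hypothetical probe, not part of the learn repository:

```yaml
# Hypothetical test pod: should be REJECTED by the hostPath rule.
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-probe
  namespace: <tenant-namespace>
spec:
  containers:
    - name: probe
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: host-root
          mountPath: /host
  volumes:
    - name: host-root
      hostPath:
        path: /
```

If the server-side dry run succeeds, the admission policy is not enforcing.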


Step 2: Apply network policies

Zones can reach other pods and node services by default. Apply a default-deny egress policy to each tenant namespace, then add explicit allow rules for required destinations only.

make apply-network-policy NAMESPACE=<tenant-namespace>

Or apply directly:

kubectl apply -f https://raw.githubusercontent.com/edera-dev/learn/main/harden/policies/network-policy-default-deny-egress.yaml
ℹ️
The policy allows DNS and intra-namespace pod communication. Add egress rules only for destinations your workloads actually need. The default-deny blocks node ports automatically – no additional deny rules are required.
ℹ️
Layer 7 policies: Some CNIs (Cilium, Istio) support network policies that filter at the application layer – by HTTP path, gRPC method, or DNS name. These provide more granular control than standard Kubernetes NetworkPolicy but are CNI-specific. See your CNI’s documentation if L7 filtering is a requirement.
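The shape of the default-deny policy is roughly as follows. This is a sketch assuming DNS on port 53 and intra-namespace allowance; the manifest in the learn repository is authoritative:

```yaml
# Sketch: default-deny egress that still permits DNS and
# same-namespace pod traffic. Names are illustrative.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: <tenant-namespace>
spec:
  podSelector: {}          # applies to every pod in the namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector: {}  # pods in the same namespace
    - to:
        - namespaceSelector: {}
      ports:               # DNS only
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```

Anything not matched by an egress rule, including node ports, is dropped.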

Ports that must not be reachable from zones:

| Port | Service | Risk |
| --- | --- | --- |
| 10250 | Kubelet API | Pod listing, exec, log access |
| 10255 | Kubelet read-only | Pod and node info |
| 22 | SSH | Direct host access |
| 10249 | kube-proxy metrics | Service topology disclosure |
| 6443 | kube-apiserver | Full cluster API access |

Step 3: Lock down RBAC

Disable default ServiceAccount token mounting

kubectl patch serviceaccount default \
  -n <tenant-namespace> \
  -p '{"automountServiceAccountToken": false}'

Create workload-specific ServiceAccounts

Do not reuse the default ServiceAccount. Create one per workload with the minimum required permissions:

kubectl create serviceaccount <workload-name> -n <tenant-namespace>
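Equivalently, the ServiceAccount and the pod that uses it can be declared together, with token automount disabled at the account level (a sketch; names are placeholders):

```yaml
# Workload-specific ServiceAccount with token automount disabled.
# Opt back in per-pod only where Kubernetes API access is required.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: <workload-name>
  namespace: <tenant-namespace>
automountServiceAccountToken: false
---
apiVersion: v1
kind: Pod
metadata:
  name: <workload-name>
  namespace: <tenant-namespace>
spec:
  serviceAccountName: <workload-name>
  containers:
    - name: app
      image: <image>
```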

Audit for privilege escalation paths

Check whether any ServiceAccount can create pods with elevated permissions:

kubectl auth can-i create pods \
  --as=system:serviceaccount:<namespace>:<serviceaccount>

Step 4: Harden secrets management

Audit mounted credentials

Find existing workloads with mounted secrets:

kubectl get pods -n <tenant-namespace> -o json | \
  jq '.items[].spec.volumes[] | select(.secret != null) | .secret.secretName'

Replace mounted credentials with runtime-fetched secrets:

| Credential type | Recommended approach |
| --- | --- |
| Cloud IAM (AWS) | IRSA – injected via OIDC, no credentials mounted |
| Cloud IAM (GCP) | Workload Identity |
| Database passwords | External Secrets Operator with short-lived tokens |
| API keys | HashiCorp Vault with AppRole auth and TTL-limited tokens |
| TLS certificates | cert-manager with short rotation periods |
| Encryption keys | Cloud KMS – never in-zone |
ℹ️
Secrets mounted into pods are stored in plaintext on the host at /var/lib/edera/protect/state/<UUID>/mounts/. Use runtime fetch patterns to limit exposure to the token’s TTL.
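As one example of the runtime-fetch pattern, an External Secrets Operator resource can sync a short-lived database credential from a backing store. The SecretStore name, remote key, and refresh interval below are assumptions for illustration:

```yaml
# Illustrative ExternalSecret; store name, remote key path, and
# refresh interval are example values, not Edera defaults.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
  namespace: <tenant-namespace>
spec:
  refreshInterval: 15m        # re-fetch well inside the token TTL
  secretStoreRef:
    name: vault-backend       # assumed SecretStore
    kind: SecretStore
  target:
    name: db-credentials      # Kubernetes Secret kept in sync
  data:
    - secretKey: password
      remoteRef:
        key: database/creds/app
```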

Scope ServiceAccount tokens

For workloads that need Kubernetes API access, use audience-scoped tokens:

kubectl create token <serviceaccount-name> \
  --audience=<your-service> \
  --duration=1h \
  -n <namespace>
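For long-running pods, the same audience-scoped, short-lived tokens can be delivered through a projected volume, which the kubelet rotates automatically before expiry (names are placeholders):

```yaml
# Pod with a projected, audience-scoped ServiceAccount token.
apiVersion: v1
kind: Pod
metadata:
  name: <workload-name>
  namespace: <namespace>
spec:
  serviceAccountName: <serviceaccount-name>
  containers:
    - name: app
      image: <image>
      volumeMounts:
        - name: api-token
          mountPath: /var/run/secrets/tokens
          readOnly: true
  volumes:
    - name: api-token
      projected:
        sources:
          - serviceAccountToken:
              path: token
              audience: <your-service>
              expirationSeconds: 3600   # kubelet rotates before expiry
```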

Step 5: Configure Edera-specific settings

Pin zone kernels

Set the zone kernel annotation on workload pods to track the latest patched kernel:

kubectl annotate pod <pod-name> dev.edera/kernel=ghcr.io/edera-dev/zone-kernel:latest

Or add it directly to the pod spec:

metadata:
  annotations:
    dev.edera/kernel: "ghcr.io/edera-dev/zone-kernel:latest"
ℹ️
ABI stability: Tracking :latest means the zone kernel may update when the pod restarts. This is usually desirable for security patching, but workloads with tight kernel ABI dependencies may be affected. If you need stability guarantees, pin to a specific version tag and update it deliberately as part of your node patching process.

AI agent workloads

Deploy the hardened AI agent example from the learn repository:

make apply-ai-agent

The example manifest configures:

  • automountServiceAccountToken: false – agent does not need Kubernetes API access
  • privileged: false and capabilities: drop: ALL – minimal in-zone attack surface
  • readOnlyRootFilesystem: true – prevents in-zone persistence
  • restartPolicy: Never – prevents automatic restart of a potentially compromised agent
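Those settings combine into a pod spec along these lines. This is a sketch with placeholder names; the example manifest in the learn repository is authoritative:

```yaml
# Sketch of the hardened AI agent pod settings listed above.
apiVersion: v1
kind: Pod
metadata:
  name: ai-agent
  namespace: <tenant-namespace>
spec:
  restartPolicy: Never                   # no automatic restart if compromised
  automountServiceAccountToken: false    # agent needs no Kubernetes API access
  containers:
    - name: agent
      image: <agent-image>
      securityContext:
        privileged: false
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true     # prevents in-zone persistence
        capabilities:
          drop: ["ALL"]                  # minimal in-zone attack surface
```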

Step 6: Deploy monitoring

Install Falco with the Edera plugin

Standard Falco cannot see inside zones because each zone has its own kernel. The Edera plugin bridges this.

See the Falco integration guide for setup instructions.

Key alerts to configure

| Signal | What it indicates |
| --- | --- |
| /dev/xen/* device opens | Hypercall or grant table probing |
| Xenbus protocol activity | Xenstore manipulation attempt |
| mount syscall with 9p type | Unexpected 9p filesystem mount |
| Connections to ports 10250, 22, 10249 | Attempt to reach kubelet or SSH |
| Zone crash or restart cycling | DoS or active exploit attempt |
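As a starting point, the first signal could be expressed as a Falco rule like the one below. This is a sketch using standard Falco syscall fields, not the Edera plugin's event schema; see the Falco integration guide for plugin-specific rules.

```yaml
# Illustrative Falco rule for /dev/xen device opens.
# Field names are standard Falco syscall fields, assumed here.
- rule: Xen device opened from zone workload
  desc: A process opened a /dev/xen device, suggesting hypercall or grant table probing
  condition: >
    evt.type in (open, openat) and fd.name startswith /dev/xen/
  output: >
    Xen device opened (proc=%proc.name file=%fd.name container=%container.name)
  priority: WARNING
```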

Step 7: Verify your configuration

Work through this checklist before moving tenant workloads to production.

Admission policies

  • hostPath volumes denied in all tenant namespaces
  • hostNetwork denied
  • hostPID denied
  • hostIPC denied
  • Privileged pods denied by default
  • ALL capabilities dropped by default
  • automountServiceAccountToken: false on tenant namespace default ServiceAccounts

Network policies

  • Default-deny egress applied to tenant namespaces
  • Node ports (10250, 10255, 22, 10249, 6443) not reachable from zones
  • Cross-namespace traffic denied by default
  • Internet egress restricted to required destinations only

Secrets management

  • No long-lived credentials mounted as volumes
  • Cloud IAM using IRSA or Workload Identity
  • Database and API credentials fetched at runtime
  • ServiceAccount tokens scoped and short-lived

Edera configuration

  • Zone kernels set to :latest or actively managed version
  • Falco deployed with Edera zone-aware plugin
  • Dom0 access restricted to infrastructure operators only

Monitoring

  • Falco rules deployed for Xen device access
  • Zone crash and restart frequency monitored
  • Network connections to node ports logged and alerted
  • Xenstore write pattern alerts configured
