Installing Edera with AWS EKS
This guide walks you through a fast setup of Edera on AWS EKS. For users who want to get up and running quickly, we provide a complete, tested Terraform example that creates everything from scratch.
🚀 Ready to deploy? Jump to our complete EKS example for a production-ready deployment.
Quick start with complete example
The fastest way to get started is with our complete, tested example:
git clone https://github.com/edera-dev/learn.git
cd learn/getting-started/eks-terraform
cp terraform.tfvars.example terraform.tfvars
# Edit terraform.tfvars with the Edera account ID
make deploy
make test

That’s it. Your EKS cluster with Edera protection will be ready in ~15 minutes.
Available make commands
The example includes a comprehensive set of make commands:
- make deploy - Deploy the complete EKS cluster with Edera
- make test - Test and verify the deployment with automated checks
- make verify - Quick verification of cluster status
- make clean - Remove only test resources (keeps cluster running)
- make delete - Destroy the entire infrastructure (with confirmation)
- make destroy - Alias for delete (same functionality)
Quick cleanup: When you’re done testing, simply run make delete to tear everything down safely.
Prerequisites
Before you begin, ensure you have:
- AWS CLI configured with appropriate permissions
- Terraform or OpenTofu (version 1.3 or later)
- kubectl for testing and verification
- Edera account access - Contact support@edera.dev to get the Edera AWS account ID
The example automatically detects and uses either Terraform or OpenTofu—just install your preferred tool.
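Under the hood, this kind of auto-detection is usually just a lookup for whichever binary is on the PATH. A minimal sketch of the general pattern in shell (the example's Makefile may implement it differently):

# Prefer OpenTofu if installed, otherwise fall back to Terraform (sketch only)
TF_BIN=$(command -v tofu || command -v terraform)
"$TF_BIN" version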
Edera AMIs are only available in us-west-2 and AWS GovCloud (US-West). Contact support@edera.dev to get access.
Configuration options
The complete example supports several configuration options in terraform.tfvars:
Required configuration
# The Edera AWS account ID (provided by Edera team)
edera_account_id = "123456789012"

Optional configuration
# Cluster settings
cluster_name = "my-edera-cluster"
cluster_version = "1.32"
region = "us-west-2" # or us-gov-west-1 for GovCloud
# Node group settings
instance_types = ["m5n.xlarge"]
desired_size = 2
min_size = 1
max_size = 3
# SSH access (disabled by default for security)
enable_ssh_access = true
ssh_key_name = "my-ec2-keypair"

SSH access to worker nodes
SSH access is disabled by default for security. To enable it:
Create or use an existing EC2 key pair:
aws ec2 create-key-pair --key-name edera-eks-key \
  --query 'KeyMaterial' --output text > edera-eks-key.pem
chmod 400 edera-eks-key.pem

Enable SSH in terraform.tfvars:
enable_ssh_access = true
ssh_key_name = "edera-eks-key"

Deploy/redeploy the cluster:
make deploy  # SSH access requires node group recreation

Connect via SSH:
# Get node IPs
kubectl get nodes -o wide

# SSH to a node
ssh -i edera-eks-key.pem ec2-user@<node-ip>
Manual configuration (alternative approach)
If you prefer to understand the components or integrate with existing infrastructure, the sections below explain the individual pieces. However, we strongly recommend starting with the complete example first.
Verifying AMI access
Once your AWS account has been granted access by the Edera team, you can verify available AMIs. The complete example includes automation for this, but if you want to check manually:
# Set your region (us-west-2 or us-gov-west-1)
export REGION=us-west-2
export EDERA_ACCOUNT_ID=your-edera-account-id
aws ec2 describe-images --owners $EDERA_ACCOUNT_ID \
--region $REGION \
--query 'reverse(sort_by(Images[*].[CreationDate, ImageId, Name, State], &[0]))' \
--output table

Edera AMI names follow the pattern: edera-protect-{version}-{os}-amazon-eks-node-{k8s version}-{build date}
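If you only need the newest AMI for a specific Kubernetes version, you can narrow the same query with a name filter. The name pattern and version below are illustrative; adjust them to match the naming scheme above:

# Newest Edera AMI for EKS 1.32 (illustrative name pattern)
aws ec2 describe-images --owners $EDERA_ACCOUNT_ID \
  --region $REGION \
  --filters "Name=name,Values=edera-protect-*-amazon-eks-node-1.32-*" \
  --query 'sort_by(Images, &CreationDate)[-1].[ImageId,Name]' \
  --output text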
Terraform configuration reference
For reference, here are the key components needed for EKS with Edera. The complete example includes all of this plus VPC, security groups, and proper configuration.
AMI Data Source:
data "aws_ami" "edera_protect" {
owners = [var.edera_account_id] # The Edera account ID
most_recent = true
filter {
name = "name"
values = ["edera-protect-v1.*-al2023-amazon-eks-node-${local.cluster_version}-*"]
}
}EKS Node Group Configuration:
eks_managed_node_groups = {
  edera_protect = {
    ami_id   = data.aws_ami.edera_protect.id
    ami_type = "AL2023_x86_64_STANDARD"

    # Critical: Label nodes for Edera RuntimeClass
    labels = {
      "runtime" = "edera" # Required for pod scheduling
    }

    enable_bootstrap_user_data = true
  }
}

🔗 See complete configuration: View the full Terraform files with VPC, networking, and all required components.
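For orientation, here is a rough sketch of how the data source and node group definition might plug into the terraform-aws-modules/eks module. The module version pin, VPC references, and variable names are placeholders; the complete example remains the authoritative configuration:

# Sketch only: assumes the terraform-aws-modules/eks module and an existing VPC module
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"                      # placeholder version pin

  cluster_name    = var.cluster_name
  cluster_version = local.cluster_version

  vpc_id     = module.vpc.vpc_id           # placeholder: your VPC
  subnet_ids = module.vpc.private_subnets  # placeholder: your subnets

  eks_managed_node_groups = {
    edera_protect = {
      ami_id                     = data.aws_ami.edera_protect.id
      ami_type                   = "AL2023_x86_64_STANDARD"
      instance_types             = var.instance_types
      enable_bootstrap_user_data = true

      labels = {
        "runtime" = "edera"
      }
    }
  }
}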
Testing and verification
The complete example includes comprehensive automated testing and verification:
make test # Full deployment test with automated verification
make verify # Quick status check of existing deployment

What make test does
The automated test performs these verification steps:
- Configures kubectl for the cluster
- Applies Edera RuntimeClass (Kubernetes 1.32 compatible)
- Verifies cluster status and node readiness
- Checks node labels for runtime=edera
- Validates RuntimeClass configuration
- Deploys test workload using Edera runtime
- Confirms pod scheduling and runtime assignment
- Displays comprehensive results including AMI verification
Expected output: ✅ Test pod running with Edera runtime protection
Manual testing steps
If you’re configuring manually, here are the key verification steps:
1. Configure kubectl:
aws eks --region $REGION update-kubeconfig --name edera-cluster

2. Apply RuntimeClass:
kubectl apply -f https://public.edera.dev/kubernetes/runtime-class.yaml
kubectl get runtimeclass edera

3. Verify node labels:
kubectl get nodes --show-labels | grep runtime=edera

4. Test with a pod:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: edera-test
spec:
  runtimeClassName: edera
  containers:
  - name: nginx
    image: nginx:1.25.3
EOF

🔗 Complete verification: The verification script in the examples repository performs comprehensive checks of your deployment.
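Once the pod is applied, a quick way to confirm it is running and was assigned the Edera runtime (the pod name matches the example above):

kubectl get pod edera-test -o wide
kubectl get pod edera-test -o jsonpath='{.spec.runtimeClassName}{"\n"}'  # should print: edera

For context, the RuntimeClass applied in step 2 generally has the following shape. The handler name and scheduling block shown here are assumptions for illustration; treat the manifest served from public.edera.dev as authoritative:

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: edera
handler: edera            # assumption: the handler registered by the Edera runtime on the node
scheduling:
  nodeSelector:
    runtime: edera        # matches the node label set in the node group configuration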
Troubleshooting
Pods stuck in Pending state:
- Check node labels: kubectl get nodes --show-labels | grep runtime=edera
- Verify RuntimeClass: kubectl get runtimeclass edera
- Check pod events: kubectl describe pod <pod-name>
AMI access issues:
- Verify the Edera account ID is correct in terraform.tfvars
- Ensure you’re in a supported region (us-west-2 or us-gov-west-1)
- Contact support@edera.dev for AMI access
Terraform/OpenTofu issues:
- The example auto-detects your preferred tool (no configuration needed)
- Check version: terraform --version or tofu --version (requires 1.3+)
- Run make plan to preview changes before deploying
SSH access not working:
- Ensure enable_ssh_access = true and ssh_key_name are set in terraform.tfvars
- SSH access requires node group recreation: make delete && make deploy
- Verify the EC2 key pair exists in the same region
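To confirm the key pair is actually present in the region you deployed to, a quick check (key name and region shown are the illustrative values used above):

aws ec2 describe-key-pairs --key-names edera-eks-key --region us-west-2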
Need to clean up?
- make clean - Removes test resources only (keeps cluster)
- make delete - Destroys entire infrastructure (with confirmation)
🔧 Complete troubleshooting guide: The EKS example README includes comprehensive troubleshooting steps, common solutions, and verification scripts.
Next steps
- Deploy your applications using runtimeClassName: edera in your pod specs (see the sketch after this list)
- Explore examples in the learn repository for other platforms
- Read the docs at docs.edera.dev for advanced configuration
- Get support at support@edera.dev - we like solving problems.
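As a starting point, a Deployment that runs its pods on the Edera runtime might look like this sketch (the name, labels, and image are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                    # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      runtimeClassName: edera     # schedules pods onto Edera-protected nodes
      containers:
      - name: web
        image: nginx:1.25.3       # illustrative image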