Deploying Workloads on EKS Auto Mode: Managing Persistent Volumes and Configuring Ingress
This blog marks the third installment in our series on Amazon EKS Auto Mode, focusing on how to effectively deploy workloads on an EKS Auto Mode cluster while adhering to best practices. In our earlier blogs, we covered foundational topics to set the stage for this discussion. The first blog, Why Amazon EKS Auto Mode Outshines Traditional EKS: Simplifying Kubernetes Orchestration, explored the benefits, features, and setup of EKS Auto Mode. The second blog, Seamless Migration from Standard EKS to EKS Auto Mode: A Comprehensive Guide, detailed the process of transitioning from a standard EKS cluster to Auto Mode.
Building on this knowledge, this blog dives into the practical aspects of deploying workloads on an EKS Auto Mode cluster. We will provide an overview of managing persistent storage using Persistent Volumes (PVs) and Persistent Volume Claims (PVCs), as well as configuring ingress to handle external network traffic. Whether you’re a seasoned Kubernetes user or new to EKS Auto Mode, this guide will help you optimize your workload deployment strategy.
NOTE: The hands-on walkthroughs below use the sample manifests provided by AWS, so you can follow along with the official examples.
Deploy your workloads within EKS Auto Mode clusters
In this post we will cover four main topics:
- Deploy a sample inflate workload to an Amazon EKS Auto Mode cluster
- Deploy a sample load balancer workload to EKS Auto Mode
- Deploy a sample stateful workload to EKS Auto Mode
- Deploy and control workload placement on EKS Auto Mode nodes
Deploy a sample inflate workload to an Amazon EKS Auto Mode cluster
Step 1: Review existing compute resources (optional)
First, use kubectl to list the node pools on your cluster.
kubectl get nodepools
Sample Output:
general-purpose
In this tutorial, we will guide you through deploying a workload configured to use the general-purpose node pool. This node pool is built into EKS Auto Mode and comes with sensible defaults for typical workloads like microservices and web applications. However, you also have the option to create a custom node pool.
Second, use kubectl to list the nodes connected to your cluster.
kubectl get nodes
If you just created an EKS Auto Mode cluster, you will have no nodes.
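On a brand-new cluster, the command typically reports that no node objects exist yet.
Sample Output:
No resources found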
In this tutorial, you will deploy a sample workload. If there are no available nodes, or if the workload cannot be accommodated on the existing nodes, EKS Auto Mode will automatically provision a new node.
Step 2: Deploy a sample application to the cluster
Review the following Kubernetes Deployment and save it as inflate.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inflate
spec:
  replicas: 1
  selector:
    matchLabels:
      app: inflate
  template:
    metadata:
      labels:
        app: inflate
    spec:
      terminationGracePeriodSeconds: 0
      nodeSelector:
        eks.amazonaws.com/compute-type: auto
      securityContext:
        runAsUser: 1000
        runAsGroup: 3000
        fsGroup: 2000
      containers:
        - name: inflate
          image: public.ecr.aws/eks-distro/kubernetes/pause:3.7
          resources:
            requests:
              cpu: 1
          securityContext:
            allowPrivilegeEscalation: false

Apply the Deployment:
kubectl apply -f inflate.yaml
Step 3: Watch Kubernetes Events
Use the following command to watch Kubernetes events, including the creation of a new node. Press ctrl+c to stop watching events.
kubectl get events -w --sort-by '.lastTimestamp'
Use kubectl to list the nodes connected to your cluster again. Note the newly created node.
kubectl get nodes
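To confirm that the new node is managed by EKS Auto Mode, you can list nodes along with the compute-type label (the -L flag adds the label value as an extra output column):

kubectl get nodes -L eks.amazonaws.com/compute-type

Auto Mode nodes show auto in the COMPUTE-TYPE column.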
Step 4: View nodes and instances in the AWS console
IMP: To visualize resources on the Kubernetes Dashboard, you need to create an access entry and grant the necessary permissions to the user or role through which you are logged in.
The screenshots below walk through setting up an access entry.
Once the access entry is in place, you can view all of your cluster resources and compute in the Kubernetes dashboard.
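If you prefer the AWS CLI to the console, the equivalent access entry can be created with commands along these lines (a minimal sketch; my-cluster, the account ID, and the role name are placeholders you must replace with your own values):

# Create an access entry for the IAM principal you log in with (placeholder ARN)
aws eks create-access-entry \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::111122223333:role/my-console-role

# Grant that principal cluster-wide admin permissions
aws eks associate-access-policy \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::111122223333:role/my-console-role \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
  --access-scope type=cluster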
Deploy a sample load balancer workload to EKS Auto Mode
In this tutorial, we will demonstrate how easily you can set up an ingress without installing the AWS Load Balancer Controller, simply by creating an IngressClass and an Ingress resource.
Step 1: Create the Namespace
First, create a dedicated namespace for the 2048 game application.
Create a file named 01-namespace.yaml:
apiVersion: v1
kind: Namespace
metadata:
  name: game-2048
Apply the namespace configuration:
kubectl apply -f 01-namespace.yaml
Step 2: Deploy the Application
The application runs multiple replicas of the 2048 game container.
Create a file named 02-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: game-2048
  name: deployment-2048
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: app-2048
  replicas: 5
  template:
    metadata:
      labels:
        app.kubernetes.io/name: app-2048
    spec:
      containers:
        - image: public.ecr.aws/l6m2t8p7/docker-2048:latest
          imagePullPolicy: Always
          name: app-2048
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: "0.5"
Key components:
Deploys 5 replicas of the application
Uses a public ECR image
Requests 0.5 CPU cores per pod
Exposes port 80 for HTTP traffic
Apply the deployment:
kubectl apply -f 02-deployment.yaml
EKS Auto Mode provisions nodes through the general-purpose node pool to accommodate the new replicas; you can watch the scale-out as shown below.
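To follow the node provisioning in real time, watch the node list (press ctrl+c to stop):

kubectl get nodes -w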
Step 3: Create the Service
The service exposes the deployment to the cluster network.
Create a file named 03-service.yaml:
apiVersion: v1
kind: Service
metadata:
  namespace: game-2048
  name: service-2048
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: NodePort
  selector:
    app.kubernetes.io/name: app-2048
Key components:
Creates a NodePort service
Maps port 80 to the container’s port 80
Uses label selector to find pods
Apply the service:
kubectl apply -f 03-service.yaml
Step 4: Configure Load Balancing
You will set up an ingress to expose the application to the internet.
First, create the IngressClass.
Create a file named 04-ingressclass.yaml:
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: alb
  labels:
    app.kubernetes.io/name: LoadBalancerController
spec:
  controller: eks.amazonaws.com/alb

Note that IngressClass is a cluster-scoped resource, so it does not take a namespace.
Then create the Ingress resource. Create a file named 05-ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: game-2048
  name: ingress-2048
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-2048
                port:
                  number: 80
Key components:
Creates an internet-facing ALB
Uses IP target type for direct pod routing
Routes all traffic (/) to the game service
Apply the ingress configurations:
kubectl apply -f 04-ingressclass.yaml
kubectl apply -f 05-ingress.yaml
Step 5: Verify the Deployment
Check that all pods are running:
kubectl get pods -n game-2048
Verify the service is created:
kubectl get svc -n game-2048
Get the ALB endpoint:
kubectl get ingress -n game-2048
The ADDRESS field in the ingress output will show your ALB endpoint. Wait 2-3 minutes for the ALB to provision and register all targets.
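To capture the endpoint in a shell variable for the next step, you can extract the hostname from the standard ingress status field (a small convenience sketch):

ALB_URL=$(kubectl get ingress ingress-2048 -n game-2048 \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo "http://${ALB_URL}"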
Step 6: Access the Game
Open your web browser and browse to the ALB endpoint URL from the earlier step. You should see the 2048 game interface.
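You can also verify from the terminal that the ALB is serving the application (this assumes the ALB_URL variable captured in the previous step):

curl -I "http://${ALB_URL}"   # expect an HTTP 200 response once targets are healthy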
Deploy a sample stateful workload to EKS Auto Mode
As with the AWS Load Balancer Controller, there is no need to install the EBS CSI driver; block storage support is built into EKS Auto Mode. All you need to do is create a StorageClass and provision your PVC.
Step 1: Create the storage class
The StorageClass defines how EKS Auto Mode will provision EBS volumes.
EKS Auto Mode does not create a StorageClass for you. You must create a StorageClass referencing ebs.csi.eks.amazonaws.com to use the storage capability of EKS Auto Mode.
Create a file named storage-class.yaml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: auto-ebs-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.eks.amazonaws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp3
  encrypted: "true"
Apply the StorageClass:
kubectl apply -f storage-class.yaml
Key components:
provisioner: ebs.csi.eks.amazonaws.com – Uses EKS Auto Mode
volumeBindingMode: WaitForFirstConsumer – Delays volume creation until a pod needs it
type: gp3 – Specifies the EBS volume type
encrypted: "true" – EBS will use the default aws/ebs key to encrypt volumes created with this class. This is optional, but recommended.
storageclass.kubernetes.io/is-default-class: "true" – Kubernetes will use this storage class by default, unless you specify a different storage class on a persistent volume claim.
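You can confirm the StorageClass exists and is marked as the default:

kubectl get storageclass auto-ebs-sc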
Step 2: Create the persistent volume claim
The PVC requests storage from the StorageClass.
Create a file named pvc.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: auto-ebs-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: auto-ebs-sc
  resources:
    requests:
      storage: 8Gi
Apply the PVC:
kubectl apply -f pvc.yaml
Key components:
accessModes: ReadWriteOnce – Volume can be mounted by one node at a time
storage: 8Gi – Requests an 8 GiB volume
storageClassName: auto-ebs-sc – References the StorageClass we created
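Because the StorageClass uses WaitForFirstConsumer, the claim will remain in the Pending state until a pod that uses it is scheduled; this is expected, not an error. You can check its status with:

kubectl get pvc auto-ebs-claim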
Step 3: Deploy the Application
The Deployment runs a container that writes timestamps to the persistent volume.
Create a file named deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inflate-stateful
spec:
  replicas: 1
  selector:
    matchLabels:
      app: inflate-stateful
  template:
    metadata:
      labels:
        app: inflate-stateful
    spec:
      terminationGracePeriodSeconds: 0
      nodeSelector:
        eks.amazonaws.com/compute-type: auto
      containers:
        - name: bash
          image: public.ecr.aws/docker/library/bash:4.4
          command: ["/usr/local/bin/bash"]
          args:
            - -c
            - while true; do echo $(date -u) >> /data/out.txt; sleep 60; done
          resources:
            requests:
              cpu: "1"
          volumeMounts:
            - name: persistent-storage
              mountPath: /data
      volumes:
        - name: persistent-storage
          persistentVolumeClaim:
            claimName: auto-ebs-claim
Apply the Deployment:
kubectl apply -f deployment.yaml
Key components:
Simple bash container that writes timestamps to a file
Mounts the PVC at /data
Requests 1 CPU core
Uses a node selector to target EKS Auto Mode nodes
Step 4: Verify the Setup
Check that the pod is running:
kubectl get pods -l app=inflate-stateful
Verify the PVC is bound:
kubectl get pvc auto-ebs-claim
Check the EBS volume:
# Get the PV name
PV_NAME=$(kubectl get pvc auto-ebs-claim -o jsonpath='{.spec.volumeName}')

# Describe the EBS volume
aws ec2 describe-volumes \
  --filters Name=tag:CSIVolumeName,Values=${PV_NAME}
Verify data is being written:
kubectl exec "$(kubectl get pods -l app=inflate-stateful \
  -o=jsonpath='{.items[0].metadata.name}')" -- \
  cat /data/out.txt
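When you are finished, you can clean up the resources created in this tutorial (assuming the file names used above). Deleting the PVC also removes the dynamically provisioned EBS volume, since dynamically provisioned volumes default to the Delete reclaim policy:

kubectl delete -f deployment.yaml
kubectl delete -f pvc.yaml
kubectl delete -f storage-class.yaml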
Control if a workload is deployed on EKS Auto Mode nodes
When running workloads in an EKS cluster with EKS Auto Mode, you might need to control whether specific workloads run on EKS Auto Mode nodes or other compute types. This topic describes how to use node selectors and affinity rules to ensure your workloads are scheduled on the intended compute infrastructure.
The examples in this topic demonstrate how to use the eks.amazonaws.com/compute-type label to either require or prevent workload deployment on EKS Auto Mode nodes. This is particularly useful in mixed-mode clusters where you’re running both EKS Auto Mode and other compute types, such as self-managed Karpenter provisioners or EKS Managed Node Groups.
EKS Auto Mode nodes carry the label eks.amazonaws.com/compute-type with the value auto. You can use this label to control whether a workload is deployed to nodes managed by EKS Auto Mode.
Require that a workload is deployed to EKS Auto Mode nodes
This nodeSelector value is not required for EKS Auto Mode. It is only relevant if you run a cluster in mixed mode, that is, with node types not managed by EKS Auto Mode. For example, you may have static compute capacity deployed to your cluster with EKS Managed Node Groups, and dynamic compute capacity managed by EKS Auto Mode.
You can add this nodeSelector to Deployments or other workloads to require Kubernetes to schedule them onto EKS Auto Mode nodes.
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      nodeSelector:
        eks.amazonaws.com/compute-type: auto
You can also add nodeAffinity to Deployments or other workloads to ensure they are deployed onto EKS Auto Mode nodes.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: eks.amazonaws.com/compute-type
              operator: In
              values:
                - auto
IMP: If you want, you can create your own custom node pool and assign it a label, as shown below.
Example:
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: prod-apps
spec:
  template:
    metadata:
      labels:
        role: prod-apps # Node label used to target this pool when scheduling workloads
    spec:
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
        - key: kubernetes.io/os
          operator: In
          values: ["linux"]
        - key: karpenter.k8s.aws/instance-hypervisor
          operator: In
          values: ["nitro"]
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand"]
        - key: karpenter.k8s.aws/instance-category
          operator: In
          values: ["c", "m", "r"]
        - key: karpenter.k8s.aws/instance-size
          operator: In
          values: ["large", "xlarge", "2xlarge"]
        - key: topology.kubernetes.io/zone
          operator: In
          values:
            - ap-south-1a
            - ap-south-1b
            - ap-south-1c
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
      expireAfter: 720h # 30 * 24h = 720h
  limits:
    cpu: 50
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
    consolidateAfter: 1m
As you can see, the label key here is role and the value is prod-apps. You can target this node pool with the following node affinity:
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: role
              operator: In
              values:
                - prod-apps
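For context, here is a minimal sketch of where this affinity block sits in a full manifest; the Deployment name and image are hypothetical placeholders, and the affinity key must appear under spec.template.spec:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-prod-app   # hypothetical name for illustration
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-prod-app
  template:
    metadata:
      labels:
        app: sample-prod-app
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: role
                    operator: In
                    values:
                      - prod-apps
      containers:
        - name: app
          image: public.ecr.aws/docker/library/bash:4.4   # placeholder image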
Require that a workload is not deployed to EKS Auto Mode nodes
You can add this nodeAffinity to Deployments or other workloads to prevent Kubernetes from scheduling them onto EKS Auto Mode nodes.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: eks.amazonaws.com/compute-type
              operator: NotIn
              values:
                - auto
All lab code is shared in this GitHub repository: https://github.com/sart-glitch/eks-automode-labs.git
Conclusion:
In this blog, we demonstrated several key concepts for working with EKS Auto Mode clusters. We covered how to deploy a sample inflate workload that triggers automatic node provisioning, how to provision ingress without installing the AWS Load Balancer Controller, and how to enable automatic PVC provisioning without installing the EBS CSI driver. Additionally, we explored how to schedule workloads on Auto Mode nodes provisioned by a node pool, as well as how to prevent workloads from being scheduled on specific nodes. These capabilities simplify the management of workloads and resources in EKS, making it easier to deploy and scale applications seamlessly.
Additional things to know:
Pricing considerations for EKS Auto Mode: https://aws.amazon.com/eks/pricing/
Setting up an EKS Auto Mode cluster using the AWS CLI, eksctl, or the console.
Migrating a standard EKS cluster to an Auto Mode cluster.
References
https://docs.aws.amazon.com/eks/latest/userguide/automode.html