DEVOPS & CLOUD

Set up your first EKS Cluster on AWS with Verizon 5G Edge in five minutes.

About this post

In this tutorial, you will deploy a Kubernetes cluster on Amazon® Elastic Kubernetes Service (EKS) in an AWS® Wavelength Zone using Boto3 and Verizon 5G Edge.

The EKS cluster will consist of the control plane in-region and a self-managed node group in the NYC Wavelength Zone. Note that you can change the Wavelength Zone (us-east-1-wl1-nyc-wlz-1) to any of the following markets:

  • Boston (us-east-1-wl1-bos-wlz-1)
  • San Francisco Bay Area (us-west-2-wl1-sfo-wlz-1)
  • Atlanta (us-east-1-wl1-atl-wlz-1)
  • Washington, DC (us-east-1-wl1-was-wlz-1)
  • New York City (us-east-1-wl1-nyc-wlz-1)
  • Miami (us-east-1-wl1-mia-wlz-1)
  • Dallas (us-east-1-wl1-dfw-wlz-1)
  • Denver (us-west-2-wl1-den-wlz-1)
  • Las Vegas (us-west-2-wl1-las-wlz-1)
  • Seattle (us-west-2-wl1-sea-wlz-1)

Getting started: Setup requirements for AWS Wavelength

To get started, make sure you have provisioned AWS credentials for the lab, either through the AWS CLI or through Boto3. To learn more about provisioning credentials, check out the Boto3 Quickstart guide.

Set up the Boto3 client.

First, import Boto3 and configure Boto3 clients for EC2, IAM, EKS, Auto Scaling and CloudFormation with appropriate credentials, setting us-east-1 as the default region.

Create the VPC and parent region subnets.

To set up the logically isolated virtual network, create a VPC with a 10.0.0.0/16 CIDR range and call it wl-vpc. Next, create both a subnet (10.0.0.0/24) for the EKS control plane and a public route through an internet gateway attached to the VPC. Lastly, create an additional control-plane subnet (10.0.2.0/24) in a second Availability Zone, since EKS requires control-plane subnets in at least two Availability Zones for high availability.

Extend the VPC to a carrier subnet.

After launching the subnets in the parent region, extend the VPC to include the carrier subnet in the NYC Wavelength Zone with a 10.0.1.0/24 CIDR range and call it wl-test-subnet. Be sure to also create a public route, this time through a carrier gateway, and attach it to the VPC.
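A sketch of extending the VPC into the Wavelength Zone; the function shape is an assumption, while the CIDR, subnet name, and zone come from the tutorial:

```python
def extend_to_carrier_subnet(ec2, vpc_id):
    """Add the Wavelength Zone subnet and a default route through a carrier gateway."""
    subnet_id = ec2.create_subnet(
        VpcId=vpc_id, CidrBlock="10.0.1.0/24",
        AvailabilityZone="us-east-1-wl1-nyc-wlz-1")["Subnet"]["SubnetId"]
    ec2.create_tags(Resources=[subnet_id],
                    Tags=[{"Key": "Name", "Value": "wl-test-subnet"}])

    # The carrier gateway is the Wavelength analogue of an internet gateway.
    cagw_id = ec2.create_carrier_gateway(VpcId=vpc_id)["CarrierGateway"]["CarrierGatewayId"]
    rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
    ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0",
                     CarrierGatewayId=cagw_id)
    ec2.associate_route_table(RouteTableId=rt_id, SubnetId=subnet_id)
    return subnet_id, rt_id
```

The returned route table ID is the carrier subnet's route table, which the next step adds endpoints to.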

Configure networking to connect carrier subnet resources to the control plane.

Just before creating the EKS cluster, add a few additional routes to the carrier subnet route table, including the following:

  • S3 gateway endpoint. Provides access to the AWS-managed CloudFormation template for the self-managed nodes
  • EC2 interface endpoint. Connects worker nodes to the control plane
  • ECR (dkr) and ECR (api) interface endpoints. Provide your cluster access to AWS's managed container registry
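The endpoints above can be created as sketched below; the security group and subnet placement for the interface endpoints are assumptions you should adapt to your environment:

```python
def create_vpc_endpoints(ec2, vpc_id, carrier_rt_id, subnet_ids, sg_id):
    """Create the S3 gateway endpoint and the EC2/ECR interface endpoints."""
    region = ec2.meta.region_name

    # Gateway endpoints attach to route tables -- here, the carrier subnet's.
    ec2.create_vpc_endpoint(VpcEndpointType="Gateway", VpcId=vpc_id,
                            ServiceName=f"com.amazonaws.{region}.s3",
                            RouteTableIds=[carrier_rt_id])

    # Interface endpoints are ENIs placed in subnets with a security group.
    for svc in ("ec2", "ecr.api", "ecr.dkr"):
        ec2.create_vpc_endpoint(VpcEndpointType="Interface", VpcId=vpc_id,
                                ServiceName=f"com.amazonaws.{region}.{svc}",
                                SubnetIds=subnet_ids,
                                SecurityGroupIds=[sg_id],
                                PrivateDnsEnabled=True)
```

`PrivateDnsEnabled` lets the nodes resolve the default service hostnames to the endpoint ENIs without any configuration change.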

Additionally, you will need to create an IAM role called wavelength-eks-role, consisting of the AmazonEKSClusterPolicy policy that allows EKS clusters to make calls to other AWS services on your behalf.
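The role creation can be sketched as follows; the trust policy is the standard one allowing the EKS service to assume the role:

```python
import json

# Trust policy letting the EKS service assume the role on your behalf.
EKS_TRUST_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "eks.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

def create_eks_cluster_role(iam, role_name="wavelength-eks-role"):
    """Create the cluster role and attach AmazonEKSClusterPolicy."""
    role = iam.create_role(
        RoleName=role_name,
        AssumeRolePolicyDocument=json.dumps(EKS_TRUST_POLICY))
    iam.attach_role_policy(
        RoleName=role_name,
        PolicyArn="arn:aws:iam::aws:policy/AmazonEKSClusterPolicy")
    return role["Role"]["Arn"]
```

The returned role ARN is what you pass to the EKS create-cluster call in the next step.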

Create the EKS cluster and self-managed worker nodes.

Using the IAM role you just created, launch the EKS cluster eks_wavelength using the two subnets in the parent region.
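A minimal sketch of the cluster launch, assuming the cluster name from the tutorial and the role ARN and subnet IDs from the previous steps:

```python
def create_cluster(eks, role_arn, subnet_ids):
    """Launch the EKS control plane in the two parent-region subnets."""
    return eks.create_cluster(
        name="eks_wavelength",
        roleArn=role_arn,
        resourcesVpcConfig={"subnetIds": subnet_ids})
```

Note that `create_cluster` returns as soon as provisioning begins; the cluster is not usable until its status becomes ACTIVE.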

After launching the EKS cluster, it typically takes 10 to 15 minutes for the cluster to complete the provisioning process.

Since you cannot create the self-managed worker nodes without the cluster's API server endpoint, and the create-cluster call does not block until provisioning finishes, poll the cluster status on an interval until the cluster metadata becomes available.
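One way to sketch that polling loop; the delay value is an arbitrary choice:

```python
import time

def wait_for_cluster(eks, name="eks_wavelength", delay=30):
    """Poll until the cluster is ACTIVE, then return its endpoint and CA data."""
    while True:
        cluster = eks.describe_cluster(name=name)["cluster"]
        if cluster["status"] == "ACTIVE":
            return cluster["endpoint"], cluster["certificateAuthority"]["data"]
        time.sleep(delay)  # provisioning typically takes 10 to 15 minutes
```

Boto3 also ships a `cluster_active` waiter (`eks.get_waiter("cluster_active")`) that encapsulates the same loop if you prefer not to hand-roll it.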

Once the cluster's API server endpoint and Certificate Authority (CA) data can be retrieved without error, you can proceed to launch the node group. AWS publishes a handy CloudFormation template in S3 that you can use.
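A sketch of launching the node group from that template. The template URL and parameter names below follow AWS's published amazon-eks-nodegroup.yaml sample and may change over time, and the stack name, node group name, and instance type are illustrative assumptions; check the EKS documentation for the current version:

```python
def launch_node_group(cloudformation, cluster_name, cluster_sg_id,
                      carrier_subnet_id, vpc_id, key_name):
    """Create the self-managed node group from AWS's sample CloudFormation template."""
    template_url = ("https://s3.us-west-2.amazonaws.com/amazon-eks/"
                    "cloudformation/2020-10-29/amazon-eks-nodegroup.yaml")
    params = {
        "ClusterName": cluster_name,
        "ClusterControlPlaneSecurityGroup": cluster_sg_id,
        "NodeGroupName": "wl-nodegroup",
        "NodeInstanceType": "t3.medium",  # Wavelength supports a limited set of types
        "KeyName": key_name,
        "VpcId": vpc_id,
        "Subnets": carrier_subnet_id,     # place the nodes in the carrier subnet
    }
    return cloudformation.create_stack(
        StackName="eks-wavelength-nodes",
        TemplateURL=template_url,
        Parameters=[{"ParameterKey": k, "ParameterValue": v}
                    for k, v in params.items()],
        Capabilities=["CAPABILITY_IAM"])
```

`CAPABILITY_IAM` is required because the template creates the node instance role.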

Final configuration and permissioning

Next up is configuring the node’s security group and attaching a carrier IP to the node for inbound/outbound access through the carrier network. To do so, however, you need the CloudFormation stack to have completed provisioning its resources.

That means you need to poll the StackStatus attribute of the CloudFormation stack until it reaches CREATE_COMPLETE before proceeding. From there, you will need three outputs from the stack itself:

  • NodeInstanceRole. Permits EKS nodes to make API calls to the control plane
  • NodeSecurityGroup. Authorizes ingress rule for SSH
  • NodeAutoScalingGroup. Identifies the node whose elastic network interface (ENI) receives the carrier IP
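The stack polling and output extraction can be sketched as follows; the delay value and error handling are assumptions:

```python
import time

def wait_for_stack_outputs(cloudformation, stack_name, delay=15):
    """Poll StackStatus until CREATE_COMPLETE, then return the outputs as a dict."""
    while True:
        stack = cloudformation.describe_stacks(StackName=stack_name)["Stacks"][0]
        status = stack["StackStatus"]
        if status == "CREATE_COMPLETE":
            return {o["OutputKey"]: o["OutputValue"] for o in stack["Outputs"]}
        if status.endswith("FAILED") or status.startswith("ROLLBACK"):
            raise RuntimeError(f"Stack {stack_name} failed: {status}")
        time.sleep(delay)
```

The returned dict then yields `outputs["NodeInstanceRole"]`, `outputs["NodeSecurityGroup"]`, and `outputs["NodeAutoScalingGroup"]` for the remaining steps.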

After retrieving the outputs, you will need to authorize SSH to the node group and allocate/attach a carrier IP address to the node.
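A sketch of both actions, assuming a single node in the Auto Scaling group; the open 0.0.0.0/0 SSH rule is for the lab only and should be narrowed in practice:

```python
def open_ssh_and_attach_carrier_ip(ec2, autoscaling, node_sg_id, asg_name):
    """Authorize SSH on the node security group and attach a carrier IP to the node."""
    ec2.authorize_security_group_ingress(
        GroupId=node_sg_id,
        IpPermissions=[{"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
                        "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}])

    # Locate the node's network interface via the Auto Scaling group.
    instance_id = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=[asg_name]
    )["AutoScalingGroups"][0]["Instances"][0]["InstanceId"]
    eni_id = ec2.describe_instances(InstanceIds=[instance_id])["Reservations"][0] \
        ["Instances"][0]["NetworkInterfaces"][0]["NetworkInterfaceId"]

    # Carrier IPs are allocated against the Wavelength Zone's network border group.
    alloc = ec2.allocate_address(
        Domain="vpc", NetworkBorderGroup="us-east-1-wl1-nyc-wlz-1")
    ec2.associate_address(AllocationId=alloc["AllocationId"],
                          NetworkInterfaceId=eni_id)
```

The `NetworkBorderGroup` argument is what distinguishes a carrier IP allocation from a regular Elastic IP.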

Lastly, you will need to create and apply the aws-auth ConfigMap, which allows the worker nodes to register with the cluster. At this point, you can run the kubectl commands manually from the command line.
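One way to render and apply that ConfigMap from Python; the manifest follows the standard aws-auth shape for self-managed nodes, and shelling out to kubectl assumes your kubeconfig already points at the cluster:

```python
import subprocess
import textwrap

def render_aws_auth(node_role_arn):
    """Render the aws-auth ConfigMap that lets nodes join the cluster."""
    return textwrap.dedent(f"""\
        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: aws-auth
          namespace: kube-system
        data:
          mapRoles: |
            - rolearn: {node_role_arn}
              username: system:node:{{{{EC2PrivateDNSName}}}}
              groups:
                - system:bootstrappers
                - system:nodes
        """)

def apply_aws_auth(node_role_arn):
    """Pipe the rendered manifest into kubectl apply."""
    subprocess.run(["kubectl", "apply", "-f", "-"],
                   input=render_aws_auth(node_role_arn), text=True, check=True)
```

Pass in the NodeInstanceRole ARN retrieved from the CloudFormation stack outputs.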

Your EKS cluster setup on 5G Edge is done!

Congratulations! You have now configured your EKS cluster! At this point, running Pods, ReplicaSets and Deployments is up to you.

To check the status of your cluster and get the registered nodes to your cluster, use the command below:

kubectl get nodes

For more information about creating your first service and deployment, visit our 5G Edge developer resources.

Powering the next generation of immersive applications at the network edge.