DevOps & Cloud

Deploying our first 5G application at the edge

Verizon 5G Edge Blog
8 min read · Apr 12, 2021

Mark Persiko, Sr. DevOps Engineer at Verizon Location Technology

Bryan Kobler, Sr. DevOps Engineer at Verizon Location Technology

Summary

Verizon Location Technology deployed Kubernetes and a location-based application to Verizon 5G Edge with AWS® Wavelength. The application powerfully illustrates the difference 5G Edge makes in application latency and location-based service interaction.

Key takeaways:

  • 5G Edge works with familiar Amazon Web Services (AWS) resources and services
  • Kubernetes supports declarative application deployment to 5G Edge

  • You can use 5G Edge for application functions that require low latency and local compute offload from application clients

Introduction

The Verizon Location Technology team recently extended its infrastructure and application deployment capabilities from computing in a regional public cloud to a mobile edge computing (MEC) platform — 5G Edge with AWS Wavelength. In partnership with AWS, we deployed our first 5G Edge application: a reverse geocoder that highlights latency savings in MEC coverage areas with geo-optimized data.

A reverse geocoder is a service that translates latitude/longitude coordinates (lat-longs) into a physical street address. This service is critical for businesses that rely on optimal routing, like food delivery and package shipment, and for those that need location awareness, such as vehicle fleets that must report their location for telemetry purposes.
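At its core, a reverse geocoder can be reduced to a nearest-neighbor lookup over a table of known coordinates. The sketch below is purely illustrative (it is not our production service, and the addresses are made up); it uses the haversine formula for great-circle distance between two lat-long pairs:

```python
import math

# Hypothetical sample data: lat-long pairs mapped to street addresses.
ADDRESSES = {
    (42.3601, -71.0589): "100 Main St, Boston, MA",
    (40.7128, -74.0060): "200 Broadway, New York, NY",
    (41.8781, -87.6298): "300 State St, Chicago, IL",
}

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def reverse_geocode(lat, lon):
    """Return the address of the nearest known coordinate."""
    nearest = min(ADDRESSES, key=lambda c: haversine_km(lat, lon, c[0], c[1]))
    return ADDRESSES[nearest]
```

A real service would query a spatial index (e.g., an R-tree) over millions of address points rather than scanning a dictionary, but the lookup contract is the same: coordinates in, street address out.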

Our goal was to reduce service latency for reverse-geocode queries in key locations, leveraging Verizon’s MEC capabilities to deploy our service as close to the network as possible. We engineered both the service and data to utilize locally relevant stores for queries emanating from within the MEC coverage area of each location.

These two screenshots illustrate the latency reduction provided by deploying an application to 5G Edge:

[Screenshot] Clicking outside the MEC coverage area returns data from the far cloud.
[Screenshot] Clicking inside the MEC coverage area returns locally relevant data from the MEC node.

5G Edge demystified

Verizon’s 5G Edge with AWS Wavelength service allows customers to experience the benefits and utilize the capabilities of the Verizon 5G network and MEC. In partnership with AWS, 5G Edge enables applications and services to run closer to where mobile device users are located. This lowers service latency and can offload compute-intensive operations from mobile devices. And AWS makes it easy to extend existing infrastructure from regional cloud locations to Wavelength Zones, which reside at the edge of Verizon’s wireless network.

There are some new considerations when deploying applications and services to an MEC environment, such as:

  • A Wavelength Zone is a single-fault domain with a Parent Region.
    You can use the Parent Region as fallback for your 5G Edge app
  • AWS Wavelength supports a limited number of EC2 instance types
  • To expose 5G Edge applications to the carrier network, additional resources are needed. Fortunately, AWS Virtual Private Cloud (VPC) constructs make this easy to do — and do securely. Security groups, network access control lists (NACLs), and route tables work the same way in both Wavelength Zones and Regions

Our 5G Edge deployment journey

Using data relevant to a network-planning scenario, we deployed our first 5G Edge service: a simple proxy that responded “server ready” when its URL was visited in a browser.

Production-ready applications would require more consideration and planning: How do we make the best use of limited resources such as EC2 instances, carrier-assigned IP addresses, and storage? And how do we secure parts of the application against access by nefarious agents?

We chose to expose only the proxy to the carrier network and deploy our application to a separate, private subnet in the edge environment. That settled the carrier side: this way, we would consume just one carrier-assigned IP address. A private subnet without access from the carrier would also separate the layers of the application: access, compute, and storage.

The application deployment would require egress access to a Docker repository. Not having a carrier IP meant introducing network address translation (NAT) into the edge environment. We relied on the parent AWS region for NAT access, which enabled software deployment — not generally a latency-sensitive activity in itself.

This application was lightweight enough to live completely in the MEC node. It does not represent our comprehensive geocoding solution; that would comprise both MEC and “far cloud” based components. Just enough was deployed to supply a) a locally relevant coverage area for resolving lat-longs to addresses, and b) a proxy to call out to the far cloud for lat-longs not in that coverage area.
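The split described above — answer locally when a query falls inside the MEC coverage area, otherwise proxy to the far cloud — amounts to a coverage check in front of two lookup paths. A minimal sketch, with a hypothetical coverage center and radius (the handler names are ours, not part of the actual service):

```python
import math

COVERAGE_CENTER = (42.3601, -71.0589)  # hypothetical MEC node location
COVERAGE_RADIUS_KM = 50.0              # hypothetical coverage radius

def _distance_km(lat1, lon1, lat2, lon2):
    # Equirectangular approximation; adequate for a coarse coverage check.
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return 6371.0 * math.hypot(x, y)

def handle_query(lat, lon, local_lookup, far_cloud_lookup):
    """Serve from the local store when in coverage, else proxy to the far cloud."""
    if _distance_km(lat, lon, *COVERAGE_CENTER) <= COVERAGE_RADIUS_KM:
        return ("mec", local_lookup(lat, lon))
    return ("far_cloud", far_cloud_lookup(lat, lon))
```

Only the `local_lookup` path benefits from the MEC node’s low latency; the fallback path pays the round trip to the parent Region, exactly as a query from outside the coverage area would.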

One lesson from this journey was not to “lift and shift” an entire application to an MEC environment. We only deployed the parts that are useful for latency-sensitive operations. MEC and far cloud are each good for different things, so it is critical to optimize each part of your application for the strength of the environment it will use.

Why we chose EKS

Prior to deploying our application, we extended our Amazon Elastic Kubernetes Service (EKS) clusters to mobile edge computing in order to standardize our deployment and infrastructure strategy across applications. EKS allowed us to use a managed Kubernetes service to deploy, run, and scale all of our applications in the same manner. Applications are deployed in containers, meaning we can build our infrastructure once and then turn our focus to performance and scalability.

So, what’s the difference between EKS at the edge vs. in the Parent Region? In truth, it really isn’t that different. At the edge, Kubernetes is very much the same as in a regional cloud. Teams can still provision compute, storage and networking resources for Kubernetes, and declaratively deploy applications to 5G Edge, just as in the far cloud. Constructing the edge components of a Kubernetes cluster requires extra AWS resources. Labeling EKS nodes in the edge differently from nodes in the cloud helps to target deployments effectively.
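Targeting a deployment at edge nodes comes down to standard Kubernetes node labels and selectors. As a sketch (the label key `node-zone`, its value, and the image name are hypothetical), a worker can be labeled with `kubectl label node <node-name> node-zone=wavelength`, and a Deployment can then pin its pods to those workers:

```yaml
# Hypothetical manifest: schedule pods only onto edge-labeled workers.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reverse-geocoder
spec:
  replicas: 2
  selector:
    matchLabels:
      app: reverse-geocoder
  template:
    metadata:
      labels:
        app: reverse-geocoder
    spec:
      nodeSelector:
        node-zone: wavelength   # only nodes carrying this label are eligible
      containers:
        - name: reverse-geocoder
          image: example.registry/reverse-geocoder:latest
```

Far-cloud components use the same pattern with a different label value, which keeps the MEC/non-MEC split entirely declarative.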

Verizon 5G Edge and EKS

Here are all of the key components of EKS extended to 5G Edge:

  • Edge Proxy ASG: Exposes applications using a minimum of carrier IP addresses
  • NodePort Service: Exposes Edge Ingress ports to the Edge proxy ASG
  • Edge Ingress: Exposes apps running on EKS MEC nodes to Wavelength
  • EKS Worker ASG: Provides the compute environment for MEC-based apps
  • Cloud Ingress ELB: Exposes apps running on EKS worker nodes in the far cloud
  • EC2 VPC Endpoints: Provide the ability to extend EKS to Wavelength
  • S3 Gateway Endpoint: Provides an optimal path for S3 API calls
  • EKS control plane: Exposes the k8s API on multiple subnets for the Wavelength Zone to reach it with fault tolerance
  • Node Labels: Permit selective deployment of apps to the cloud or to MEC

In 5G Edge, infrastructure considerations include when to Separate, Save and Scale.

Separate services running in the regional cloud from those in the edge.

In the regional cloud, you build for fault tolerance when you move from a data center to AWS. When you move from traditional AWS to MEC, you refine your application to separate the low-latency components from the ones that can tolerate longer round-trip times. Generic compute instances in an AWS Region are sufficient for the latter, so you don’t put every service in the MEC. Instead, focus on the use cases that need low latency and near-edge compute, and leave everything else to the regional cloud.

So where should developers start? Think about the problem domain of your application, and break it down into two parts:

  1. Those that are latency-sensitive and compute-intensive
  2. Those that require more data to be stored and can handle longer retrieval times

The former would be best to go into MEC; the latter into the regional cloud or traditional data center. Think about where the components of your application will function optimally.

Saving up-front costs vs. making long-term investments

At this time, Wavelength Zones offer only On-Demand Instances, or Savings Plans that commit to either one or three years of use. As a result, utilizing fewer, bigger instances will likely pay off in the long run, while still covering fault cases and most workloads. Ever-evolving latency baselines will also affect decisions about how much compute is needed to meet or exceed them.

Scaling for both ad hoc and planned events

In the regional cloud, compute instances scale according to seasonal needs (e.g., Prime Day); they scale up and then back down after the season passes. But what should happen at the edge? What do scaling events look like there? How high can we scale, and how quickly? With MEC nodes deployed to sporting arenas, scaling around scheduled events in those arenas could be one such strategy. New features of AWS Auto Scaling provide more intelligent scaling options.
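Event-driven scaling of this kind is essentially a desired-capacity function over a calendar; in practice it maps onto EC2 Auto Scaling scheduled actions. A minimal sketch, with entirely hypothetical event windows and capacities:

```python
from datetime import datetime

# Hypothetical event calendar: (start, end, desired_capacity) windows.
EVENT_WINDOWS = [
    (datetime(2021, 4, 17, 18), datetime(2021, 4, 17, 23), 8),  # evening game
    (datetime(2021, 4, 18, 12), datetime(2021, 4, 18, 17), 6),  # afternoon game
]
BASELINE_CAPACITY = 2  # steady-state worker count outside events

def desired_capacity(now):
    """Return the worker count to run at a given time."""
    for start, end, capacity in EVENT_WINDOWS:
        if start <= now < end:
            return capacity
    return BASELINE_CAPACITY
```

Because Wavelength capacity is finite and instance types are limited, the event windows would also want a buffer on either side to allow instances to warm up before the crowd arrives.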

Learning from our deployment challenges

We started with the AWS command-line interface (CLI) and then moved to the AWS console. Initially, some aspects were visible in one toolset that weren’t in the other.

Our requirements also drove the specs for automation; eventually, we were able to provision both Wavelength-based VPC components and EC2 instances via AWS CloudFormation. Some preparatory steps were still done with the AWS CLI, since CloudFormation support sometimes lags behind new AWS features.

AWS has not yet made a load balancer resource available in Wavelength. Our workaround was to deploy an Auto Scaling group of HAProxy EC2 instances: a reverse proxy bearing a carrier IP address that forwarded traffic to a backend running the reverse-geocode application. A Lambda function added and removed hosts from the HAProxy configuration, simulating a very basic ELB.
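The Lambda’s job reduces to regenerating an HAProxy backend stanza whenever the set of backend instances changes. A sketch of that rendering step (the backend name, IPs, and port are hypothetical; our actual Lambda and config are more involved):

```python
def render_backend(name, servers, port=8080):
    """Render an HAProxy backend stanza for the given server IPs."""
    lines = [f"backend {name}", "    balance roundrobin"]
    for i, ip in enumerate(servers, start=1):
        # "check" enables HAProxy health checks on each backend server.
        lines.append(f"    server node{i} {ip}:{port} check")
    return "\n".join(lines)
```

On an Auto Scaling lifecycle event, the function would re-render this stanza from the current instance list, write it into the HAProxy configuration, and trigger a reload.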

Finally, deploying an application to 5G Edge with AWS Wavelength isn’t just about the application — it’s about the data that will accompany it. We generated a dataset that contained mappings of lat-long pairs to street addresses that were pertinent to the coverage area of the MEC node.

We decided to deploy a standard application orchestration framework in Wavelength, to allow more focus on the characteristics of the application and less on the infrastructure needed to run it. This is where Kubernetes comes into play. Kubernetes allowed us to declaratively deploy selected application pods to Kubernetes workers running in MEC with the use of node labels. The non-MEC application pods were deployed to Kubernetes workers in the far cloud. This made it easy to separate the application into MEC and non-MEC components.

Looking ahead to the 5G future

We found that 5G Edge with AWS Wavelength gives application users more location-based value, including the ability to see locally relevant data and engage meaningfully with it, with lower latency. Lower latency ultimately means that users (human or machine) will be able to understand the world around them even more quickly and answer questions like, “Where am I?” “What’s around me?” and “How do I adapt to my environment in real time?”

We hope that our reverse geocoder application gives you an idea of how you might approach your own use case and transform existing capabilities to push the edge of what’s possible.

Because in the coming 5G world, demand for contextually aware machines and software is going to increase, and location technology will be a critical component. The time to prepare your applications for MEC with Verizon 5G Edge is now!
