Robert Belson, Corporate Strategy, Verizon
Mobile edge computing (MEC) is poised to become one of the most exciting cloud computing technologies to date. With more deployment flexibility across the rising number of edge locations and performance characteristics that were previously unrealizable, MEC has the opportunity not only to enable new use cases but also to transform existing applications in the ongoing race to optimize them beyond what is possible today.
While the benefits of MEC — low latency, high throughput, and compute topologically closer to the end user — are well known, the differentiating factors between MEC and other edge computing solutions are more complex. To best characterize what MEC actually is, let’s consider what MEC is not and examine some popular misconceptions about MEC and its counterparts.
Content Delivery Network (CDN)
Cloudflare, Akamai, CloudFront, EdgeCast, and MaxCDN together account for well over 50% of the $12B+ content delivery network industry, and CDNs are an important part of the network edge — but they are a fundamentally different offering than edge compute. Traditionally speaking, CDNs are distributed networks of proxy servers that deliver both static and dynamic content — such as webpages, images, or videos — to the end user at lower latency than if it were served from a more remote, centralized location. However, CDN infrastructure does not reside at the mobile network edge.
While modern CDN capabilities, such as Akamai EdgeWorkers, have brought some compute features into the platform itself, CDNs are fundamentally an edge delivery offering rather than an edge computing offering. Unlike CDNs, which are often designed for general-purpose workloads, MEC is designed specifically for compute-intensive workloads and often includes graphics processing units (GPUs) available to the developer at the network edge.
While CDNs today have a presence in every major metropolis, the CDN infrastructure itself does not reside in the radio network — thereby incurring incremental latency from the edge of the radio network to the CDN edge location. Thus, in the context of mobile edge computing, the bona fide network edge is truly within the radio network itself — with 5G Edge.
AWS Outposts and Local Zones
Another common misconception is that MEC is the same as AWS Outposts and Local Zones — edge services announced by AWS at re:Invent 2019. While AWS Outposts and Local Zones are analogous to 5G Edge, they are also fundamentally different, as neither offering resides in the radio network. Moreover, Outposts is designed primarily for on-premises facilities and focuses on a single-tenancy model, as opposed to the multi-tenancy model of MEC.
Clearly, the proximity to the radio network — coupled with the 5G network itself — is the differentiating factor for mobile edge computing. But why is that the case?
5G in 5G Edge
While 5G Edge allows both 5G and 4G LTE connected devices to interact with compute resources at the edge, the difference in quality-of-experience is unmistakable.
While users can expect 50–80ms latency in 4G networks, 5G networks are anticipated to support end-to-end latency targets in the range of 20ms. The improved latency results are enabled not only by spectrum, such as millimeter wave, but also by the unique advantages afforded by the 5G network design.
Consider the following analogy: you are an ice cream truck operator hoping to optimize your operations to serve the maximum number of customers on a summer afternoon. Given the unparalleled demand, a horde of customers crowds around your truck, each concurrently shouting requests for ice cream flavors. Currently, you optimize your operation across two dimensions:
- Customer selection rate: the number of seconds the employee needs to decide which customer to serve next.
- Service rate: the number of seconds the employee needs to scoop the ice cream onto a cone for a given customer.
In many cases, once the employee has identified which flavor the customer wants, the actual scooping is fairly quick. Rather, the choice of which customer to serve next, in the absence of a clear, single-file line, is often the bottleneck.
In the case of the 5G network, this ice cream-scooping dilemma is ever present. The customer selection rate corresponds to the control plane latency, which covers network attachment and air-interface signaling, and the service rate corresponds to the data plane latency, the per-packet latency once a connection is established. In 5G, while the data plane latency decreases only marginally, the primary benefits come from a significant decrease in control plane latency.
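The analogy can be sketched numerically. The following Python snippet is a toy model with purely illustrative numbers (not measured network values): selection time stands in for control plane latency, service time for data plane latency, and shrinking the selection time yields most of the gain when service is already fast.

```python
def time_to_serve_ms(customers: int, selection_ms: float, service_ms: float) -> float:
    """Toy queue model: each customer costs one selection decision plus one scoop.

    selection_ms plays the role of control plane latency; service_ms the data plane.
    """
    return customers * (selection_ms + service_ms)

# Illustrative numbers only: scooping (the data plane) is already fast,
# so cutting selection time (the control plane) dominates the improvement.
slow_selection = time_to_serve_ms(100, selection_ms=8.0, service_ms=2.0)
fast_selection = time_to_serve_ms(100, selection_ms=2.0, service_ms=2.0)
print(slow_selection, fast_selection)  # prints: 1000.0 400.0
```

Cutting the service (scoop) time alone would have capped the improvement at 20%; cutting the selection time, as 5G does for the control plane, delivers the bulk of it.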
Edge Computing Applications Built Right
Through the benefits of lower control plane and user plane latency, 5G networks will provide users with substantial performance improvements and the flexibility to leverage MEC resources to deliver real-time predictive insights at the network edge.
Verizon’s 5G Ultra Wideband network provides significant throughput advantages over 4G LTE. 5G Ultra Wideband’s speeds — which can reach over 1 Gbps — coupled with its low latency, empower application developers to build in totally new ways. Previously, even with low latency, uploading a medium-resolution (1 MB) image over a traditional 40 Mbps LTE link would take roughly 200 ms of transmission time alone (1 MB is 8 megabits, and 8 Mb ÷ 40 Mbps = 0.2 s), several times longer than the human eye needs to register a visual change. Thus, in the case of image recognition, for example, 4G networks would struggle to deliver the promise of real-time insights.
Now consider Verizon’s 5G UWB network with 1 Gbps throughput: that same 1 MB image could potentially be delivered in roughly 8 milliseconds (8 Mb ÷ 1,000 Mbps), a small fraction of the 4G figure.
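As a first-order sanity check on figures like these, transmission time is simply payload size divided by link rate. The sketch below computes that ideal figure only, ignoring protocol overhead, retransmissions, and radio conditions, so real-world numbers will be somewhat higher.

```python
def transfer_time_ms(size_mb: float, throughput_mbps: float) -> float:
    """Ideal transmission time in milliseconds for size_mb megabytes
    over a link of throughput_mbps megabits per second (1 byte = 8 bits)."""
    return size_mb * 8 / throughput_mbps * 1000

# 1 MB image: 40 Mbps LTE link vs. 1 Gbps 5G UWB link.
print(transfer_time_ms(1, 40))    # 200.0 ms
print(transfer_time_ms(1, 1000))  # 8.0 ms
```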
Simply put, mobile edge computing alone is not enough. With Verizon’s 5G UWB network, application developers have an opportunity to build real-time applications like never before.
Originally published at https://enterprise.verizon.com.