DEVOPS & CLOUD

Introducing the 5G Edge Computing Challenge Winners

Verizon 5G Edge Blog
14 min read · Dec 20, 2021

By Bala Thekkedath, Head of Product Marketing, AWS Wavelength, and Robert Belson, Corporate Strategy, Verizon

In March, we teamed up to bring a one-of-a-kind hackathon to the global developer community to feature state-of-the-art technology using the power of AWS on Verizon 5G. In the 5G Edge Computing Challenge, developers set out to build innovative applications on Verizon 5G Edge and AWS Wavelength to challenge the art of the possible for mobile edge computing applications.

After months of ideation, development and deployment, we saw over 100 submissions from 22 countries. There was an impressive diversity of projects leveraging mobile edge computing to solve problems in public safety, health, entertainment and elsewhere. Our panel of judges across the Verizon and AWS teams ultimately selected winners based on the published criteria: creativity, originality, usage of 5G and AWS Wavelength and potential societal and commercial impact.

On July 6, we announced the winners of the 2021 AWS Wavelength and Verizon 5G Edge Hackathon: the “Eye5G” project by Rico Beti and Xin Tong. Their project helps users with visual impairments detect objects around them and provides near-real-time verbal feedback. Eye5G announces objects using text-to-speech in different languages, tailored to different needs. The project was built using the YOLO object detection model on Amazon EC2 instances in AWS Wavelength Zones.

In recognition of their contributions, Rico and Xin will receive $15,000, along with an opportunity to present the project to executives from AWS and Verizon.

Want to learn more about the winners and their 5G Edge journey? Let’s hear from the teams directly.

1st Place | Eye5G by Rico Beti and Xin Tong

About the application:

“An estimated 100 million people worldwide have moderate or severe distance vision impairment or blindness. Our project, Eye5G, is an experimental aid for people with visual impairments, featuring low-latency, real-time object detection using 5G edge computing, and is designed for indoor and outdoor use.

The front-end consists of an easy-to-use Android app that captures the user’s surroundings and streams a video feed to the back end for object detection. We have also created a prototype of an optional, wearable IoT device in the form of a pair of camera-enabled sunglasses. The sunglasses also have LEDs placed on either side to assist people with partial blindness by emitting light pulses when objects have been detected. Detected objects returned by the server are prioritized, grouped and verbally announced to the user through the app.

The back end of the system is designed to run inside an AWS Wavelength Zone on an EC2 G4 instance optimized for machine learning. Incoming connections are handled by a WebSocket server written in Rust that decodes incoming frames from the video feed and forwards them to the ML model. For object detection, we have incorporated the excellent C-based Darknet with YOLOv4. By utilizing the GPUs available on EC2 G4 instances, we are able to perform real-time object detection in ~0.05 seconds per frame.”
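For readers who want to map that description onto code, here is a minimal, editor-added Python sketch of the same frame-inference loop: a WebSocket server that decodes JPEG frames and runs YOLOv4 detection. The team’s actual server is written in Rust; the model file names, port, and response format below are assumptions, and the CUDA backend requires an OpenCV build compiled with CUDA support.

```python
import asyncio
import json

import cv2
import numpy as np
import websockets

# Load YOLOv4 via OpenCV's Darknet bindings (file paths are placeholders).
net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)  # use the G4 instance's GPU
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

async def handle(ws):
    async for message in ws:
        # Each message is assumed to be one JPEG-encoded frame from the app.
        frame = cv2.imdecode(np.frombuffer(message, np.uint8), cv2.IMREAD_COLOR)
        class_ids, scores, boxes = model.detect(frame, confThreshold=0.5)
        # Return class IDs and boxes; the app maps IDs to spoken labels.
        await ws.send(json.dumps([
            {"class_id": int(c), "score": float(s), "box": [int(v) for v in b]}
            for c, s, b in zip(class_ids, scores, boxes)
        ]))

async def main():
    async with websockets.serve(handle, "0.0.0.0", 8765):
        await asyncio.Future()  # serve until cancelled

asyncio.run(main())
```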

Practical advice for 5G Edge developers:

“Working with AWS Wavelength gave us a unique perspective on real-time systems with ultra-low-latency connections and high bandwidth, and allowed us to expand our knowledge of machine learning in the cloud and mobile app development. 5G and edge computing are interesting technologies for which a plethora of novel use cases will emerge in the near future as they roll out in more parts of the world.

By far the biggest challenge for us was testing. Currently, Wavelength is only available in a few US cities through the Verizon network. As we are based in Perth, Australia, we did not have direct access, and the latency due to distance was problematic. We addressed this issue by developing and testing our system on a standard EC2 instance in the Asia Pacific Region. Once we reached a workable state, we were able to test it in a Wavelength Zone through Verizon’s Nova testing platform. So, keeping things modular is key. Another important aspect when developing for Wavelength is being conscious of performance. To take true advantage of Wavelength’s low latency, the back end needs to generate a response as quickly as possible. A latency of one millisecond loses its significance when the server takes two seconds to generate a response.”
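That modularity advice can be as simple as making the backend endpoint configuration-driven, so the same client build targets a standard EC2 instance during development and the Wavelength Zone in production. A minimal sketch, with placeholder hostnames and environment variable:

```python
# Pick the backend from configuration so the client code never hardcodes
# whether it is talking to a dev instance or a Wavelength Zone instance.
import os

BACKENDS = {
    "dev": "wss://dev.example.com:8765",    # standard EC2, Asia Pacific Region
    "edge": "wss://edge.example.com:8765",  # Wavelength Zone, via carrier gateway
}

def backend_url() -> str:
    """Resolve the backend from an environment variable, defaulting to dev."""
    return BACKENDS[os.environ.get("EYE5G_ENV", "dev")]
```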

2nd Place | Around You by Gustavo Zomer

About the application:

“Around You is a mobile app using machine learning that allows visually impaired people to interact with the world in a faster, more practical and intuitive way, almost restoring their visual perception. It works by identifying objects, clothes, colors, patterns, products, menus, documents and more using only the smartphone camera, a 5G internet connection and the smartphone speaker. The user just points the camera at what they want to see, and the app tells the user what is happening around them. When connected to AWS Wavelength via a 5G network, Around You provides an almost instantaneous interaction with the world, making the lives of people with visual impairments easier, faster and, more importantly, more enjoyable.”

Practical advice for 5G Edge developers:

“AWS Wavelength was essential for enabling the app to achieve the low latency required for this type of application. It allowed the machine learning models to be deployed at the edge of the Verizon 5G network, enabling fast streaming of the device’s camera and audio feed to the machine learning models to achieve low-latency inference and responses from the API server.

One of the best parts of using AWS Wavelength was the easy setup and configuration. Deploying an instance in an AWS Wavelength zone is very similar to deploying an instance in other regions. You can even deploy GPU-enabled instances, which makes it a perfect fit for running machine-learning applications on the edge.

In addition, you don’t even need to be within a Wavelength Zone to develop your app. You can connect to the Wavelength instance either through a VPN or a proxy running on an instance in the parent Region, or you can try the Nova platform. Either way, it is very simple to switch from development to production, as you just need to point your application to the Wavelength instance when you are ready to deploy.
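To illustrate how similar the setup is, here is a minimal, editor-added boto3 sketch of a Wavelength deployment: a subnet pinned to a Wavelength Zone, a carrier gateway for 5G traffic, and a GPU instance with a carrier IP. All IDs, the zone name, the AMI, and the CIDR blocks are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
vpc_id = "vpc-0123456789abcdef0"  # existing VPC (placeholder)

# 1. Subnet pinned to a Wavelength Zone (here: Boston).
subnet = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.2.0/24",
    AvailabilityZone="us-east-1-wl1-bos-wlz-1",
)["Subnet"]

# 2. Carrier gateway: the Wavelength analogue of an internet gateway.
cgw = ec2.create_carrier_gateway(VpcId=vpc_id)["CarrierGateway"]
rt = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]
ec2.create_route(RouteTableId=rt["RouteTableId"],
                 DestinationCidrBlock="0.0.0.0/0",
                 CarrierGatewayId=cgw["CarrierGatewayId"])
ec2.associate_route_table(RouteTableId=rt["RouteTableId"],
                          SubnetId=subnet["SubnetId"])

# 3. GPU instance with a carrier IP, reachable from Verizon 5G devices.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0", InstanceType="g4dn.2xlarge",
    MinCount=1, MaxCount=1,
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "SubnetId": subnet["SubnetId"],
        "AssociateCarrierIpAddress": True,
    }],
)
```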

Finally, my main takeaway from using AWS Wavelength is that it enabled me to think about new project ideas that otherwise would be infeasible due to latency constraints. Being able to deploy instances closer to the users is a new capability that will allow the development of new apps that not only are faster, but also provide an enhanced user experience.”

3rd Place | Transwave by Nnaemeka Eziamaka

About the application:
“Transwave is a speech-to-text web app that leverages the ultra-low-latency of 5G to provide audio transcriptions for individuals with impaired hearing. With the power of 5G, these transcriptions are done at blazing speeds, thereby offering a better customer experience.”

Practical advice for 5G Edge developers:
“I learned a lot when it came to architecting a solution for the 5G Edge Wavelength hackathon. It was interesting that the Wavelength Zone cannot be accessed directly from the internet, though connections to the internet are possible if they originate from the Wavelength Zone.

Several architectures, such as a hub-and-spoke model (with the hub in a regular AWS Region), can be employed to gain access to deployments in the Wavelength Zone.

The most important thing for me was architecting the solution such that only the parts that required ultra-low latency were deployed in Wavelength Zones.

Note that the only infrastructure available in Wavelength Zones is compute instances and storage. While it is possible to use AWS AI services such as Amazon Transcribe, doing so entails making calls back to a regular AWS Region, increasing latency and defeating the purpose of deploying in a Wavelength Zone.

If your solution requires machine learning inference, it is best to deploy that software on a server in the Wavelength Zone rather than calling the AI services provided by AWS.
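A minimal, editor-added sketch of that split, with placeholder URLs and bucket names: the latency-critical transcription call stays on the Wavelength instance, while non-critical work (archiving, say) calls back to the parent Region.

```python
import boto3
import requests

LOCAL_MODEL_URL = "http://localhost:8080/transcribe"  # model self-hosted on the WL instance

def transcribe_chunk(audio: bytes) -> str:
    """Low-latency path: the request never leaves the Wavelength Zone."""
    resp = requests.post(LOCAL_MODEL_URL, data=audio,
                         headers={"Content-Type": "application/octet-stream"})
    resp.raise_for_status()
    return resp.json()["text"]  # assumed response shape

def archive_transcript(text: str, key: str) -> None:
    """Non-latency-critical path: this call leaves the zone for the Region."""
    s3 = boto3.client("s3", region_name="us-east-1")
    s3.put_object(Bucket="transwave-archive", Key=key, Body=text.encode())
```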

It was also important to set up the networking properly, paying attention to the configuration of internet and carrier gateways.

All in all, it was a great learning experience, and with the partnership between AWS and Verizon, I can’t wait to see the solutions that will leverage the 5G Wavelength Zones.”

4th Place | Augmented Reality Remote Assist by Padmashri Suresh, Karthikeyan, Santhosh KV, Vidhya S. and Daksh S.

About the application:
“The application helps to conduct unidirectional “see what I see” video calls with live multidirectional annotations and improve augmented reality (AR) troubleshooting through a machine-learning-powered, knowledge-based recommendation system.”

The challenge:
“Maintenance operations are plagued with numerous challenges:

● Technicians generally seek assistance over voice calls and spend time and energy articulating problems rather than solving them

● Video calls are executed on external, potentially unsafe platforms with their share of latency and quality issues

● Solutions that reap the benefits of AR face long download times for larger models

● Lack of contextual documentation and historical records for service tickets in progress

With the end goal of reducing maintenance turnaround time for complex equipment, weeding out the need for back-and-forth visits by field engineers and resolving the above problem statements, we prototyped an AR-powered iPad application christened ARRA that helps engineers collaborate remotely in real time and enables interactive troubleshooting of complex workflows to circumvent sticky spots.

Our solution benefits from the unique interconnectivity provided by AWS Wavelength Zones and the blazing-fast speeds offered by Verizon 5G to address connectivity and latency challenges.”

Solution features and the Wavelength and Verizon 5G advantage:

● On-demand download of massive 3D models from an AWS Wavelength Zone produced the best benchmarking results (more on that later)

● Ability to conduct a unidirectional “see what I see” video call with live multidirectional annotations and negligible latency, with the potential to be propelled to 4K-quality video feeds

● Ability to conduct AR troubleshooting through an ML-powered recommendation system using Amazon Kendra

AWS and Verizon 5G Edge solution architecture

1. The front-end application runs on an iPad and communicates with ServiceNow to retrieve assigned maintenance tickets.

2. Two EC2 servers were hosted: one within a Wavelength Zone (parent) and another (child) within the same VPC, with communication enabled via an OpenVPN server. This allows us to test and administer the parent EC2 instance.

3. The parent EC2 instance runs a Python API server that hosts and provides access to 3D models. When accessing via the VPN for testing, the iPad app reaches this server by a domain name resolved by an Amazon Route 53 entry. For mobile devices within the Verizon 5G network, communication happens directly via a carrier gateway.

4. Amazon Kendra serves product documentation stored in S3 and provides content integration with ServiceNow.

5. Android devices on the Nova testing platform executed simple wget requests (via Termux) to the parent EC2 server and to us-west-2 S3 storage for benchmarking. Additionally, we used AWS Device Farm Remote Access with an OpenVPN connection to the parent EC2 server to benchmark and compare 4G speeds (configured via network profiles), the current real-world alternative.
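As an illustration of the benchmarking in step 5, here is a minimal, editor-added Python sketch that times the same model download from the Wavelength-hosted server and from regional S3. The URLs and object name are placeholders for the team’s actual endpoints.

```python
import time
import requests

ENDPOINTS = {
    "wavelength": "http://edge.example.com/models/turbine.usdz",
    "us-west-2-s3": "https://example-bucket.s3.us-west-2.amazonaws.com/models/turbine.usdz",
}

for name, url in ENDPOINTS.items():
    start = time.perf_counter()
    size = len(requests.get(url).content)   # download the whole model
    elapsed = time.perf_counter() - start
    print(f"{name}: {size / 1e6:.1f} MB in {elapsed:.2f}s "
          f"({size / 1e6 / elapsed:.1f} MB/s)")
```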

Solution stack: AWS Wavelength, AWS Device Farm, Verizon Nova testing, OpenVPN, Python server, Amazon Kendra, iPad, ARKit, 3D models, Amazon S3.

Practical advice for 5G Edge developers:
Because we were new to the realm of 5G and Wavelength, there was much to learn and discover, including:

• ARRA requires high bandwidth and low latency to maintain its user experience. Wavelength Zones embed compute and storage resources and provide interconnectivity with the telecommunications provider’s 5G network. This architectural design results in a minimal number of hops, and requests never leave the carrier’s network, yielding the low network latency optimal for our use case.

• Amazon Kendra is an exceptional search service powered by machine learning that gives our solution the foresight to provide technicians with relevant documentation before they embark on a job. We hosted product documentation, articles and user manuals to act as references. The service in turn analyzed incident information and offered contextual input to technicians. The service that interfaces with Kendra is hosted in the parent Region of our Wavelength Zone and consumed by the iPad application (see the sketch after this list).

• Nova is a Verizon mobile automation testing platform that connects to 5G and enables testing of Wavelength- and 5G-dependent applications. We used it to execute our tests using the Android and iOS devices geographically closest to our Wavelength Zones.
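For context, here is a minimal, editor-added boto3 sketch of the kind of Kendra lookup an intermediary service might perform: query the index with the incident description and return document excerpts for the technician. The index ID, Region, and response handling are assumptions, not the team’s code.

```python
import boto3

kendra = boto3.client("kendra", region_name="us-west-2")

def relevant_docs(incident_description: str, index_id: str):
    """Return title/excerpt/URI triples for documents matching the incident."""
    resp = kendra.query(IndexId=index_id, QueryText=incident_description)
    return [
        {
            "title": item["DocumentTitle"]["Text"],
            "excerpt": item["DocumentExcerpt"]["Text"],
            "uri": item.get("DocumentURI", ""),
        }
        for item in resp["ResultItems"]
        if item["Type"] == "DOCUMENT"
    ]
```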

To evaluate the efficacy of large 3D model downloads over 5G with AWS Wavelength for our use case, we benchmarked all logical alternatives against each other and measured a six-fold improvement in the time taken to download a ~100 MB model. AWS Wavelength and Verizon 5G have undisputed potential to enable modern use cases earlier thought impossible.

5th Place | Evacuation Support Application by Kazuki Sakamoto, Keisuke Kitamura, Tomonobu Sembokuya, Masaki Yoshida and Hiromichi Matsunami

About the application:
The application enables people to evacuate from dangerous situations by quickly identifying a safer route, providing visual information about streets and shelters in real time.

“We built the Evacuation Support App in this 5G Edge hackathon to enable people to escape from danger quickly by identifying which routes are safe and better. Looking back on our history in Japan, we have been hit by many disasters, and our geography exposes us to ongoing danger. Nowadays there are several applications that show routes to shelters when people are in dangerous situations. However, do these apps take into account how safe the route itself is? These factors inspired us to implement this application.

The architecture is as follows:

Backend system

This system interacts with a drone (borrowed from Media Lease Corporation, http://www.mlinc.co.jp), extracts data from pictures taken by the drone and saves disaster information into a database. The drone is meant to head for an area suspected to have been hit by a disaster. The system comprises an iOS app and backend APIs. This is the system process:

1. The person in charge takes pictures with the drone using the iOS app

2. Data is sent to a backend API

3. Based on the pictures, the backend API identifies whether a disaster happened and, if so, what kind of disaster it is

4. The API saves the data into a database once it identifies that a disaster has happened

The iOS app has two features:

1. The ability to take pictures using a drone. The app can communicate with a DJI drone and take pictures via the drone’s camera using the DJI SDK. Pictures are saved in the drone’s internal storage.

2. The ability to send and save disaster information into the database. Pictures saved in internal storage are sent to S3 using the DJI SDK and AWS Amplify. Location information is attached to the pictures (JPEG) as Exif data. Then, the S3 object information is sent to the backend API.
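The team does this on-device with the DJI SDK and Amplify; as a rough server-side analogue, here is a minimal, editor-added Python sketch that reads the GPS Exif tags back out of an uploaded JPEG with Pillow and pushes the file to S3 with boto3. The bucket, key, and file names are placeholders.

```python
import boto3
from PIL import Image
from PIL.ExifTags import GPSTAGS

def gps_info(jpeg_path: str) -> dict:
    """Extract the GPS IFD from a JPEG's Exif block as a name -> value dict."""
    exif = Image.open(jpeg_path).getexif()
    gps_ifd = exif.get_ifd(0x8825)  # 0x8825 is the GPSInfo tag
    return {GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}

def upload(jpeg_path: str, bucket: str, key: str) -> None:
    """Send the geotagged picture to S3 for the backend API to pick up."""
    boto3.client("s3").upload_file(jpeg_path, bucket, key)

# The location travels inside the picture itself, so the backend can geotag
# the disaster report without a separate payload:
# print(gps_info("drone_shot.jpg"))  # {'GPSLatitude': (...), ...}
```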

The backend API consists of two components:

1. The disaster detection API, built with Amazon SageMaker. This uses a model trained on the IncidentsDataset, which is made up of 446,684 pictures related to events such as traffic accidents and natural disasters. The API predicts the type of disaster that has occurred and returns the result. (For more on the IncidentsDataset, see the paper “Detecting natural disasters, damage, and incidents in the wild.”)

2. The aggregate API. This API calls the disaster detection API to identify whether a disaster happened based on the pictures received from the iOS app. Once the disaster detection API determines that a disaster has occurred, the aggregate API sends location information and the URL of the pictures to DynamoDB.
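A minimal, editor-added boto3 sketch of that flow, assuming a SageMaker endpoint named incidents-detector, a DynamoDB table named DisasterReports, and a simple response shape (all placeholders):

```python
import json
import boto3

runtime = boto3.client("sagemaker-runtime")
table = boto3.resource("dynamodb").Table("DisasterReports")

def process_picture(image_bytes: bytes, lat: float, lon: float, url: str):
    # Call the SageMaker-hosted disaster-detection model.
    resp = runtime.invoke_endpoint(
        EndpointName="incidents-detector",
        ContentType="application/x-image",
        Body=image_bytes,
    )
    result = json.loads(resp["Body"].read())
    if result.get("disaster_detected"):  # assumed response field
        # Record the location and picture URL for the frontend map.
        table.put_item(Item={
            "id": url,  # picture URL used as the key (assumption)
            "lat": str(lat), "lon": str(lon),
            "disaster_type": result.get("type", "unknown"),
        })
    return result
```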

To deliver disaster and shelter information to the frontend, Lambda detects every action in DynamoDB and sends a JSON file to the S3 bucket from which the app is served.

The process is as follows:

1. Get all tables in DynamoDB as a JSON file. Once Lambda detects update, delete or add actions in DynamoDB, it converts the data in each table (such as location and disaster information) to JSON. The converted data is saved in designated S3 buckets.

2. Cleanse the data for the app in S3. Once the JSON file is saved in the designated S3 bucket, Lambda starts to cleanse the file. After completing this, the file is sent to another S3 bucket from which the app is served. Then the HTML reads the data, and the app shows disaster and shelter information.
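As a rough illustration of the export step, here is a minimal, editor-added Lambda handler sketch, assuming a DynamoDB Stream trigger and placeholder table and bucket names. For simplicity it re-exports the whole table on every change and ignores scan pagination.

```python
import json
import boto3

dynamodb = boto3.resource("dynamodb")
s3 = boto3.client("s3")

def handler(event, context):
    # Any stream record (INSERT, MODIFY, REMOVE) triggers a full re-export.
    items = dynamodb.Table("DisasterReports").scan()["Items"]
    s3.put_object(
        Bucket="evacuation-app-data",
        Key="disasters.json",
        Body=json.dumps(items, default=str),  # default=str handles Decimals
        ContentType="application/json",
    )
    return {"exported": len(items)}
```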

Frontend system

This app is a map application showing disaster and shelter information. It is intended to be used when a disaster happens, to help people escape from the dangerous area. It is a web application that runs in browsers across platforms, on both mobile devices and PCs.

There are two main features:

1. Show information. Showing basic geo information is accomplished with Amazon Location Service and MapLibre GL JS. When the app runs, the country code and local government code in accordance with ISO 3166 are designated using a URL parameter. Then the app gets up-to-date disaster and shelter information for the local area specified by the code. The app shows the information on a map using graphical depictions such as pins, red circles and so forth.

2. Search for routes. The app can search for safe routes from the user’s current location to shelters. On the first attempt to get information, it uses the Routes feature of Amazon Location Service. After obtaining the route, it checks whether a dangerous area overlaps with the route and, if it does, makes a best effort not to guide the user into that area, changing the route to a safer one.”
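To make that route check concrete, here is a minimal, editor-added boto3 sketch: compute a route with Amazon Location Service, then test whether any point on it falls inside a known danger circle. The calculator name, danger-zone data, and the crude distance approximation are all assumptions.

```python
import math
import boto3

location = boto3.client("location")

def route_is_safe(start, shelter, danger_zones, calculator="evac-routes"):
    """start/shelter are (lon, lat); danger_zones are ((lon, lat), radius_m)."""
    route = location.calculate_route(
        CalculatorName=calculator,
        DeparturePosition=list(start),
        DestinationPosition=list(shelter),
        IncludeLegGeometry=True,
    )
    for leg in route["Legs"]:
        for lon, lat in leg["Geometry"]["LineString"]:
            for (dlon, dlat), radius_m in danger_zones:
                # Rough equirectangular distance; adequate at city scale.
                dx = (lon - dlon) * 111_320 * math.cos(math.radians(lat))
                dy = (lat - dlat) * 110_540
                if math.hypot(dx, dy) < radius_m:
                    return False
    return True
```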

Honorable mentions

Beyond our top 5 winners, we are also awarding an additional five submissions, in no particular order, with $2,000 in prize money.

1. Smart Road 5G created by Peter B, Oleksandr Saienko, Dmytro Minochkin and Askold Klyus

2. LifeHawk created by Jay Desai

3. Carbon Removal Monitoring by Dahl Winters

4. Coral Imaging by Dave O. and Flapmax

5. Real Time Wireless Player Performance Analytics by Name Soo Park and Ezra Park

What’s next?

You can view all the submissions on the 5G Edge Computing Challenge page. If you missed the opportunity to join this hackathon, stay tuned for the next one.

For questions on where to start and how to build applications using AWS Wavelength and Verizon 5G, join AWS and Verizon team experts at Verizon 5G Labs events or join us live on Twitch.
