
The Smart City Edge

January 3, 2018

By Paula Reinman
Co-authored by Dr. Aakanksha Chowdhery


Smart cities are all the rage.  From Alphabet’s Sidewalk Labs turning Toronto’s eastern waterfront into a smart neighborhood pilot to Barcelona’s superblock-by-superblock plan, cities around the world are integrating technology, data, analytics and citizen input to become cleaner, safer and easier places to live.

Safety and security are critical concerns for smart cities, requiring cameras deployed throughout the city for applications including surveillance, traffic statistics and post-event analysis for disasters like the Boston Marathon bombing.  Today, most of this video data is stored close to the camera, because transferring it to a cloud data center is extremely bandwidth intensive.  Stored this way, the video is difficult to search, a problem that is especially acute when timely remote access would provide valuable real-time insights.  While much of the attention in the networking space is on the cloud, it turns out that smart city security can only work at scale by innovating at the edge of the network.

Many of these important developments are happening in the labs at Princeton University led by 2012 Marconi Society Young Scholar Dr. Aakanksha Chowdhery.


The Video Tsunami

“Cameras are ubiquitous,” Chowdhery says.  “We have surveillance cameras in buildings, traffic monitoring cameras on roads and at lights and drone cameras streaming live events, doing surveillance and helping with disaster response.”

There were over 62 million building surveillance cameras in the US in 2016, plus traffic surveillance cameras in over 400 metro areas, together creating petabytes of content each year.

This tsunami of video and images contains information to help find perpetrators, enforce traffic rules and locate people who need help, but finding exactly the right frames and shots to turn this huge collection of data into information is a technical challenge.  In one surveillance sample, only 20% of a 250-hour video feed contained any faces at all.  Sending the full video to the cloud for analysis wastes bandwidth and power while compromising the security of the video stream.

A better answer is to identify just the right information at the edge of the network and to send only those images to the cloud.  Fog computing, or edge computing, is an architecture proposed to solve exactly this problem.  A fog node is part of a decentralized computing architecture that extends cloud computing capabilities to the network’s edge, bringing analytics, computing, storage and applications to the most efficient place between the data source and the cloud.

In this case, a fog node consists of something like a Raspberry Pi (a $25 computer with basic functionality) or an NVIDIA Jetson TK1 (a basic computer with a GPU) that serves a camera or a group of cameras, identifying the right images for the problem at hand and sending only those to the cloud.

Scaling Smart Cities in the Fog

Processing and analyzing video at the network edge, or in the fog, optimizes bandwidth and power usage, reduces latency and enhances privacy – all at the scale needed for smart cities.

Reducing Bandwidth

Suppose police are looking for a 6’1” tall blond male suspect who is at large in the city, or for a red BMW with license plate number “731RTZ.”  Police can push a query, or classifier, for that description across the network, and each node will analyze video locally as it comes in, looking for images matching the description.  Fog nodes then filter for the specific frames that potentially include the suspect or vehicle.  This greatly reduces the bandwidth needed to transmit relevant video.
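
To make the flow concrete, here is a minimal sketch of a fog node’s filtering loop in Python.  The match_query and upload_to_cloud functions are hypothetical placeholders standing in for a local classifier and the operator’s ingest path; only the shape of the loop (analyze locally, transmit only the matches) comes from the description above.

    import cv2  # OpenCV, used here only for camera capture

    def match_query(frame) -> bool:
        """Placeholder classifier: return True if the frame matches the
        active query (e.g. a person or vehicle fitting the description).
        A real fog node would run a lightweight detector here."""
        raise NotImplementedError

    def upload_to_cloud(frame) -> None:
        """Placeholder for forwarding a matching frame to the cloud."""
        raise NotImplementedError

    def filter_stream(camera_id: int = 0) -> None:
        cap = cv2.VideoCapture(camera_id)
        try:
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                # Analyze locally; only matching frames leave the node.
                if match_query(frame):
                    upload_to_cloud(frame)
        finally:
            cap.release()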

[Figure: Reducing Bandwidth Consumption by Sending Only the Relevant Video Frames]

Saving Power

Fog nodes enable smart city networks to send only filtered video frames (shown above), saving bandwidth.  However, moving the processing burden to the edge of the network consumes power at the fog node.  By activating the local classifier adaptively, the system can balance bandwidth savings against power consumption at the fog node.
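
As a rough sketch of what such an adaptive policy might look like (the thresholds and cost figures below are invented for illustration, not drawn from the research):

    # All budgets and costs below are assumed values, for illustration only.
    POWER_BUDGET_WATTS = 5.0    # assumed power ceiling at the fog node
    CLASSIFY_COST_WATTS = 3.0   # assumed extra draw while the classifier runs
    LINK_BUDGET_MBPS = 2.0      # assumed uplink ceiling

    def should_classify(power_draw_watts: float,
                        uplink_usage_mbps: float) -> bool:
        """Decide per frame whether to run the local classifier."""
        # If the uplink is saturated, filtering is mandatory: spending
        # power locally is the only way to stay within the bandwidth budget.
        if uplink_usage_mbps >= LINK_BUDGET_MBPS:
            return True
        # Otherwise classify only when there is power headroom for it;
        # else fall back to forwarding frames unfiltered.
        return POWER_BUDGET_WATTS - power_draw_watts >= CLASSIFY_COST_WATTS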

Reducing Latency

Many applications, especially surveillance and disaster response, need real-time inputs from a human operator, and in these situations the latency on the video data link is critical to the mission’s success.  Managing wireless link latency is challenging when the cameras collecting the video are mounted on a drone or a moving vehicle.  In such scenarios, a fog node on the drone adapts video bitrates or compresses video content in response to the rapid wireless channel fluctuations as the drone moves, predicting future throughput from spatial throughput maps and location information.  This ensures that the relevant video frames are delivered within the required latency budget.
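
A simplified sketch of that idea follows.  The spatial throughput map is reduced here to a lookup of expected Mbps by coarse grid cell, and the bitrate ladder, safety margin and all numbers are invented for illustration; a real system would fuse location, channel state and history.

    # All numbers below are assumed values, for illustration only.
    BITRATE_LADDER_MBPS = [0.5, 1.0, 2.5, 5.0, 8.0]
    SAFETY_MARGIN = 0.8  # headroom so frames still meet the latency budget

    def predict_throughput(throughput_map: dict, cell: tuple) -> float:
        """Stand-in for a spatial throughput map: expected uplink Mbps
        for the grid cell the drone is about to enter."""
        return throughput_map.get(cell, min(BITRATE_LADDER_MBPS))

    def choose_bitrate(throughput_map: dict, next_cell: tuple) -> float:
        """Pick the highest bitrate that fits the predicted throughput."""
        budget = SAFETY_MARGIN * predict_throughput(throughput_map, next_cell)
        feasible = [r for r in BITRATE_LADDER_MBPS if r <= budget]
        return feasible[-1] if feasible else BITRATE_LADDER_MBPS[0]

For example, if the map predicts 4.0 Mbps for the next cell, the budget is 0.8 x 4.0 = 3.2 Mbps and the encoder steps down to the 2.5 Mbps rung rather than risk stalling the link.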

Ensuring Privacy

When police collect video today from body cameras, they need to blur, or obfuscate, the faces of the people in the video before showing it to anyone.  Fog nodes allow law enforcement to obfuscate faces at the edge of the network.  This prevents potentially sensitive video from traveling to the cloud, where it is difficult to secure.  When specific faces are needed, they can be retrieved from the original images stored at the fog node.
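
A minimal sketch of edge-side obfuscation, using OpenCV’s stock Haar-cascade face detector.  The research itself may well use a stronger detector; this only illustrates the shape of the pipeline.

    import cv2

    # OpenCV's bundled Haar-cascade face detector (ships with opencv-python).
    face_detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def obfuscate_faces(frame):
        """Blur every detected face before the frame leaves the node."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_detector.detectMultiScale(gray, scaleFactor=1.1,
                                               minNeighbors=5)
        for (x, y, w, h) in faces:
            roi = frame[y:y + h, x:x + w]
            # A heavy Gaussian blur makes the face unrecognizable downstream;
            # the unblurred original can remain in local storage at the node.
            frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
        return frame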

Safe and robust smart cities will rely on technology that moves computing closer to where the data connects to the cloud, removing many of the cloud security issues from the equation.  Securing at scale is a critical issue across the Internet of Things landscape.  The volume of video we are creating, combined with the bandwidth and network requirements needed to make that video usable, means the action will be at the edge for safety and security applications.


This post originally appeared in the blog for The Marconi Society, a foundation supporting scientific achievements in communications and the Internet that significantly benefit mankind.
