Apart from collecting, aggregating and relaying real-time data from networked sensors and components, AI-powered intelligent edge devices run powerful visual analytics to enable tremendous cost savings.
Enhanced visual analysis and localized decision-making by edge devices have emerged as a new trend amidst today’s video-driven IoT innovations. By eliminating the need for a remote cloud server to process video data, visual analytics on the edge reduces latency in decision-making and makes it easy to scale the network by adding more such ‘intelligent’ devices. AI, driven by computer vision and deep learning, is often used to realize this intelligence in edge devices. In applications such as city/home/factory surveillance, unmanned vehicle navigation, operations management or industrial robotics, this can be vital for accurate and timely decisions.
Let us take a look at some of the key advantages of embracing intelligent video analytics at the edge.
Reduce Operational Expenditure (OPEX) with AI
While intelligent edge devices are often expensive, they help save millions in annual OPEX for video-driven IoT networks.
- Edge devices analyze video locally and share with the cloud only the metadata extracted from the analysis. Therefore, rather than gigabytes or terabytes of video, we need to upload only a few kilobytes of data.
- Since video processing is done locally on the device, the compute resource requirements on the cloud are significantly reduced. Associated network bandwidth and power requirements are considerably reduced too. AI thus allows day-to-day operations at a fraction of the cost one would incur otherwise.
- By doing away with the need to exchange sensitive/private video data over the network, we can also drastically reduce the cost of implementing sophisticated security measures.
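As a back-of-envelope illustration of the first point, here is a quick comparison of daily upload volumes for raw video versus per-event metadata. All the numbers (bitrate, event rate, metadata size) are illustrative assumptions, not measurements:

```python
# Back-of-envelope estimate of upload savings when an edge device
# sends only analysis metadata instead of raw video.
# All numbers below are illustrative assumptions, not measurements.

def daily_upload_bytes(hours: float, bitrate_mbps: float) -> float:
    """Bytes uploaded per day when streaming raw video to the cloud."""
    return hours * 3600 * bitrate_mbps * 1e6 / 8

def metadata_upload_bytes(events_per_day: int, bytes_per_event: int) -> float:
    """Bytes uploaded per day when only per-event metadata is sent."""
    return events_per_day * bytes_per_event

video = daily_upload_bytes(hours=24, bitrate_mbps=4)                    # ~43 GB/day
meta = metadata_upload_bytes(events_per_day=500, bytes_per_event=2048)  # ~1 MB/day

reduction = 1 - meta / video
print(f"video: {video/1e9:.1f} GB, metadata: {meta/1e6:.2f} MB, "
      f"reduction: {reduction:.4%}")
```

Even with generous assumptions about how many events occur per day, the upload volume drops by several orders of magnitude.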
Save Voluminous Video Uploads with Edge Gateways
In a network of video devices interfacing with a cloud server, can we harness the power of AI without changing the existing edge devices?
YES. We can fulfill our main goal of saving voluminous video uploads/archival by augmenting rather than changing edge devices. This is typically done using intelligent edge gateways that are installed as a bridge between existing devices and the cloud server. These gateways receive the video data from existing devices (over local network/other interfaces), analyze it and derive decisions that are relayed back to the source devices. Metadata from this analysis is often uploaded to the cloud server. In essence, these gateways act as an add-on, supplying AI functionality that the existing devices lack.
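The gateway pattern described above can be sketched as follows. This is a minimal in-process illustration, assuming a hypothetical `analyze()` routine; in a real deployment the analyzer would be a computer-vision/deep-learning model, and transport to the cameras and cloud would use protocols such as RTSP, MQTT or HTTP rather than method calls:

```python
# Minimal sketch of an intelligent edge gateway bridging legacy cameras
# and the cloud. The analyzer here is a stub standing in for a real
# vision model; names and message shapes are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Frame:
    camera_id: str
    data: bytes  # encoded video frame from an existing (non-AI) device

@dataclass
class Decision:
    camera_id: str
    label: str        # e.g. "person_detected", "no_activity"
    confidence: float

class EdgeGateway:
    """Analyzes video locally, relays decisions back to the source
    device, and uploads only metadata to the cloud."""

    def __init__(self, analyze: Callable[[Frame], Decision]):
        self.analyze = analyze
        self.cloud_outbox: List[dict] = []       # metadata queued for upload
        self.device_inbox: List[Decision] = []   # decisions relayed back

    def on_frame(self, frame: Frame) -> None:
        decision = self.analyze(frame)        # local inference, no cloud trip
        self.device_inbox.append(decision)    # relay decision to the camera
        # Only kilobytes of metadata go to the cloud, never the video itself.
        self.cloud_outbox.append({"camera": decision.camera_id,
                                  "label": decision.label,
                                  "confidence": decision.confidence})

# Stub analyzer standing in for a real vision model:
def stub_analyze(frame: Frame) -> Decision:
    label = "person_detected" if b"person" in frame.data else "no_activity"
    return Decision(frame.camera_id, label, 0.9)

gw = EdgeGateway(stub_analyze)
gw.on_frame(Frame("cam-1", b"...person..."))
```

The key property is that the existing cameras stay unmodified: they keep streaming video over the local network, while the gateway absorbs all AI processing.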
Reduce Bandwidth and Storage costs with Vision and Machine Learning Algorithms
Even if the core function of a device (for instance a surveillance camera, unmanned vehicle DVR or police car NVR) is to stream and/or record video, it will still be able to benefit from AI. By leveraging machine vision and deep learning algorithms, such a device can achieve tremendous savings.
For instance, in a surveillance scenario, the monitoring station does not really care about portions where there is no action, such as a vacant parking lot or a deserted aisle in a retail store. Such portions can be streamed/recorded at lower quality or even skipped altogether. Using AI, we can detect objects and events of interest in the video, including cars being parked, incidents of vandalism, and suspicious loitering. We can then dynamically choose to stream/record at the desired quality when such incidents occur.
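The decision logic above can be sketched in a few lines. The event labels and the bitrate ladder here are illustrative assumptions; a real system would take labels from an upstream detector and map them onto the encoder's supported rate levels:

```python
# Sketch of dynamic quality selection driven by detected events.
# Event names and bitrate values are illustrative assumptions.

EVENTS_OF_INTEREST = {"car_parked", "vandalism", "loitering"}

def select_bitrate_kbps(detected_events: set) -> int:
    """Pick a streaming/recording bitrate from the current set of events."""
    if detected_events & EVENTS_OF_INTEREST:
        return 4000   # full quality while something interesting happens
    if detected_events:
        return 500    # minor activity (e.g. trees swaying): low quality
    return 0          # vacant scene: skip streaming/recording entirely

select_bitrate_kbps({"loitering"})   # full quality
select_bitrate_kbps(set())           # nothing to record
```

Because vacant scenes dominate most surveillance footage, even this simple policy eliminates the bulk of the streamed and archived video.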
For typical video surveillance installations, we can achieve savings of up to 70% in storage space or transmission bandwidth through intelligent decision-making on streaming/recording.
Reduce the Need for Human Intervention
As Benjamin Franklin said, “Time is Money”. In that spirit, AI yields tremendous savings by significantly reducing the effort that human resources need to spend on mundane tasks.
For instance, an unmanned ground vehicle (UGV) or robot is often navigated remotely by an operator watching the live video feed over a radio link. Given bandwidth and connectivity issues with such links, it is wiser to let the UGV/robot navigate itself using on-board AI capabilities. The robot can thus segment the input video into regions, track the pre-defined path and recognize obstacles, freeing up valuable human resources who would otherwise be essential for these tasks. What’s more, localized decisions on the device reduce latency in acting on visual information, and that can be vital for mission-critical applications.
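The on-board decision step can be sketched as below. The perception outputs (a path-clear flag and an obstacle distance) are assumed inputs that a real system would derive from segmentation and detection models running on the live video:

```python
# Sketch of an on-board navigation decision, assuming hypothetical
# perception outputs. In a real UGV these would come from on-device
# segmentation and obstacle-detection models.

def navigation_command(path_clear: bool, obstacle_distance_m: float,
                       stop_distance_m: float = 2.0) -> str:
    """Local decision made on the vehicle, avoiding radio-link latency."""
    if obstacle_distance_m <= stop_distance_m:
        return "stop"         # obstacle too close: halt immediately
    if not path_clear:
        return "replan"       # path blocked ahead: compute a detour
    return "follow_path"      # nominal case: continue along the route
```

Since the decision never leaves the vehicle, it is immune to radio-link dropouts, which is exactly the property mission-critical applications need.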
In a Nutshell
System engineers designing video-driven IoT networks can save millions by integrating visual analytics into the edge devices. Long-term cost gains far exceed the short-term price premium of these intelligent devices.
Stay tuned for our next blog on intelligent decision making in streaming/recording applications.
Ittiam offers innovative solutions to realize intelligent streaming/recording edge devices and gateways. We leverage DSPs/GPUs to accelerate visual analytics for real-time operation.
Write to us at firstname.lastname@example.org for more details.