Why does IoT need multimedia edge compute platforms?

With data processing increasingly happening at the edge as organizations look to control cloud compute costs, lower latency and future-proof their deployments, attention is turning to what must be enabled within the edge compute platform so that data can be processed and inferences generated on the device itself. Running artificial intelligence (AI) algorithms on the new generation of edge devices is making smart solutions possible because it provides a means to optimize and perform the necessary compute processes automatically.

In the multimedia sector, this approach is proving its value: the rapid integration of multimedia peripherals such as cameras, displays, sensors, microphones and speakers within smart modules is a key step toward adding greater functionality to IoT devices. These peripherals are, in effect, sources of data that the edge compute resource in the device can process, enabling the loop to be closed within the device itself, without centralized analysis and decision making, for some applications and use cases.

Audiovisual enhancement of edge compute platforms can strengthen data inferences

Audio sensors enable significant amounts of data to be collected, but the demands of cameras are more complex to accommodate within a smart module. This calls for careful approaches to both camera hardware and software to ensure optimal performance is achieved while maintaining power efficiency and a viable overall device cost for the use case in question.

It is relatively simple to add a camera to an edge computing architecture, and this is particularly straightforward using Quectel’s CamX architecture. For example, when a camera is being developed for a use case, you define the use case in XML and specify that it will have a particular feature, such as video. You then create a session and declare that it will have multiple nodes, which are linked so that data is passed across them. You define all the sensor-related information in the sensor mode and then establish a link between the input and the output. This process can be replicated for any camera requirement, and the smart module can capture the data.
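As a rough illustration of that structure, the sketch below builds a use-case definition of the shape just described: a use case with a feature, a session containing linked nodes, and sensor information held in a sensor mode. The element and attribute names (UseCase, Session, Node, Link, SensorMode) are hypothetical stand-ins, not the actual CamX XML schema, which should be taken from the CamX documentation.

```python
# Illustrative sketch only: builds a use-case definition with the shape
# described above (use case -> feature -> session -> linked nodes).
# Element and attribute names are hypothetical, not the real CamX schema.
import xml.etree.ElementTree as ET

usecase = ET.Element("UseCase", name="VideoPreview")
ET.SubElement(usecase, "Feature").text = "Video"  # the feature this use case has

session = ET.SubElement(usecase, "Session", name="PreviewSession")

# The session contains multiple processing nodes...
sensor = ET.SubElement(session, "Node", name="SensorNode")
# ...with sensor-related information (resolution, frame rate) in the sensor mode
ET.SubElement(sensor, "SensorMode", width="1920", height="1080", fps="30")
ET.SubElement(session, "Node", name="ProcessingNode")
ET.SubElement(session, "Node", name="OutputNode")

# ...which are linked so that frame data is passed from input to output
for src, dst in [("SensorNode", "ProcessingNode"), ("ProcessingNode", "OutputNode")]:
    ET.SubElement(session, "Link", source=src, destination=dst)

ET.indent(usecase)  # pretty-print (Python 3.9+)
print(ET.tostring(usecase, encoding="unicode"))
```

The same skeleton can be replicated for any camera requirement by swapping the feature, sensor mode and node graph.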

Another example is framing data coming from the camera and passing it to the processing resource to generate an inference, for example, whether the image shows a human or an animal. This model provides a foundation for a vast number of use cases powered by camera images, and it is very simple to adapt to specific applications such as reading vehicle number plates, detecting the presence of humans and many other visual use cases that require images to be processed rapidly in order to arrive at an outcome.
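A minimal sketch of that capture-and-infer loop, assuming OpenCV for frame capture, might look like the following. Here classify_frame is a hypothetical placeholder for whatever model actually runs on the module’s processing resource; it is not part of any Quectel API.

```python
# Minimal sketch of the frame -> inference loop described above.
# Assumes OpenCV for capture; classify_frame is a hypothetical placeholder
# for the model actually deployed on the module's compute resource.
import cv2

def classify_frame(frame) -> str:
    """Hypothetical classifier: returns a label such as 'human' or 'animal'.

    In a real deployment this would invoke the on-module inference engine
    (for example a quantized image classifier) rather than this stub.
    """
    return "human"  # stub result for illustration

cap = cv2.VideoCapture(0)  # open the module's camera
try:
    ok, frame = cap.read()  # grab one frame from the camera pipeline
    if ok:
        label = classify_frame(frame)  # inference happens on the device itself
        if label == "human":
            print("Human detected: trigger the use-case action locally")
finally:
    cap.release()
```

Because both the capture and the inference run on the device, the decision (the print statement standing in for a real action) closes the loop locally, with no round trip to a central server.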

Of course, camera images are far from the only use for edge computing resources, but they provide a simple model for demonstrating how analysis on the edge device can add immense value to specific data, moving from raw data to an inference with an actionable outcome. Shortening the loop between gathering data, processing it and acting upon it is the universal benefit of edge compute platforms in IoT, and how to design effectively for this new era is explored in greater depth in the recent Quectel Masterclass, ‘Multimedia edge compute platforms’.