from the ghost-in-the-machine dept.
As the number of devices connected to the internet continues to grow, so does the amount of redundant data transferred between sensory terminals and computing units. Computing approaches that operate near or inside sensor networks could help process this growing volume of data more efficiently, decreasing power consumption and reducing the transfer of redundant data between sensing and processing units.
Researchers at Hong Kong Polytechnic University have recently carried out a study outlining the concept of near-sensor and in-sensor computing. These are two computing approaches that enable the partial transfer of computation tasks to sensory terminals, which could reduce power consumption and increase the performance of algorithms.
"The number of sensory nodes on the Internet of Things continues to increase rapidly," Yang Chai, one of the researchers who carried out the study, told TechXplore. "By 2032, the number of sensors will be up to 45 trillion, and the generated information from sensory nodes is equivalent to 10^20 bits/second. It is thus becoming necessary to shift part of the computation tasks from cloud computing centers to edge devices in order to reduce energy consumption and time delay, saving communication bandwidth and enhancing data security and privacy."
[...] So far, the work by Chai and his colleagues has primarily focused on vision sensors. However, near-sensor and in-sensor computing approaches could also integrate other types of sensors, such as those that detect acoustic, pressure, strain, chemical or even biological signals.
1. Feichi Zhou, Yang Chai. Near-sensor and in-sensor computing, Nature Electronics (DOI: 10.1038/s41928-020-00501-9)
2. Feichi Zhou, Zheng Zhou, Jiewei Chen, et al. Optoelectronic resistive random access memory for neuromorphic vision sensors, Nature Nanotechnology (DOI: 10.1038/s41565-019-0501-3)
3. Yang Chai. In-sensor computing for machine vision, Nature (DOI: 10.1038/d41586-020-00592-6)