The main objective of this project is to build a hardware platform for surveillance and monitoring in scenarios with limited infrastructure. It will consist of a network of autonomous sensors with local image processing and scene analysis capabilities, able to communicate wirelessly with each other.
The partial objectives are:
- The efficient implementation of concurrent sensors and processors capable of generating a simplified representation of the scene. The processing scheme will be bio-inspired. The maximum power consumption for the smart sensor will be 250 mW. 100% accomplished: as expected, massively parallel processing has yielded significant energy savings.
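As a rough illustration of the kind of pixel-parallel, bio-inspired processing involved, the sketch below mimics resistive-grid diffusion followed by thresholding to obtain a simplified scene map. It is a software emulation with illustrative parameters (`lam`, `steps`, `thresh` are assumptions), not the chips' actual analog focal-plane implementation:

```python
import numpy as np

def diffusion_step(img, lam=0.25):
    """One step of nearest-neighbour diffusion, emulating the
    resistive-grid smoothing performed in parallel at every pixel
    of a focal-plane processor."""
    # 4-neighbour Laplacian with replicated borders
    padded = np.pad(img, 1, mode="edge")
    lap = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
           padded[1:-1, :-2] + padded[1:-1, 2:] - 4.0 * img)
    return img + lam * lap

def simplified_representation(img, steps=10, thresh=0.1):
    """Smooth the image, then keep only pixels that differ strongly
    from the smoothed background -- a crude simplified scene map."""
    smooth = img.astype(float)
    for _ in range(steps):
        smooth = diffusion_step(smooth)
    return np.abs(img - smooth) > thresh * img.max()
```

Because every pixel is updated from its four neighbours only, each step maps directly onto a massively parallel array, which is where the energy savings come from.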
- The implementation of smart vision sensor chips in vertical integration technologies, which will resolve the trade-off between processing speed and spatial resolution. The target image size is 800×600 pixels (SVGA). 60% accomplished: access to 3D-IC technologies has not been possible; functionality has been demonstrated in planar technologies, and extrapolations yield 640×480 pixels in 90 nm CMOS.
- The incorporation of CMOS-compatible light-sensing structures for 3D information extraction. We intend to start a research line on 2D-3D information interaction. 80% accomplished: the implemented structures can extract both 2D and 3D information from the scene; further tests are being carried out.
- On-chip generation of an alternative scene representation based on salient points and characteristic features, at a rate of at least 25 fps. 80% accomplished: prototypes display processing speeds above 25 fps, but the interfaces still need to be optimized.
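The salient-point representation can be pictured with a minimal Harris-style corner response, a standard feature-detection technique sketched here for illustration only; the project's on-chip operators need not coincide with it, and the constants `k` and `thresh` are conventional assumptions:

```python
import numpy as np

def harris_corners(img, k=0.05, thresh=0.01):
    """Binary map of salient points via the Harris corner response."""
    # image gradients (central differences); axis 0 is y, axis 1 is x
    iy, ix = np.gradient(img.astype(float))

    def box(a):
        """3x3 box filter with replicated borders."""
        p = np.pad(a, 1, mode="edge")
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    # smoothed structure-tensor entries
    sxx, syy, sxy = box(ix * ix), box(iy * iy), box(ix * iy)
    # Harris response: det(M) - k * trace(M)^2
    r = sxx * syy - sxy ** 2 - k * (sxx + syy) ** 2
    return r > thresh * r.max()
```

A map like this is orders of magnitude more compact than the raw image, which is what makes it attractive as the representation exchanged between nodes.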
- The inclusion of system-level power management techniques, combining low-power operators, hardware re-use and energy scavenging. 65% accomplished: the approach has been validated by simulation; a test chip is still under design.
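The impact of system-level power management can be sketched with a simple duty-cycling energy budget. The 250 mW figure is the project's target for the active sensor; the sleep power and duty cycle below are illustrative assumptions, not measured values:

```python
def average_power(p_active_mw, p_sleep_mw, duty_cycle):
    """Average power of a duty-cycled node: full power only a
    fraction of the time, sleep power otherwise."""
    return p_active_mw * duty_cycle + p_sleep_mw * (1.0 - duty_cycle)

# 250 mW active, an assumed 0.5 mW sleep mode, 2% duty cycle
avg = average_power(250.0, 0.5, 0.02)  # -> 5.49 mW average
```

At averages of a few milliwatts, energy-scavenging sources become a realistic way to sustain the node.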
- The design of distributed vision algorithms based on the exchange of high-level information between nodes. This will take the form of collaborative tracking of elements through the area under surveillance. It can also be applied to event or gesture detection and interpretation with combined information from different points of view. 60% accomplished: the basic infrastructure is available, but the distributed vision algorithms still need to be worked out.
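A minimal sketch of the hand-over step in such collaborative tracking, assuming nodes exchange compact track descriptors (id plus shared world coordinates) rather than images; the descriptor format and the greedy nearest-neighbour association are illustrative assumptions, not the algorithms still to be worked out:

```python
import math

def associate(local_tracks, remote_tracks, max_dist=2.0):
    """Greedily match remote detections to local tracks by distance
    in shared world coordinates; unmatched remote tracks start new
    local tracks (hand-over between cameras).

    Each track is a tuple (track_id, world_x, world_y)."""
    matches, new, used = {}, [], set()
    for rid, rx, ry in remote_tracks:
        best, best_d = None, max_dist
        for lid, lx, ly in local_tracks:
            d = math.hypot(rx - lx, ry - ly)
            if d < best_d and lid not in used:
                best, best_d = lid, d
        if best is None:
            new.append((rid, rx, ry))   # element entering this view
        else:
            matches[rid] = best         # same element seen by both nodes
            used.add(best)
    return matches, new
```

For example, a remote detection near an existing local track is linked to it, while a far-away one opens a new track in the receiving node.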
- The interfacing of the integrated vision system with the communications infrastructure. The information flow must be agile without compromising the power budget. We will in principle make use of a commercial platform, although we may work on the protocol and the MAC and LLC sub-layers. 50% accomplished: several interfaces have been programmed into an FPGA, but a unified approach is still lacking.
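To illustrate why high-level scene data can keep the information flow agile within the power budget, the sketch below packs a frame's features into a compact binary payload that fits a low-bandwidth radio frame (an IEEE 802.15.4 payload is at most 127 bytes). The packet layout is a hypothetical example, not the project's actual protocol:

```python
import struct

# Hypothetical layout: node id (1 B), frame counter (2 B), feature
# count (1 B), then per feature: x, y (2 B each) and an 8-bit tag.
HEADER = struct.Struct("<BHB")
FEATURE = struct.Struct("<HHB")

def pack_frame(node_id, frame, features):
    """Serialize one frame's salient features into a radio payload."""
    payload = HEADER.pack(node_id, frame, len(features))
    for x, y, tag in features:
        payload += FEATURE.pack(x, y, tag)
    return payload

def unpack_frame(payload):
    """Inverse of pack_frame: recover header fields and features."""
    node_id, frame, n = HEADER.unpack_from(payload, 0)
    feats = [FEATURE.unpack_from(payload, HEADER.size + i * FEATURE.size)
             for i in range(n)]
    return node_id, frame, feats
```

Two features serialize to 14 bytes here, versus hundreds of kilobytes for a raw SVGA frame, which is the key to a low communication power budget.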
- The development of a demonstrator exploiting the capabilities of wireless smart cameras. It will be developed in a scenario that demands node autonomy, wireless communication, easy network deployment and low maintenance. Given the results obtained in the V-mote and WiVisNet projects, the demonstrator will be oriented to surveillance in natural environments, for early wildfire detection, wildlife tracking or perimeter monitoring. 80% accomplished: early wildfire detection has been proved with minimal hardware, and experiments on privacy-aware surveillance are being carried out.