Surveillance and MONitoring baseD on low-power intEGrated visiOn devices

News

July 27, 2015: Experiments on single-capture HDR imaging with one of the prototype chips designed in this project.

November 30, 2014: Patents on focal-plane image processing and enhancement licensed to Fobos Solutions to improve the visual capabilities of their drones.


July 1, 2014: Experimental setup of the demonstrator for early wildfire detection. This setup employs the Silicam IGO camera, based on a low-resolution sensor. The background image, the current capture and the processed frame are shown (see the sketch after the image captions below).

Test setup

Images from the sequence
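
The processed frame in this demonstrator results from comparing the current capture against the background image. As an illustration only, the following Python sketch shows a generic background-subtraction step of that kind; the function names, threshold and frame sizes are assumptions and do not reflect the actual processing implemented on the Silicam IGO camera.

    # Illustrative background subtraction: absolute difference against a
    # reference frame followed by thresholding. Threshold and sizes are
    # arbitrary assumptions, not values used in the demonstrator.
    import numpy as np

    def detect_changes(background, current, threshold=25):
        """Return a binary mask of pixels that deviate from the background
        by more than `threshold` grey levels (e.g., a rising smoke plume)."""
        diff = np.abs(current.astype(np.int16) - background.astype(np.int16))
        return (diff > threshold).astype(np.uint8)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        background = rng.integers(0, 256, size=(120, 160), dtype=np.uint8)
        current = background.copy()
        current[40:60, 70:90] = 255  # synthetic bright region standing in for smoke
        mask = detect_changes(background, current)
        print("changed pixels:", int(mask.sum()))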

June 12, 2014: Watch this video about one of the latest chips designed in this project.

September 16, 2013: Preliminary experiments on early detection of wildfires get media attention.

Test setup

Abstract

The main goal of this project is to build a hardware platform for the development of distributed, co-operative and collaborative vision applications, with special emphasis on outdoor surveillance and monitoring with limited infrastructure.

This platform will consist of a wireless network of smart cameras able to locally process images, extract features and analyze the scene. Wireless communication between the nodes permits the exchange of information in order to realize distributed vision algorithms, or to transmit information to a base station. The implementation of the smart camera is oriented toward reducing power consumption. A single chip contains the image sensor and, concurrently, the processing and memory elements needed to realize low-level vision tasks in a fully parallel manner. This renders an efficient use of the resources and permits the on-chip generation of simplified representations of the scene. These representations are quite useful in the realization of higher-order cognitive tasks, such as object and event classification and scene interpretation.
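To make this node-level flow concrete, the following Python sketch models one smart-camera node in software: a frame is reduced on the node to a coarse representation, and only a compact feature message would be sent over the radio link. All function names, block sizes and thresholds are illustrative assumptions; they do not describe the project's actual chip architecture or API.

    # Software model of a single smart-camera node (illustrative only).
    import numpy as np

    def capture_frame(height=120, width=160, rng=None):
        """Stand-in for the sensor readout: a synthetic 8-bit frame."""
        rng = rng or np.random.default_rng()
        return rng.integers(0, 256, size=(height, width), dtype=np.uint8)

    def simplified_representation(frame, block=8):
        """Coarse scene map (block-wise mean), mimicking an on-chip
        reduced representation computed next to the pixels."""
        h, w = frame.shape
        h2, w2 = h - h % block, w - w % block
        tiles = frame[:h2, :w2].reshape(h2 // block, block, w2 // block, block)
        return tiles.mean(axis=(1, 3)).astype(np.uint8)

    def extract_features(coarse_map, threshold=128):
        """Compact descriptor suitable for low-rate wireless transmission."""
        active = coarse_map > threshold
        if not active.any():
            return {"active_blocks": 0, "centroid": None}
        ys, xs = np.nonzero(active)
        return {"active_blocks": int(active.sum()),
                "centroid": (float(ys.mean()), float(xs.mean()))}

    if __name__ == "__main__":
        coarse = simplified_representation(capture_frame())
        message = extract_features(coarse)
        print(message)  # only this small message would leave the node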

Surveillance systems based on distributed cameras developed so far have required considerable infrastructure: power supply must be conveyed to every node, a great deal of information needs to be transmitted through the network, and a powerful processing facility is required to handle such an information flow. On the other hand, wireless sensor networks have been employed for environmental monitoring and surveillance. However, they typically track scalar magnitudes (pressure, temperature, humidity, chemical concentrations, etc.) by taking sensor readings and transmitting them to the base station. Sensor modalities with a more complex data structure or a larger data volume have typically been excluded. In the case of vision, despite it being our most valuable tool for extracting information from the environment, we have not been able to incorporate vision capabilities into the sensor network nodes without compromising their autonomy.

Our intention is to demonstrate that careful design, proper partitioning of the system and a holistic approach to reducing power consumption can lead to a wireless low-power smart camera that allows the deployment of a low-cost, scalable camera network in a scenario with limited infrastructure. The approach is to find the convergence between wireless sensor networks and distributed smart cameras in order to develop a platform for surveillance and monitoring.

This project is divided into two sub-projects:

  • TEC2012-38921-C02-01:

    Single-chip vision system for networked and distributed vision applications. IMSE-CNM, CSIC-Universidad de Sevilla (PI: Ricardo Carmona Galán)

  • TEC2012-38921-C02-02:

    2D and 3D scene capture, energy scavenging and feature-based hierarchical image processing. CITIUS, Universidade de Santiago de Compostela (PI: Víctor Brea Sánchez)

Last update: July 27, 2015