- The development of integrated sensing and processing chips that combine image capture with on-chip accelerated feature extraction and learning, using limited resources and a restricted power budget.
- The conception of a chip architecture that extracts 2D and 3D information at the sensor level, yielding an enriched description of the scene that can be shared and combined between groups of camera nodes working in cooperation.
- The design of hardware accelerators that will permit heavy-duty feature and representation learning as well as deep-learning inference.
- The conception of compact and efficient reconfigurable embedded vision systems, where local processing of visual information is combined with agile transmission of metadata and careful power management.
- The development of cooperative vision algorithms that will operate on an enriched representation of the scene, locally shared by a set of nodes so that they can react collectively.
- The replacement of a central hub, where all the data crunching is performed, with a scalable distributed processing system in which visual information drives dynamic adaptation and feedback to enhance the user experience.
- The introduction of scalable, easily deployable, always-on visual monitoring methods that will form the basis for a new class of products and services.
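The hub-less, cooperative operation targeted by these objectives can be sketched in a few lines. This is purely an illustrative assumption, not the project's actual design: the `CameraNode` and `SceneMetadata` classes, the peer-to-peer broadcast, and the detection labels are all hypothetical, standing in for whatever metadata format and transport the real system would use.

```python
from dataclasses import dataclass

@dataclass
class SceneMetadata:
    """Hypothetical compact scene descriptor produced at the sensor level."""
    node_id: str
    objects: list  # e.g. labels of detected objects (illustrative only)

class CameraNode:
    """Hypothetical camera node that shares metadata with its peers
    instead of streaming raw pixels to a central hub."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.shared = {}  # metadata received from the group, keyed by sender

    def sense(self, detections):
        # Local processing: only compact metadata leaves the node.
        return SceneMetadata(self.node_id, detections)

    def receive(self, meta):
        self.shared[meta.node_id] = meta

    def group_view(self):
        # Each node combines the group's metadata into a shared scene view,
        # enabling a collective reaction without any central aggregator.
        seen = set()
        for meta in self.shared.values():
            seen.update(meta.objects)
        return sorted(seen)

# Three nodes broadcast their metadata to every peer (no central hub).
nodes = [CameraNode(f"cam{i}") for i in range(3)]
observations = {"cam0": ["person"], "cam1": ["person", "car"], "cam2": []}
for src in nodes:
    meta = src.sense(observations[src.node_id])
    for dst in nodes:
        dst.receive(meta)

print(nodes[2].group_view())  # every node ends up with the same group view
```

The point of the sketch is the direction of the data flow: each node transmits only metadata, and every node, even one that detected nothing itself (`cam2`), holds the combined description of the scene.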