Design of parallel processing architectures for embedded deep learning techniques
@ Le2i laboratory (CNRS-Université Bourgogne Franche-Comté), France
In computer vision, object detection remains one of the most challenging problems because it is prone to both localization and classification errors. State-of-the-art detectors are mostly based on a two-step process: region proposals followed by localization of objects within the candidate regions. Common region proposal techniques consume significant running time performing an exhaustive search over the input image, and they output hundreds or thousands of potential regions of interest along with metadata such as objectness scores. This ESR project aims to develop a toolkit of real-time building blocks dedicated to intelligent computation of ROIs for object detection. Both SW for CPU/GPU platforms and HW for FPGA-based cameras will be considered. Work will be done in collaboration with WP4-WP7, in which the building blocks can be reused in complex real-life applications.
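The two-step process described above can be illustrated with a toy Python sketch. This is not the project's actual toolkit: the grid-based proposal generator and the variance-based objectness score are illustrative assumptions standing in for learned or search-based methods such as selective search.

```python
import numpy as np


def propose_regions(image, step=16, box=32):
    """Exhaustively slide a fixed-size window over the image
    (a toy stand-in for real region-proposal techniques)."""
    h, w = image.shape[:2]
    return [(x, y, x + box, y + box)
            for y in range(0, h - box + 1, step)
            for x in range(0, w - box + 1, step)]


def objectness(image, box):
    """Toy objectness score: local intensity variance.
    Real detectors learn this score; variance merely proxies
    for 'there is structure in this window'."""
    x0, y0, x1, y1 = box
    return float(image[y0:y1, x0:x1].var())


def detect(image, top_k=5):
    """Two-step pipeline: propose regions, score them,
    and keep the top-k ROIs for the downstream classifier."""
    boxes = propose_regions(image)
    scores = [objectness(image, b) for b in boxes]
    order = np.argsort(scores)[::-1][:top_k]
    return [(boxes[i], scores[i]) for i in order]


rng = np.random.default_rng(0)
img = np.zeros((128, 128))
img[40:72, 40:72] = rng.random((32, 32))  # one textured "object" patch
rois = detect(img)
```

Even this naive version shows why exhaustive search is costly: the number of candidate windows grows quadratically with image size, which motivates the intelligent ROI computation targeted by the project.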
Yu Liu was born and raised in Taiwan. Granted the opportunity to study abroad at a young age, he obtained his bachelor’s degree in biomedical engineering from the University of Wisconsin–Madison. The multidisciplinary curriculum of his bachelor’s enabled him to further pursue his growing interest in image analysis. In 2018, Yu obtained his master’s degree in computer vision and robotics from VIBOT, one of the Erasmus Mundus Joint Master programmes. Yu’s research interests include visual scene understanding and its applications, such as autonomous driving and robot vision. He also seeks industrial positions in the future and is motivated to see his work applied in industry.
"Architectures for embedded deep learning techniques"
Research and develop deep learning-based spatio-temporal action detection algorithms.
- Exploit relations between nearby frames for computationally efficient video analytics solutions.
- Keep computational cost low enough for power-constrained embedded systems.
- Achieve real-time, online action detection, both spatially and temporally.
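One cheap way to exploit relations between nearby frames, as the objectives above suggest, is to restrict expensive per-frame processing to regions that actually changed. The following sketch is a hypothetical illustration (the grid size and threshold are assumed parameters, not values from the project): it marks grid cells whose frame difference is large, so that CNN inference on an embedded device could be run only on those cells.

```python
import numpy as np


def moving_rois(prev_frame, frame, thresh=0.1, cell=16):
    """Temporal cue via frame differencing: return the grid cells
    whose mean absolute change exceeds `thresh`. Only these cells
    would need expensive processing (e.g. network inference)."""
    diff = np.abs(frame.astype(np.float32) - prev_frame.astype(np.float32))
    h, w = diff.shape[:2]
    rois = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            if diff[y:y + cell, x:x + cell].mean() > thresh:
                rois.append((x, y, x + cell, y + cell))
    return rois


prev = np.zeros((64, 64))
cur = prev.copy()
cur[16:32, 16:32] = 1.0  # motion appears in exactly one grid cell
rois = moving_rois(prev, cur)
```

On a static scene this returns no ROIs at all, so the heavy per-frame workload scales with scene activity rather than frame rate, which is the kind of saving power-constrained embedded systems need.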