Identity-preserving tracking in multi-camera environments is challenging for several reasons: occlusions arise as soon as several people are present; uneven illumination and differing camera settings make people's appearance vary across images; and non-overlapping cameras make continuous tracking impossible. Moreover, deployment typically requires on-site intervention by an expert, which limits the scalability of the business opportunity.

Our objective is to build on our previous developments in occlusion-robust tracking and realize an easily deployable multi-camera solution for person tracking and monitoring with:

  • Self-calibration that uses observations of walking people to estimate each camera's position and orientation relative to a ground reference, along with optimal camera and tracking parameters;
  • Self-calibration and online adaptation that use observations of walking people to estimate the scene illumination, for robust tracking in unevenly illuminated environments;
  • Person re-identification based on a robust visual signature and the estimated scene illumination, for track linking in camera networks with non-overlapping views;
  • Decentralized, on-camera processing that avoids image transfer and boosts scalability, for privacy-preserving monitoring in large and complex environments.
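The self-calibration idea in the first point can be illustrated with a classic geometric argument: for a camera with a roughly horizontal optical axis, the image rows of a person's feet and head are related linearly across distances, the horizon row is the fixed point of that relation, and the camera height follows from the cross-ratio with an assumed average person height. The sketch below is a minimal illustration of this principle under those simplifying assumptions, not the project's actual calibration method; all function names are illustrative.

```python
import numpy as np

def estimate_horizon(foot_rows, head_rows):
    """Estimate the horizon row from (foot, head) image rows of detected people.

    Under a pinhole model with horizontal optical axis, head_row is a linear
    function of foot_row across distances; the horizon is its fixed point.
    """
    a, b = np.polyfit(foot_rows, head_rows, 1)  # head = a * foot + b
    return b / (1.0 - a)                        # solve v = a * v + b

def estimate_camera_height(foot_rows, head_rows, person_height=1.7):
    """Estimate camera height above ground, assuming an average person height.

    For each detection: cam_height = H * (v_foot - v_horizon) / (v_foot - v_head),
    a cross-ratio that cancels the unknown distance to the person.
    """
    v_h = estimate_horizon(foot_rows, head_rows)
    heights = person_height * (foot_rows - v_h) / (foot_rows - head_rows)
    return float(np.mean(heights))
```

With synthetic detections generated from a known camera (e.g. 3 m high, focal length 1000 px, horizon at row 500), the estimate recovers the true height, which is a useful sanity check before feeding real, noisy detections.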

These features are desirable in the video-surveillance domain and for customer behaviour analytics in the retail domain.
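To make the re-identification step concrete: a visual signature is a compact descriptor of a person's appearance that can be compared across cameras. The sketch below uses plain RGB colour histograms compared with the Bhattacharyya distance; this is a minimal stand-in, not the robust, illumination-compensated signature developed in the project, and all names are illustrative.

```python
import numpy as np

def color_signature(crop, bins=8):
    """Normalized 3-D RGB histogram of a person crop (H x W x 3 array)."""
    hist, _ = np.histogramdd(
        crop.reshape(-1, 3), bins=(bins,) * 3, range=((0, 256),) * 3
    )
    hist = hist.flatten()
    return hist / hist.sum()

def bhattacharyya(p, q):
    """Bhattacharyya distance between two normalized histograms (lower = more similar)."""
    return -np.log(np.sum(np.sqrt(p * q)) + 1e-12)

def match(query_sig, gallery_sigs):
    """Return the index of the gallery signature closest to the query."""
    dists = [bhattacharyya(query_sig, g) for g in gallery_sigs]
    return int(np.argmin(dists))
```

In a camera network, each camera would extract such a signature on-board and transmit only the descriptor, which is what makes the decentralized, privacy-preserving processing in the last bullet point feasible.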

Furthermore, we carry out research activities towards adding head pose estimation in multi-camera environments, to provide a complete, unobtrusive solution for the large-scale analysis of people's visiting patterns in indoor spaces. Another opportunity we have recently begun investigating is integration with indoor localization (and usage patterns) of mobile services.

Some related publications

  • X. Alameda-Pineda, Y. Yan, E. Ricci, O. Lanz, N. Sebe, "Analyzing Free-standing Conversational Groups: A Multimodal Approach", ACM Multimedia (ACM MM), 2015. (Best Paper Award)
  • E. Ricci, J. Varadarajan, R. Subramanian, S. Rota Bulò, N. Ahuja, O. Lanz, "Uncovering Interactions and Interactors: Joint Estimation of Head, Body Orientation and F-formations from Surveillance Videos", International Conference on Computer Vision (ICCV), pp. 4660-4668, 2015. (Oral presentation)
  • X. Alameda-Pineda, J. Staiano, R. Subramanian, L. M. Batrinca, E. Ricci, B. Lepri, O. Lanz, N. Sebe, "SALSA: A Novel Dataset for Multimodal Group Behavior Analysis", IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 2016.
  • Y. Yan, E. Ricci, G. Liu, O. Lanz, N. Sebe, "A Multi-task Learning Framework for Head Pose Estimation under Target Motion", IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 38(6):1070-1083, 2016.
Wednesday, 1 January, 2014 to Saturday, 31 December, 2016