• Phone: 0461314512
  • FBK Povo
Short bio

Building upon my Electronics degree, I received my PhD in Telecommunications from the University of Lancaster (UK) in 1998, where I developed techniques to robustly transmit images around the world over HF radio, exploiting the ionosphere as a global reflector.

After a two-year post-doc in the field of marine powerline communications, I left academia to work in industry for Philips Electronics as a Research Scientist, first in Le Mans (France) and then in Monza (Italy), investigating topics such as Augmented Reality personal avatars for future 3G networks and video enhancement for LCD televisions.

In 2004, I joined the Fondazione Bruno Kessler Research Institute in Trento (Italy) to further explore my interests in computer vision. For the past 12 years, I have been a researcher, manager and coordinator of several European projects, including FP6, FP7 and H2020 projects such as CHIL, Netcarity and My-e-Director.

After gaining experience in large-project management, in 2011 I decided to pursue new EU funding opportunities as a project coordinator, and was successful in bringing together a strong consortium and constructing a proposal in the field of Augmented Reality, named VENTURI, which generated 3.6 MEuro of EU funding. Then in 2016, I successfully obtained funding for an H2020 project called REPLICATE, which explores the use of mobile devices to convert real-world objects into creative resources for AR/VR and Mixed Reality.

My primary role in the Technologies of Vision Research Unit at FBK is the generation of new funding and the subsequent coordination of Horizon 2020 projects.

Research interests
3D reconstruction; Industry 4.0; cultural heritage preservation; semantic understanding; creativity; mobile computer vision; Augmented/Virtual/Mixed Reality; sports activity analysis
Publications
  1. V. Libal; B. Ramabhadran; N. Mana; F. Pianesi; P. Chippendale; O. Lanz; G. Potamianos,
    Distributed Computing, Artificial Intelligence, Bioinformatics, Soft Computing, and Ambient Assisted Living, 10th International Work-Conference on Artificial Neural Networks, IWANN 2009,
    pp. 687-,
    (International Workshop of Ambient Assisted Living, Salamanca, Spain, 06/10/2009 - 06/12/2009)
  2. Paul J. Chippendale; Michele Zanin; Claudio Andreatta,
    Environmental Content Creation and Visualisation in the Future Internet,
    Future Internet, Berlin, Heidelberg, Springer Verlag,
    pp. 82-,
    (Future Internet Symposium - FIS 2008, Wien, Austria, 09/28/2008 - 09/30/2008)
  3. Paul Chippendale; Michele Zanin; Claudio Andreatta,
    Spatial and temporal attractiveness analysis through geo-referenced photo alignment,
    pp. II-1116-,
    (IEEE International Geoscience and Remote Sensing Symposium - IGARSS 2008, Boston, MA, USA, 07/06/2008 - 07/11/2008)
  4. P. Chippendale; O. Lanz,
    Optimised Meeting Recording and Annotation using Real-time Video Analysis,
    Machine Learning for Multimodal Interaction,
    pp. 50-,
    (5th International Workshop on Machine Learning for Multimodal Interaction - MLMI 2008, Utrecht, The Netherlands, 09/08/2008 - 09/10/2008)
  5. P. Chippendale; M. Zanin,
    Automatic Spatiotemporal Registration of Alpine Photos using DTMs and Image Processing Techniques,
    (6th ARIDA Workshop on Innovations in 3D Measurement, Modeling and Visualisation, Povo, Italy, 02/25/2008 - 02/26/2008)
  6. Paul Chippendale; Michele Zanin; Claudio Andreatta,
    Creating a social sensor network for monitoring the environment,
    This paper presents a technology capable of enabling the creation of a diffuse, calibrated vision-sensor network from the wealth of socially generated geo-referenced imagery, freely available on the Internet. Through the implementation of an accurate image registration system, based on image processing, terrain modelling and subsequent correlation, we will demonstrate how images taken by the public can potentially be used as a means to gather environmental information from a unique, ground-level viewpoint normally denied non-terrestrial sensors (consider vertical or overhanging cliffs). Moreover, we will also show how the same registration technology can be used to visualize geo-referenced environmental content (such as mountain names, tracks, GIS layers, etc.) 'inside' photos, offering scientists and public alike a novel tool to 'see' how the environment is changing around them.
  7. R. Brunelli; A. Albertini; C. Andreatta; P. Chippendale; M. Ruocco; O. Stock; F. Tobia; M. Zancanaro,
    Detecting Focus of Attention,
    PEACH - Intelligent Interfaces for Museum Visits, Berlin, Heidelberg,
    pp. 45-
  8. N. Mana; B. Lepri; P. Chippendale; A. Cappelletti; F. Pianesi; P. Svaizer; M. Zancanaro,
    Multimodal Corpus of Multi-Party Meetings for Automatic Social Behavior Analysis and Personality Traits Detection,
    (International Conference on Multimodal Interfaces (ICMI 2007), Nagoya, Japan, 11/12/2007 - 11/15/2007)
  9. O. Lanz; P. Chippendale; R. Brunelli,
    Multimodal Technologies for Perception of Humans,
    Berlin / Heidelberg,
    pp. 57-,
    (Classification of Events, Activities and Relationships, Evaluation and Workshop - CLEAR, Baltimore, MD, USA, 05/08/2007 - 05/09/2007)
  10. P. Chippendale,
    Towards Automatic Body Language Annotation,
    pp. 487-,
    (7th International Conference on Automatic Face and Gesture Recognition - FG2006 (IEEE), Southampton, UK, 04/11/2006 - 04/13/2006)