
Towards gesture-based multi-user interactions in collaborative virtual environments

This project deals with the creation of a virtual reality (VR) setup that enables multiple users to participate in collaborative virtual environments and interact via gestures. A collaborative VR session is established through a network of users composed of a server and a set of clients. The server, created by one of the users, manages the communication among clients. Each user's VR setup consists of a Head Mounted Display (HMD) for immersive visualisation, a hand-tracking system to interact with virtual objects and a single-hand joypad to move in the virtual environment. We use a Google Cardboard as the HMD for the VR experience and a Leap Motion for hand tracking, thus making our solution low cost. We evaluate our VR setup through a forensics use case, in which real-world objects pertaining to a simulated crime scene are acquired with a smartphone-based 3D reconstruction pipeline and included in a VR environment. Users can interact using virtual gesture-based tools such as pointers and rulers.

The figure below shows the diagram of a collaborative VR setup, comprising a smartphone-based 3D reconstruction step (left-hand side) and an immersive collaborative experience (right-hand side). Automatically selected images from a smartphone's camera feed are sent to a reconstruction server that produces the 3D model of a scanned object. This 3D model can be imported into an immersive VR environment where multiple users can collaborate. The collaborative virtual environment is based on a client-server architecture; the server is drawn with a red contour. The user who starts a VR session acts as the server. Clients connect to the server either as active or passive users (red and green colours on the right-hand side, respectively). A Leap Motion (orange VR icon inside each circle) is connected to each computer, and hand-tracking data is transmitted to the associated Cardboard device. Users can interact with virtual objects in the same VR space and see each other's actions.
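The client-server session described above can be approximated, purely as an illustrative sketch, with plain TCP sockets: the user who starts the session runs a relay server, and each client's hand-tracking updates are forwarded to all other connected clients. The class names, the JSON message format and the simplified "hand pose" payload below are assumptions for illustration, not the system's actual protocol.

```python
import json
import socket
import threading


class SessionServer:
    """Minimal relay: the user who starts the session acts as the server
    and forwards each client's updates (e.g. hand-tracking poses) to all
    other clients. Illustrative sketch only; no error handling."""

    def __init__(self, host="127.0.0.1", port=0):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.sock.bind((host, port))
        self.sock.listen()
        self.port = self.sock.getsockname()[1]  # OS-assigned port
        self.clients = []
        self.lock = threading.Lock()
        threading.Thread(target=self._accept_loop, daemon=True).start()

    def _accept_loop(self):
        while True:
            conn, _ = self.sock.accept()
            with self.lock:
                self.clients.append(conn)
            threading.Thread(target=self._relay, args=(conn,),
                             daemon=True).start()

    def _relay(self, conn):
        # One newline-terminated JSON message per update; broadcast each
        # update to every client except its sender.
        for line in conn.makefile("r"):
            with self.lock:
                for other in self.clients:
                    if other is not conn:
                        other.sendall(line.encode())


class SessionClient:
    """A user in the session: sends its own updates and receives the
    updates relayed from the other users."""

    def __init__(self, port, host="127.0.0.1"):
        self.sock = socket.create_connection((host, port))
        self.reader = self.sock.makefile("r")

    def send_update(self, update: dict):
        self.sock.sendall((json.dumps(update) + "\n").encode())

    def receive_update(self) -> dict:
        return json.loads(self.reader.readline())
```

In this sketch every client sees the other users' updates but not an echo of its own, which mirrors the idea that each user's own hands are rendered locally while remote hands arrive over the network.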

The use case shown in the animated images below involves two users concurrently connected via WiFi to the same local network, each using a personal Leap Motion to interact via gestures. We authored the VR environment using 3D models obtained with the smartphone-based reconstruction pipeline, including a reconstructed couch, a reconstructed lamp and a knife. Users interact with their hands, and each user's hands are visible in the other's view.

Reference:
N. Pretto and F. Poiesi, "Towards gesture-based multi-user interactions in collaborative virtual environments," Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., vol. XLII-2/W8, pp. 203-208, 2017.
Date: 
Monday, 20 March, 2017 to Monday, 31 July, 2017
Duration: 
4 months
Contacts: 
Dipartimento di Psicologia e Scienze Cognitive (uniTN) - Nicolò Pretto
Fondazione Bruno Kessler - Fabio Poiesi