Towards gesture-based multi-user interactions in collaborative virtual environments
The figure below shows the diagram of a collaborative VR setup comprising a smartphone-based 3D reconstruction step (left-hand side) and an immersive collaborative experience (right-hand side). Automatically selected images from a smartphone's camera feed are sent to a reconstruction server that produces a 3D model of the scanned object. This 3D model can be imported into an immersive VR environment where multiple users collaborate. The collaborative virtual environment is based on a client-server architecture. The server is shown with a red contour: the user who starts a VR session acts as the server. Clients connect to the server either as active or passive users, shown in red and green on the right-hand side, respectively. A Leap Motion (orange VR icon inside each circle) is connected to each computer, and its hand-tracking data is transmitted to the associated Cardboard device. Users can interact with virtual objects in the same VR space and see each other's actions.
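The core of such a setup is the server relaying each user's hand-tracking frames to the other connected clients. The sketch below illustrates this idea in Python with a minimal UDP relay; all names (`encode_hand_frame`, the JSON field layout) are illustrative assumptions, not the actual Leap Motion API or the protocol used in the project.

```python
import json
import socket
import threading

def encode_hand_frame(user_id, palm_pos, finger_tips):
    """Serialize one hand-tracking frame as compact JSON bytes.
    Field names are illustrative, not Leap Motion's real schema."""
    return json.dumps({
        "user": user_id,
        "palm": palm_pos,    # (x, y, z) palm position
        "tips": finger_tips, # five (x, y, z) fingertip positions
    }).encode("utf-8")

def decode_hand_frame(data):
    """Inverse of encode_hand_frame."""
    return json.loads(data.decode("utf-8"))

def relay_server(host="127.0.0.1", port=0, frames_expected=1):
    """Minimal UDP relay standing in for the VR session host:
    receives hand frames from active users and echoes them back.
    A real server would forward each frame to every other client."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    bound_port = sock.getsockname()[1]

    def run():
        for _ in range(frames_expected):
            data, addr = sock.recvfrom(4096)
            sock.sendto(data, addr)  # echo to sender for this sketch
        sock.close()

    threading.Thread(target=run, daemon=True).start()
    return bound_port
```

A client would then send its latest Leap Motion frame each tick and apply the frames it receives to the remote users' hand avatars; UDP is a plausible choice here because stale hand poses are better dropped than replayed late.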
The use case shown in the animated images below involves two users concurrently connected over WiFi on the same local network, each using a personal Leap Motion to interact via gestures. We authored the VR environment using the 3D models obtained with the smartphone-based reconstruction pipeline, including a reconstructed couch, a reconstructed lamp, and a knife. Users interact with their hands, and each user's hands are visible in the other's view.