
Photo by Malte Helmhold on Unsplash

MEETWEEN: My Personal AI Mediator for Virtual MEETings BetWEEN People

The MEETWEEN project focuses on eliminating language barriers to make human-human interaction more seamless and natural.

Abstract: In a world increasingly preoccupied with artifacts, interacting with our fellow human beings remains one of our most enjoyable, but also most practically critical, activities. We derive inspiration from each other, solve problems and chart our future together. Yet our interaction with fellow humans is far from seamless or frictionless: despite much greater worldwide reach, we suffer (perhaps more than ever) from isolation, barriers and separation due to language, culture, physical distance, time zones, scheduling conflicts, and distractions to our attention. With greater freedom, reach and flexibility, our isolation and complexity also appear to increase. In the proposed project MEETWEEN, we aim to find solutions to these problems. Rather than letting artificial intelligence (AI) get in the way of the human experience, we harness its power to make human-human interaction more seamless and natural, eliminate language barriers, and replace techno-clutter with support.

The project aims to:
1) build the science-based technology solutions needed to power the next generation of video-conferencing platforms for Europe, supporting smooth, engaging, barrier-free collaboration across languages;
2) exploit the all-round, integrated algorithmic capabilities offered by foundation models and self-supervised training on large datasets to adapt nimbly to participant context and to cultural and regional specificities, including linguistic ones;
3) foster and facilitate business collaboration throughout the European Union by providing real-time, machine-learning-powered speech-to-speech translation, summarization and virtual-assistant services for online meetings;
4) defend a European vision for AI with regard to safety, privacy, and social and ethical approaches, anchored in European regulations, data standards and shared initiatives and resources.
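As a rough illustration of what the speech-to-speech translation service in aim 3 involves, the sketch below wires together the classic cascaded structure (speech recognition, machine translation, speech synthesis). Every function is a hypothetical placeholder, not a MEETWEEN component or API; the project itself explores the integrated foundation-model approach described in aim 2 rather than a fixed cascade.

```python
# Illustrative sketch only: a cascaded speech-to-speech translation pipeline
# (ASR -> MT -> TTS). All functions are hypothetical placeholders, not
# MEETWEEN components; a real meeting service would stream audio
# incrementally and add summarization and assistant features on top.

def transcribe(audio: bytes, src_lang: str) -> str:
    """Hypothetical ASR stage: convert source-language speech to text."""
    return "placeholder transcript"

def translate(text: str, src_lang: str, tgt_lang: str) -> str:
    """Hypothetical MT stage: translate the transcript."""
    return f"[{src_lang}->{tgt_lang}] {text}"

def synthesize(text: str, tgt_lang: str) -> bytes:
    """Hypothetical TTS stage: render the translated text as speech."""
    return text.encode("utf-8")

def speech_to_speech(audio: bytes, src_lang: str, tgt_lang: str) -> bytes:
    """Chain the three stages for one utterance."""
    transcript = transcribe(audio, src_lang)
    translated = translate(transcript, src_lang, tgt_lang)
    return synthesize(translated, tgt_lang)

if __name__ == "__main__":
    # Example call: translate an Italian utterance into English speech.
    print(speech_to_speech(b"\x00\x01", src_lang="it", tgt_lang="en"))
```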

Keywords: speech, deep learning, representation learning, videoconferencing, multimodal, open data, open models, audio, video, gesture, gaze, speech recognition, speech synthesis, expressivity, virtual agent

Participants



Grant

Project ID: 101135798

Call: HORIZON-CL4-2023-HUMAN-01-CNECT

Type of action: HORIZON-RIA
Start date: 2024
Duration: 48 months

Contacts

FBK TeV: Paul I. Chippendale 

Team: MT, STEK and TeV research units