AI for live production


Smart Growth Operational Programme 2014-2020

Measure 1.1: R&D projects of Enterprises

Sub-measure: Industrial research and development works conducted by enterprises

Name of beneficiary: BIVROST Sp. z o.o.

Project title: Development of a system for autonomous directing of live video streaming using artificial neural networks and machine learning algorithms.

Abstract: The goal of this project is to develop a maintenance-free video directing system for live streaming with the use of artificial neural networks and machine learning algorithms.

Total value of the project: 14 747 317,93 PLN

Value of the subsidy: 11 239 751,45 PLN

Period of the project: 2021-2023

The project is co-financed by the European Union from the European Regional Development Fund under the Smart Growth Operational Programme. The project is implemented as part of the ‘Szybka Ścieżka dla Mazowsza’ competition organized by the National Centre for Research and Development.

----------------------------------------------------------------------------------------------------------- 

ABSTRACT

In the edge computing paradigm, AI systems play an important role: they can pre-process data at the lowest level as well as handle complex video analysis at later stages of the pipeline. AI acceleration opens the door to computer vision applications across industries. In entertainment and sports broadcasts it can be used to track players, detect unusual situations, or drive automated video correction systems. Medical applications can detect abnormalities invisible to the human eye. AI-boosted vision systems for security can track luggage or people, increasing safety and protection. Industrial quality control is likewise being transformed by AI-enabled computer vision.

The goal of this project is to develop a maintenance-free video directing system for live streaming using artificial neural networks and machine learning algorithms. Today's digital transformation, which forces a remote model of communication and cooperation, has set high quality requirements for all forms of video, including video created live. New paradigms are needed for engaging viewers, holding their attention, and interweaving sources in live broadcasting. This need applies to the education sector, from general teaching to specialized training, and equally to business teleconferences and webinars, social and cultural events, telemedicine, e-sport, and web creativity.

The main goal of the project is to remove the requirement for rare, specialist knowledge in the field of live video production. Where knowledge of artistic procedures and psychological patterns is required today to implement even the simplest video formats, these aspects should instead be left to intelligent automation, which would transform any lecture hall or conference room into a fully operational studio producing attractive, engaging content. Artificial intelligence solutions are already shaping how content is created and distributed, but emerging markets need more efficient tools. The project is based on the autonomous management of image sources during transmission, using AI in the form of an expert system supported by machine learning: based on images from multiple cameras and accompanying sound streams, a decision algorithm selects the source presented to the final recipient. Data processing follows the edge computing paradigm, placing computational units close to the image sources and minimizing data transfer and latency.
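The core decision loop described above — scoring multiple camera and sound sources and selecting one for the output feed — can be illustrated with a minimal sketch. This is not the project's actual algorithm; the class name, the activity cues (audio level, motion score), the linear scoring weights, and the switching margin are all hypothetical stand-ins for the learned expert-system model.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SourceFrame:
    """One camera's state at a given instant (hypothetical features)."""
    source_id: str
    motion_score: float   # e.g. from frame differencing, normalized to 0..1
    audio_level: float    # normalized loudness, 0..1

class AutoDirector:
    """Sketch of an autonomous directing loop: score each source,
    then switch only when another source is clearly better
    (hysteresis avoids distracting rapid cuts)."""

    def __init__(self, switch_margin: float = 0.15):
        self.switch_margin = switch_margin
        self.current: Optional[str] = None

    def score(self, f: SourceFrame) -> float:
        # Placeholder for the learned scoring model: here, a fixed
        # weighted combination of two simple activity cues.
        return 0.6 * f.audio_level + 0.4 * f.motion_score

    def select(self, frames: List[SourceFrame]) -> str:
        """Return the source id to present to the final recipient."""
        best = max(frames, key=self.score)
        if self.current is None:
            self.current = best.source_id
            return self.current
        cur = next((f for f in frames if f.source_id == self.current), None)
        # Switch only if the best candidate beats the current source
        # by the margin; otherwise hold the current shot.
        if cur is None or self.score(best) > self.score(cur) + self.switch_margin:
            self.current = best.source_id
        return self.current
```

In a real deployment each `SourceFrame` would be produced by an edge node attached to a camera, so only compact feature vectors — not full video — travel to the decision unit, which matches the edge computing goal of minimizing data transfer and latency.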