Method to automatically line up separately recorded video streams

IP.com Disclosure Number: IPCOM000235011D
Publication Date: 2014-Feb-24

Publishing Venue

The IP.com Prior Art Database

Abstract

A method is proposed to enable quicker, automated synchronisation of multiple video and audio streams by analysing the recorded environment and creating synchronisation points using display lighting.

A common scenario in video production is to record from two or more devices at once and then, during post-production, sync these videos up and allow switching between them. Other scenarios, such as picture-in-picture or other cross video/audio stream combinations, require more complex editing because both videos are being displayed, so a cut in one video requires cuts in the others. One example of where this would be useful is when a user is recording one video of what is being displayed on the computer screen and a second video from a camera focussed on the user's face, such as during "Let's Play" videos on YouTube, product demonstrations, etc.

Currently, when video producers wish to stitch these separate video files together, unless the files have the correct timestamps (which, given that recording happens across multiple devices, are not always available), they must do so manually: they must play the videos together and identify common actions, or explicitly perform an action during recording that they can recognise across the multiple devices.

This proposed method would provide an alternative way of identifying where the sync points are within multiple video streams. It works by generating synchronisation data emitted via displays or other sources of light, such as making a screen a solid colour in a specific pattern. By analysing all content streams for this pattern and colour, the system can identify the precise timings that should be used for synchronisation.
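
By way of illustration, the analysis stage could be sketched in Python using the OpenCV library, roughly as follows. The file names, the target colour and the matching threshold are assumptions made for the example rather than part of the disclosure: the sketch scans each recording for the first frame whose average colour matches a known pattern colour, and the difference between the two resulting timestamps gives the offset needed to line the streams up.

import cv2
import numpy as np

TARGET_BGR = np.array([0.0, 0.0, 255.0])  # solid red, in OpenCV's BGR order (assumed pattern colour)
THRESHOLD = 40.0                          # max per-channel distance to count as a match (assumed)

def find_sync_time(path):
    """Return the time (in seconds) at which the sync colour first appears."""
    capture = cv2.VideoCapture(path)
    fps = capture.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if the container reports no rate
    frame_index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        # Average colour of the whole frame; a screen showing a solid colour
        # pulls this average strongly towards the pattern colour.
        mean_bgr = frame.mean(axis=(0, 1))
        if np.all(np.abs(mean_bgr - TARGET_BGR) < THRESHOLD):
            capture.release()
            return frame_index / fps
        frame_index += 1
    capture.release()
    return None

if __name__ == "__main__":
    # Hypothetical file names for two separately recorded streams.
    t_screen = find_sync_time("screen_capture.mp4")
    t_webcam = find_sync_time("webcam.mp4")
    if t_screen is not None and t_webcam is not None:
        # The difference is the trim offset needed to align the streams.
        print("Offset between streams: %.2f seconds" % (t_screen - t_webcam))

In practice the threshold would need tuning for the camera's colour response, and matching could be made more robust by averaging over a window of frames rather than testing a single frame.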

The advantage of using such a system is that it would be trivial to use (i.e. waiting a few seconds whilst the pattern is played), trivial to implement (requiring no custom equipment beyond the camera system already in place), and would save the user time when editing the video at a future point.

This method comprises two stages.

First, during recording, the user must initiate a synchronisation event. The method will analyse the average colour that is being observed over a period of time, e.g. 5 seconds (th...
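
For concreteness, the synchronisation event itself could be produced by something as simple as the following Python sketch, which uses the standard tkinter library to fill the screen with a short sequence of solid colours. The particular colours and the one-second step are illustrative assumptions, not values specified by the disclosure.

import tkinter as tk

# Illustrative pattern: cycle red/green/blue three times, one second each.
PATTERN = ["#ff0000", "#00ff00", "#0000ff"] * 3
STEP_MS = 1000  # how long each solid colour is shown, in milliseconds

def play_sync_pattern():
    root = tk.Tk()
    root.attributes("-fullscreen", True)  # fill the display with the colour

    def step(i=0):
        if i >= len(PATTERN):
            root.destroy()  # pattern finished; recording carries on normally
            return
        root.configure(bg=PATTERN[i])
        root.after(STEP_MS, step, i + 1)

    step()
    root.mainloop()

if __name__ == "__main__":
    play_sync_pattern()

Because every camera pointed at (or capturing) the display records the same colour sequence, the pattern appears in all streams simultaneously and requires no equipment beyond what is already in place.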