
Analysis and Visualization of Video Content Capture Areas

IP.com Disclosure Number: IPCOM000248997D
Publication Date: 2017-Jan-25
Document File: 4 page(s) / 106K

Publishing Venue

The IP.com Prior Art Database

Abstract

Disclosed is a system that uses cognitive image analysis and cognitive object comparison with visual electronic mapping data to determine the geo-location range for all in-focus objects appearing in a video on a frame-by-frame basis. The system uses the captured objects' location information to visualize on a map the locations featured in a given video, to visualize locations that overlap between multiple videos, and to let a user select a specific location or object on a map and be directed to videos that feature that location.



Analysis and Visualization of Video Content Capture Areas

A video camera equipped with a Global Positioning System (GPS) receiver can record the geo-location of the camera and embed this information in recorded video clips. This geo-location information represents the location of the camera itself, but does not record the location of the subject captured in the focused video frame. In Figure 1, for example, the video camera device records its physical geo-location using GPS as latitude and longitude coordinates; this is the "point of capture". The range of latitude and longitude coordinates for the object in focus is separate and different. As additional frames of video are recorded, the direction of capture and the focus-area coordinates continue to change. The current method does not address the geo-location coordinates of the object captured in the focus area of a video, or the frame-by-frame changes as the direction of capture and focus area shift.

Figure 1: Geo-location of the camera video recording device is not the same as the geo-location of the subject in focus in the video frame
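The gap between the point of capture and the subject's location can be illustrated with a short sketch: given the camera's GPS fix plus a compass bearing and a focus distance (two quantities the disclosure implies but whose sensor sources it does not name, so they are assumptions here), the subject's coordinates can be projected with the standard great-circle destination formula.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def project_subject_location(cam_lat, cam_lon, bearing_deg, distance_m):
    """Estimate the geo-location of the in-focus subject from the camera's
    GPS fix, its compass bearing, and the focus distance, using the
    great-circle destination-point formula. Illustrative sketch only."""
    lat1 = math.radians(cam_lat)
    lon1 = math.radians(cam_lon)
    brg = math.radians(bearing_deg)
    d = distance_m / EARTH_RADIUS_M  # angular distance travelled

    lat2 = math.asin(math.sin(lat1) * math.cos(d) +
                     math.cos(lat1) * math.sin(d) * math.cos(brg))
    lon2 = lon1 + math.atan2(math.sin(brg) * math.sin(d) * math.cos(lat1),
                             math.cos(d) - math.sin(lat1) * math.sin(lat2))
    return math.degrees(lat2), math.degrees(lon2)

# A camera at (40.7580, -73.9855) pointing due north at a subject 100 m away
# yields a subject latitude slightly north of the point of capture:
subject_lat, subject_lon = project_subject_location(40.7580, -73.9855, 0.0, 100.0)
```

Repeating this per frame, as bearing and focus distance change, gives the frame-by-frame subject coordinates that plain camera GPS metadata lacks.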

The novel contribution is a system that uses cognitive image analysis and cognitive object comparison with visual electronic mapping data to determine the geo-location range for all in-focus objects appearing in a video on a frame-by-frame basis.

The system performs a frame-by-frame analysis of the video footage to determine geo-location coordinates of the video footage capture area. It then creates a visualization of the video content capture area for a given video onto an electronic street view or three-dimensional (3D) map. This map visualizes specific locations that appear in a video, the duration of the appearance, and the point and duration of play in the video. Visualization on a street view or 3D map compares video content capture areas among multiple videos to identify common areas of capture.
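A minimal sketch of the per-frame record such an analysis might produce, and of how per-frame capture areas could be aggregated into the on-map appearance durations described above. The record schema, field names, and frame-rate default are illustrative assumptions, not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class CaptureArea:
    """One frame's in-focus capture area (hypothetical schema)."""
    frame: int        # frame index within the video
    time_s: float     # playback time of the frame, in seconds
    lat_min: float    # bounding box of the in-focus geo-location range
    lat_max: float
    lon_min: float
    lon_max: float

def appearance_durations(areas, fps=30.0):
    """Aggregate per-frame capture areas into a mapping of
    bounding box -> (first point of play, total on-screen duration),
    ready to be plotted on a street-view or 3D map."""
    durations = {}
    for a in areas:
        key = (round(a.lat_min, 4), round(a.lat_max, 4),
               round(a.lon_min, 4), round(a.lon_max, 4))
        start, dur = durations.get(key, (a.time_s, 0.0))
        durations[key] = (min(start, a.time_s), dur + 1.0 / fps)
    return durations
```

Each aggregated entry carries exactly the three things the map visualization needs: the location, the duration of its appearance, and the point of play at which it first appears.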

The output of the system can be used in three ways:

- For a given video, to visualize on a map the locations featured in the video
- For multiple videos, to visualize on a map locations that overlap between the videos
- To select a specific location or object on a map and then direct the user to videos that feature this location
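The second and third uses can be sketched as two small queries over stored capture areas. The dictionary layout and video IDs are assumptions for illustration; the disclosure does not specify a storage format.

```python
def overlaps(a, b):
    """True if two capture-area bounding boxes intersect -- the test behind
    visualizing locations that overlap between multiple videos."""
    return (a["lat_min"] <= b["lat_max"] and b["lat_min"] <= a["lat_max"] and
            a["lon_min"] <= b["lon_max"] and b["lon_min"] <= a["lon_max"])

def videos_featuring(point_lat, point_lon, video_areas):
    """Return IDs of videos whose capture areas contain the selected map
    point -- the location-to-videos lookup of the third output use."""
    hits = []
    for video_id, areas in video_areas.items():
        if any(a["lat_min"] <= point_lat <= a["lat_max"] and
               a["lon_min"] <= point_lon <= a["lon_max"] for a in areas):
            hits.append(video_id)
    return hits
```

In practice these queries would run against a spatial index rather than a linear scan, but the bounding-box logic is the same.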

Implementing this system comprises three stages:

1. Video content capture area analysis
2. Visually plotting video content capture area
3. Visual comparison of video content capture areas
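The three stages chain naturally into a pipeline. In the sketch below the cognitive image analysis of Stage 1 is replaced by a toy metadata lookup, and the map plot of Stage 2 by a plain dictionary; only the stage boundaries and data flow are meant to mirror the disclosure.

```python
def stage1_analyze(video_frames):
    """Stage 1 stand-in: frame-by-frame analysis yielding per-frame capture
    areas. Real analysis would use cognitive image recognition; here each
    frame's metadata is assumed to already name its focus area."""
    return [{"frame": i, "area": meta.get("focus_area")}
            for i, meta in enumerate(video_frames)]

def stage2_plot(capture_areas):
    """Stage 2 stand-in: collect areas into a frame -> area mapping that a
    street-view or 3D map layer could render, dropping empty frames."""
    return {a["frame"]: a["area"] for a in capture_areas if a["area"]}

def stage3_compare(plot_a, plot_b):
    """Stage 3 stand-in: capture areas common to two videos."""
    return set(plot_a.values()) & set(plot_b.values())
```

Swapping the stand-ins for real image recognition and map rendering leaves the stage interfaces unchanged.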

Stage 1: Video content capture area analysis

In this stage, the system performs a frame-by-frame analysis of a video to determine geo-location coordinates for the location at which the user captured the video. This stage makes use of a cognitive system to perform image recognition and comparison:

1. Existing metadata is extracted from the video footage. This metadata is embedded in the footage at the time of recording and may include one or both of the following:

A. Geo-location of capture point: where the camera was...