Intelligent Spatial Positioning of Ads on Videos

IP.com Disclosure Number: IPCOM000247676D
Publication Date: 2016-Sep-27
Document File: 4 page(s) / 338K

Publishing Venue

The IP.com Prior Art Database

Abstract

Disclosed is a technique for effective spatial positioning of advertisement (ad) placeholders on top of online video content, such that the ad does not obstruct significant elements. The novel technique utilizes learning models that are trained using crowd-sourced, profile-aware, eye-focus meta-data, which are further fine-tuned based on ad cancellation data analysis.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 44% of the total text.


Advertisements (ads) that render on top of streaming (i.e., online) videos often cover video content, which is an undesirable effect. Current methods for video frame analysis can detect objects in the frame; however, these approaches are costly, require preprocessing of the video, and can be inaccurate when identifying significant content (i.e., content that is interesting to the user).

A method and system are needed to intelligently position advertisement content that overlays a video so as to reduce the obstruction of significant, meaningful video content.

Enabling art for the disclosed solution includes object recognition and localization techniques that identify objects and draw a bounding box around each. These techniques support ad positioning based on identified object-free areas. In addition, research is growing in the area of identifying meaningful objects within videos and images.
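To make the enabling art concrete, the following is a minimal sketch of positioning an ad placeholder in an object-free area, given bounding boxes from any object detector. The function names, grid step, and rectangle format are illustrative assumptions, not part of the disclosure.

```python
def overlaps(a, b):
    """Axis-aligned overlap test for (x, y, w, h) rectangles."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def object_free_position(frame_w, frame_h, ad_w, ad_h, boxes, step=20):
    """Scan candidate top-left corners in raster order; return the first
    spot where the ad rectangle intersects no detected object bounding
    box, or None if every candidate position is covered."""
    for y in range(0, frame_h - ad_h + 1, step):
        for x in range(0, frame_w - ad_w + 1, step):
            ad = (x, y, ad_w, ad_h)
            if not any(overlaps(ad, box) for box in boxes):
                return (x, y)
    return None
```

For example, with a single detected object spanning the top of a 1280x720 frame, the scan returns the first position below it.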

The novel contribution is a technique for effective spatial positioning of advertisement (ad) placeholders on top of online video content, such that the ad does not obstruct significant elements. The novel technique utilizes learning models that are trained using crowd-sourced, profile-aware, eye-focus meta-data, which are further fine-tuned based on ad cancellation data analysis (as explained in Steps 1-5).

This technique does not involve video/image processing; the user only needs to upload the video. For the initial few days, while training is not yet mature, ad placeholders are positioned randomly. Over time, as more users watch the video, the system learns from them, evolves, and begins positioning ads so that they do not obstruct the view of any object of interest.
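The cold-start behavior described above can be sketched as follows: placements are random until enough eye-focus samples accumulate, after which the least-watched region of the frame wins. The grid size and maturity threshold are assumptions chosen for illustration.

```python
import random

GRID = 4          # frame divided into a GRID x GRID cell mesh
MATURITY = 1000   # focus samples required before the model is trusted

def choose_cell(focus_counts):
    """focus_counts[r][c] = eye-focus hits observed in that cell.
    Returns the (row, col) cell in which to place the ad."""
    total = sum(sum(row) for row in focus_counts)
    if total < MATURITY:
        # Training not mature yet: fall back to a random placeholder.
        return (random.randrange(GRID), random.randrange(GRID))
    # Otherwise place the ad in the cell users look at least.
    return min(((r, c) for r in range(GRID) for c in range(GRID)),
               key=lambda rc: focus_counts[rc[0]][rc[1]])
```

With mature data, the cell with the fewest recorded fixations is selected deterministically; before maturity, every cell is equally likely.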

Different users can have different preferences. The system clusters people based on their associated profiles and then applies the technique; thus, the approach is personalized. Based on a user's profile, the system alters the placeholder location. Further, the technique is fine-tuned based on ad cancellation data analysis. Conversely, if the user repeatedly clicks on the ad (even though it was placed in an unfocused area), the system collects this information to learn and adjust the user's thresholds.
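One way the per-cluster feedback loop could work is sketched below: an ad cancellation raises a penalty on that (cluster, cell) pair, while a click lowers it, and placement prefers the lowest-penalty cell. The cluster keys, learning rate, and class names are illustrative assumptions.

```python
from collections import defaultdict

class ClusterFeedback:
    """Per-profile-cluster placement feedback from cancellations/clicks."""

    def __init__(self, lr=0.1):
        self.lr = lr
        # penalty[(cluster, cell)]: higher means avoid that cell for that cluster
        self.penalty = defaultdict(float)

    def record(self, cluster, cell, cancelled):
        # A cancellation suggests the ad obstructed content at this cell;
        # a click suggests the placement was acceptable to this cluster.
        self.penalty[(cluster, cell)] += self.lr if cancelled else -self.lr

    def best_cell(self, cluster, candidate_cells):
        return min(candidate_cells,
                   key=lambda cell: self.penalty[(cluster, cell)])
```

Because penalties are keyed by cluster, the same video can place ads differently for different user groups, matching the personalization described above.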

Step 1: Collection of eye-focus meta-data at frame level during video rendering

The user uploads a new online video to content rendering systems. When the initial few users watch the video, no overlay ad is displayed. While the video renders, the system tracks and collects the user's eye focus on the screen. In training mode, the system does not show ads, as ads can interfere with focus gathering. In certain embodiments, the system may display a few overlay ads, but in that case it tracks the locations of those ads during training in order to negate the "false hits" caused by their presence. When the user finishes watching, the video content rendering engine sends the collected eye-focus meta-data ...
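Step 1 above can be sketched as aggregating per-frame gaze samples into coarse focus heat maps. The sample format, grid resolution, and function name are assumptions; a real eye tracker would supply the (frame, x, y) stream.

```python
def accumulate_focus(gaze_samples, frame_w, frame_h, grid=4):
    """gaze_samples: iterable of (frame_index, x, y) gaze points in pixels.
    Returns {frame_index: grid x grid list of hit counts}, i.e. a coarse
    per-frame heat map of where users' eyes focused."""
    heatmaps = {}
    for frame, x, y in gaze_samples:
        hm = heatmaps.setdefault(frame, [[0] * grid for _ in range(grid)])
        # Clamp to the last cell so edge pixels stay in range.
        col = min(int(x * grid / frame_w), grid - 1)
        row = min(int(y * grid / frame_h), grid - 1)
        hm[row][col] += 1
    return heatmaps
```

The resulting per-frame heat maps are the eye-focus meta-data that the rendering engine would forward for model training.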