
Ad targeting to nearby vehicles based on segments extracted from computer vision features

IP.com Disclosure Number: IPCOM000238837D
Publication Date: 2014-Sep-22
Document File: 4 page(s) / 62K

Publishing Venue

The IP.com Prior Art Database

Abstract

Targeted advertising is the process of matching ads to a potential audience. In this disclosure we wish to target ads to the drivers and passengers of cars that drive in proximity to a mobile advertising vehicle. Today, most ads on mobile vehicles are static: the same ad is displayed regardless of the actual audience exposed to it. Some solutions take advantage of the location of the advertising vehicle, yet, as in the static case, there is no real consideration of the actual audience being exposed to the ads. Recent advancements in ubiquitous computing make it possible to take advantage of new computer vision and sensor technology that can provide signals about the environment in which the advertising vehicle operates, including, in this case, other vehicles that drive nearby. To utilize this new opportunity, two main challenges must be addressed: 1) How to extract features about car passengers driving in proximity to the mobile ads vehicle. 2) How to map the extracted features into segments which can be used by an ad serving system to target the right ad to the right segment.




The implementation includes four main components:
1. Computer vision and sensors that extract features about nearby vehicles, their drivers and passengers, and their movement.

2. A segmentation module that classifies the captured features into (people) segments.

3. A recommendation module that targets ads to a given segment.

4. An ad display module that is installed on the advertising vehicle.

    An additional local computing module serves as a mediator between the mobile advertising vehicle and the ad matching server, using wireless communication. The following figure depicts a possible implementation of the idea and its flow. Details of the various components are provided below.
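
To make the flow concrete, the following minimal Python sketch shows one way the four components and the mediating local computing module might fit together. All class, function, segment, and ad names here are hypothetical placeholders for illustration and are not part of the disclosure.

from dataclasses import dataclass

@dataclass
class VehicleFeatures:
    car_type: str         # e.g. "minivan", "sports" (hypothetical labels)
    color: str            # dominant car color
    passenger_count: int  # single or multiple passengers
    driving_pattern: str  # e.g. "aggressive", "calm"

def segment(features: VehicleFeatures) -> str:
    """Segmentation module: map captured features to a coarse (people) segment."""
    if features.car_type == "minivan" and features.passenger_count > 2:
        return "family"
    if features.car_type == "sports" and features.passenger_count == 1:
        return "single_driver_premium"
    return "general"

def recommend_ad(segment_id: str) -> str:
    """Recommendation module: pick an ad for the given segment (stub lookup)."""
    catalog = {"family": "theme_park_ad", "single_driver_premium": "watch_ad"}
    return catalog.get(segment_id, "default_ad")

def display(ad_id: str) -> None:
    """Ad display module mounted on the advertising vehicle."""
    print("showing", ad_id)

# Local computing module acting as the mediator between the vehicle and the server:
features = VehicleFeatures("minivan", "blue", 4, "calm")
display(recommend_ad(segment(features)))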

Computer vision and sensors module:

Example visual features that we want to capture from the advertising vehicle's surrounding environment are:




A) Type of car
B) Car color
C) Single or multiple passengers and seating positions
D) Driving pattern
The types of sensors we suggest using are color sensors, IR sensors, and depth sensors, all of which are commercially available in PrimeSense's* and Microsoft's* depth-sensing video cameras (dubbed "PrimeSense" and "Kinect").
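
As an illustration only, and assuming an OpenCV build with OpenNI2 support (which covers PrimeSense/Kinect-class devices), synchronized color and depth frames from such a camera could be grabbed roughly as follows:

import cv2

cap = cv2.VideoCapture(cv2.CAP_OPENNI2)   # open the depth-sensing camera
if not cap.isOpened():
    raise RuntimeError("no OpenNI2 device found")

while cap.grab():                         # grab one synchronized frame set
    ok_d, depth = cap.retrieve(flag=cv2.CAP_OPENNI_DEPTH_MAP)   # 16-bit depth map
    ok_c, bgr = cap.retrieve(flag=cv2.CAP_OPENNI_BGR_IMAGE)     # regular color image
    if not (ok_d and ok_c):
        continue
    # depth and bgr can now be fed to the segmentation step described below
    cv2.imshow("color", bgr)
    if cv2.waitKey(1) == 27:              # ESC to quit
        break
cap.release()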

By using depth-sensing cameras we can get a good segmentation of the cars closest to the advertising vehicle (e.g., a bus). This article shows how video images can be segmented efficiently by using depth video: http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6163000
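
A rough sketch of such depth-based segmentation, thresholding the depth map to a near-distance band and extracting connected components, could look like the following; the distance band and minimum blob area are illustrative values, not taken from the disclosure.

import cv2
import numpy as np

def segment_nearby(depth_mm: np.ndarray, near=1000, far=8000, min_area=5000):
    """Return bounding boxes of blobs whose depth lies between `near` and `far` mm."""
    mask = ((depth_mm > near) & (depth_mm < far)).astype(np.uint8) * 255
    # remove small speckle noise before labeling
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    boxes = []
    for i in range(1, n):                  # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area:               # keep only car-sized blobs
            boxes.append((x, y, w, h))
    return boxes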

As an additional feature to segment out each visible car in the image, License Plate Recognition technology may be used. It is important to note that one would not actually collect the license plate number, but only verify that a particular rectangular region in the image is a license plate (correct aspect ratio, legal number format, legal color, etc.). For each segment one would verify that exactly one license plate was detected. If none was detected for a segmented area of the image, the hypothesis that the area contains a car would be discarded. For stability, a temporal filter would be applied to the tracking of each segment and its license plate.
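
A minimal sketch of this plausibility check is shown below; the aspect-ratio bounds are illustrative assumptions, and only geometric properties are tested, so the plate number itself is never read or stored, as described above.

def looks_like_plate(w_px: int, h_px: int,
                     min_ratio: float = 2.0, max_ratio: float = 6.0) -> bool:
    """Accept a detected rectangle only if its aspect ratio matches a typical plate."""
    if h_px == 0:
        return False
    ratio = w_px / h_px
    return min_ratio <= ratio <= max_ratio

def confirm_car_hypothesis(plate_boxes) -> bool:
    """Keep a segmented region as a 'car' only if exactly one plate-like box was found."""
    plausible = [b for b in plate_boxes if looks_like_plate(b[2], b[3])]
    return len(plausible) == 1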

When we have achieved a rough segmentation of the field of view, and we track each segment from frame to frame, we can do further processing: i) In each segment, look for a car logo from a limited set of brands. Car manufacturers typically place clear and distinct logos at the middle of the car's front. This makes it particularly easy to detect these brand logos using state-of-the-art object detection techniques (using feature detectors such as SURF, SIFT, HOG, ORB, or MSER together with any common machine learning system).
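
As one possible instance of this approach, ORB features (one of the detectors listed above) with brute-force matching could be used to score candidate brand logos inside a car segment; the match-distance and match-count thresholds below are illustrative assumptions, not values from the disclosure.

import cv2

orb = cv2.ORB_create(nfeatures=500)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def count_logo_matches(logo_gray, segment_gray, max_dist=40):
    """Count good ORB matches between a reference logo image and a car segment."""
    kp1, des1 = orb.detectAndCompute(logo_gray, None)
    kp2, des2 = orb.detectAndCompute(segment_gray, None)
    if des1 is None or des2 is None:
        return 0
    matches = bf.match(des1, des2)
    return sum(1 for m in matches if m.distance < max_dist)

def detect_brand(segment_gray, logo_templates):
    """Pick the brand whose logo template matches the segment best (None if too weak)."""
    scores = {brand: count_logo_matches(logo, segment_gray)
              for brand, logo in logo_templates.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= 15 else None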

    ii) With the car segmented from the rest of the image, and with an identification of the car brand, the proportions of the car (as understood from the depth map) should be matched to the car manufacturer's database of car models (3D models). By matching visual appearance (both physical volume and design details...
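
A hypothetical sketch of the matching in step ii), comparing car proportions estimated from the depth map against a small table of model dimensions, is shown below; the brand and model names and all dimension values are placeholders, not real manufacturer data.

import math

MODEL_DB = {
    # brand -> {model: (length_m, width_m, height_m)}  -- placeholder entries
    "brand_a": {"model_x": (4.60, 1.85, 1.45), "model_y": (4.95, 1.90, 1.47)},
    "brand_b": {"model_z": (4.35, 1.80, 1.60)},
}

def closest_model(brand, measured):
    """Return the model of `brand` whose dimensions are nearest to the measured ones."""
    candidates = MODEL_DB.get(brand, {})
    if not candidates:
        return None
    return min(candidates, key=lambda m: math.dist(candidates[m], measured))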