
Method and apparatus for detecting device relative movement Disclosure Number: IPCOM000248780D
Publication Date: 2017-Jan-10
Document File: 6 page(s) / 489K

Publishing Venue

The Prior Art Database


This article presents a new method of detecting a device's relative movement type with respect to the environment in which the device is located. The first section defines the current landscape where this method can be applied; then a high-level view of the method and its advantages is presented. The following sections describe the method in detail and mention other methods and approaches.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 32% of the total text.


Method and apparatus for detecting device relative movement

With the continuous rise of mobile and Internet-of-Things (IoT) devices, new business opportunities are revealed. One such opportunity is using an extension of reality, Augmented Reality (AR), to build a new type of application that interacts with the real three-dimensional (3D) world and augments or enhances it by adding a virtual layer on top. Many existing AR applications use a device camera to detect or recognize a marker or image target; once that target is recognized (by processing each captured camera image or frame and comparing it with a reference), a virtual entity (such as a 3D model, an image, or simply some text) is displayed over the live camera feed.

In this landscape, the problem this method solves is detecting the device's relative movement type (i.e., whether the device is moving forward, backward, up, or down). A device is defined as hardware equipped with a camera that can capture images using image sensors; examples of such hardware are phones, tablets, and IoT devices like an Arduino with a camera module. The method detects the device's relative movement using the device camera, reproducing (to some extent) the human visual perception of objects getting bigger when one moves closer to them, or smaller when one moves farther away. This is achieved by:

- computing the features in consecutive images captured by the camera (using existing methods such as SIFT, patented as US6711293, or ORB: Rublee, Ethan; Rabaud, Vincent; Konolige, Kurt; Bradski, Gary (2011). "ORB: an efficient alternative to SIFT or SURF". IEEE International Conference on Computer Vision (ICCV)),
- filtering the features so that only those coming from static objects remain,
- finding common features between the images,
- determining the type of movement.
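The last step, inferring the movement type from matched features, can be sketched as follows. This is a minimal illustration under stated assumptions, not the disclosure's exact algorithm: it assumes matching has already produced corresponding point coordinates in two consecutive frames, and that if the matched points spread apart the device moved forward (objects appear bigger), if they contract it moved backward, and a vertical shift of their centroid indicates up/down motion. The function name `classify_movement` and the thresholds are illustrative assumptions.

```python
import math

def classify_movement(pts_prev, pts_curr, scale_thresh=0.05, shift_thresh=5.0):
    """Classify device movement from matched feature coordinates.

    pts_prev / pts_curr: lists of (x, y) positions of the SAME features
    in two consecutive frames. Returns one of:
    'forward', 'backward', 'up', 'down', 'still'.
    (Illustrative sketch; thresholds are assumptions, in pixels.)
    """
    n = len(pts_prev)
    # Centroid of each point set.
    cx0 = sum(x for x, _ in pts_prev) / n
    cy0 = sum(y for _, y in pts_prev) / n
    cx1 = sum(x for x, _ in pts_curr) / n
    cy1 = sum(y for _, y in pts_curr) / n
    # Mean distance of the points from their centroid approximates the
    # apparent size of the observed scene.
    s0 = sum(math.hypot(x - cx0, y - cy0) for x, y in pts_prev) / n
    s1 = sum(math.hypot(x - cx1, y - cy1) for x, y in pts_curr) / n
    scale = (s1 - s0) / s0 if s0 else 0.0
    if scale > scale_thresh:
        return 'forward'      # features spread apart: scene appears bigger
    if scale < -scale_thresh:
        return 'backward'     # features contract: scene appears smaller
    # In image coordinates y grows downward, so the scene shifting down
    # (cy1 > cy0) means the camera moved up, and vice versa.
    if cy1 - cy0 > shift_thresh:
        return 'up'
    if cy0 - cy1 > shift_thresh:
        return 'down'
    return 'still'
```

For example, a triangle of features that spreads out between frames would classify as 'forward', while the same triangle shifted downward in the image would classify as 'up'.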

The advantages of this method are:

- the movement of the device carrying the camera is determined, not the movement of objects that pass in front of the camera;
- no other hardware components, such as gyroscopes or accelerometers, are used;
- it works on devices equipped only with a camera;
- one can build applications that are aware of the type of movement of the device they run on and take actions accordingly.

In figure 1 a phone (105) equipped with a camera (104) producing images of a background (103) is shown. Assuming the phone is held by a person, the method outputs whether the person is moving the phone (or themselves, by walking or running) forward or backward, up or down.


Fig 1.

The method disclosed assumes that at least two images have been captured (when there is only one, a second image is captured before the remaining steps).
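This two-image requirement can be handled with a small frame buffer that always keeps the two most recent captures. The following is a minimal sketch; the class name and its interface are illustrative assumptions, not part of the disclosure:

```python
from collections import deque

class FramePairBuffer:
    """Keeps the two most recent captured frames.

    A frame here is any object (e.g. a raw image array). The method's
    remaining steps only run once two frames are available; until then,
    the caller keeps capturing. (Illustrative sketch.)
    """
    def __init__(self):
        self._frames = deque(maxlen=2)  # the oldest frame is dropped automatically

    def push(self, frame):
        self._frames.append(frame)

    def ready(self):
        """True once two consecutive frames have been captured."""
        return len(self._frames) == 2

    def pair(self):
        """Return (previous_frame, current_frame); requires ready()."""
        prev, curr = self._frames
        return prev, curr
```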

Step 1. Capture images – figure 2 shows two consecutive images captured by the camera that are analyzed by the disclosed method.

Step 2. Find and filter common features, and their description – For each image under analysis, a set of features will be determined...
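Once feature descriptors have been computed for each image, common features can be found by matching descriptors between the two sets. For binary descriptors such as ORB's, matching is typically done with the Hamming distance. Below is a minimal pure-Python sketch; the function names and the ratio-test threshold are illustrative assumptions, not the disclosure's implementation:

```python
def hamming(d1, d2):
    """Hamming distance between two equal-length byte descriptors."""
    return sum(bin(a ^ b).count("1") for a, b in zip(d1, d2))

def match_features(desc_a, desc_b, ratio=0.75):
    """Match binary descriptors between two images using a ratio test.

    desc_a, desc_b: lists of bytes objects (e.g. 32-byte ORB descriptors).
    Returns a list of (index_in_a, index_in_b) pairs whose best match is
    clearly better than the second best. (Illustrative sketch.)
    """
    matches = []
    for i, da in enumerate(desc_a):
        # Distances to every descriptor in the second image, sorted.
        dists = sorted((hamming(da, db), j) for j, db in enumerate(desc_b))
        if len(dists) >= 2:
            best, second = dists[0], dists[1]
            # Accept only unambiguous matches (Lowe-style ratio test).
            if best[0] < ratio * second[0]:
                matches.append((i, best[1]))
        elif dists:
            matches.append((i, dists[0][1]))
    return matches
```

The resulting index pairs give the positions of the same physical features in both images, which is the input needed to determine the movement type in the later steps.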