
Mobile device screen orientation optimisation for a group of users

IP.com Disclosure Number: IPCOM000248584D
Publication Date: 2016-Dec-20
Document File: 2 page(s) / 55K

Publishing Venue

The IP.com Prior Art Database

Abstract

Determination of appropriate screen orientation when multiple users are viewing a device; using face orientation for determining an optimum screen orientation for the orientation of a given face, then factoring in facial recognition and scoring for determining an optimum orientation for the multiple users.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 52% of the total text.


Most mobile devices can switch the orientation of the display between landscape and portrait, typically by using accelerometers to measure the direction of gravity. It is even possible to use a front-facing camera to detect a face and use the viewer's face orientation to set the screen orientation. But if you place the device face up on a table (so that, say, more than one person can look at the device at once), what should the device do?

1. If the mobile device has accelerometers to detect gravity, then it can detect when the device is oriented with the screen facing up.

2. If the mobile device has a front-facing camera, then it can use facial recognition to detect the faces that are in its field of view. It can then use the relative positions of the eyes and nose to detect the orientation of each face.

3. The device can then orient the screen's content based on the orientations of the faces that it can detect. This may mean matching the only face, the "closest" face, the "owner's" face, or the highest number of faces ("most faces").

The advantage is that less physical interaction is needed with the device in order to trigger the correct orientation.
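The selection policies in step 3 can be sketched as a small decision function. This is an illustrative sketch, not part of the disclosure: the function name, the dictionary-free representation of faces as snapped angles, and the fallback to a majority ("most faces") vote are all assumptions.

```python
from collections import Counter

def choose_orientation(face_angles, owner_index=None):
    """Pick a screen orientation (0, 90, 180 or 270 degrees) for a group.

    face_angles: detected face orientations, snapped to 90-degree steps.
    owner_index: index of the owner's face, if a recognition step found it.
    (Names and the priority order are illustrative assumptions.)
    """
    if not face_angles:
        return 0                          # no faces: default to portrait
    if len(face_angles) == 1:
        return face_angles[0]             # match the only face
    if owner_index is not None:
        return face_angles[owner_index]   # prefer the owner's face
    # "most faces": majority vote over the detected orientations
    return Counter(face_angles).most_common(1)[0][0]
```

For example, with faces at 0, 90 and 90 degrees and no recognised owner, the majority vote selects 90 degrees.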

The mobile device can act when the accelerometers detect movement (this typically means registering with the operating system for updates to the orientation). If this movement results in the device being oriented with the screen facing upwards (where "facing upwards" covers a range of angles in which the screen's z-axis is close to vertical), then the device can trigger the following face detection stage.
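The "facing upwards" test above amounts to checking the angle between the screen normal and gravity. A minimal sketch, assuming the common convention that gravity appears along the device's +z axis when it lies flat with the screen up (some platforms report the opposite sign), and a tolerance of 20 degrees standing in for the unspecified "range close to vertical":

```python
import math

def is_screen_facing_up(ax, ay, az, tolerance_deg=20.0):
    """Return True when an accelerometer reading (ax, ay, az), in any
    consistent units, suggests the screen faces upwards.

    The axis convention and the 20-degree tolerance are assumptions.
    """
    g = math.sqrt(ax * ax + ay * ay + az * az)
    if g == 0:
        return False                      # no usable signal (free fall)
    # angle between the screen normal (device z-axis) and gravity
    tilt = math.degrees(math.acos(max(-1.0, min(1.0, az / g))))
    return tilt <= tolerance_deg
```

A device flat on a table reads roughly (0, 0, 9.81) and passes the test; a device held upright reads roughly (0, 9.81, 0) and fails it.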

The device can use the front-facing camera to detect all of the faces within its field of view. Each face can be analysed to detect its orientation relative to the device, by detecting multiple features of the face (such as the nose and eyes) and using their relative positions. This may involve using just three facial features for a crude orientation value, or more features for a more refined one.
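The crudest version of this uses only the two eyes: the angle of the line between them gives the face's roll relative to the device. A sketch under the assumption of image coordinates with y growing downwards, snapping the result to the nearest 90 degrees to match the four screen orientations (adding the nose, as the text notes, would refine this):

```python
import math

def face_orientation(left_eye, right_eye):
    """Estimate a face's orientation relative to the device from the two
    eye positions, given as (x, y) pixel coordinates with y growing
    downwards (an assumption about the image API).

    Returns the roll angle snapped to 0, 90, 180 or 270 degrees.
    """
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    roll = math.degrees(math.atan2(dy, dx))   # 0 for an upright face
    return int(round(roll / 90.0)) % 4 * 90
```

An upright face (eyes level, left eye to the left) yields 0 degrees; a face turned a quarter turn yields 90 or 270 depending on the direction.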

The face detection may also try to identify each of the faces against a known data-set, such as the data representing the owner of the device (see https://en.wikipedia.org/wiki/Facial_recognition_system for how to do this)...
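Once recognition is available, it can feed the "owner's face" and "closest face" policies from step 3. A sketch of how the pieces might combine; the face dictionary fields (`angle`, `size` as a crude proxy for closeness, `identity` from a recognition step) and the owner-then-closest priority are illustrative assumptions, not part of the original disclosure.

```python
def pick_face(faces, owner_id=None):
    """Choose which detected face should drive the screen orientation.

    faces: list of dicts with 'angle' (0/90/180/270), 'size' (larger
    means apparently nearer the camera) and optionally 'identity'.
    (Field names and the priority order are illustrative assumptions.)
    """
    if not faces:
        return None
    if owner_id is not None:
        for face in faces:
            if face.get("identity") == owner_id:
                return face               # the owner's face wins
    # otherwise fall back to the "closest" face by apparent size
    return max(faces, key=lambda f: f["size"])
```

With an owner recognised, their face sets the orientation even if another face is nearer; without recognition, the largest (nearest) face does.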