
Navigation in an Aural Presentation Space

IP.com Disclosure Number: IPCOM000010751D
Original Publication Date: 2003-Jan-16
Included in the Prior Art Database: 2003-Jan-16
Document File: 6 page(s) / 61K

Publishing Venue

IBM

Abstract

Visual presentation of information on small-screen devices such as mobile telephones or PDAs is supplemented by aural information delivered through stereophonic output. Informational data and control objects can be located and distinguished in the aural presentation space by the use of characteristic sounds, their direction and intensity, and by text-to-audio and audio-to-text conversion.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 17% of the total text.


Introduction

    Mobile telephones and other small portable devices (such as PDAs) are being enabled for user access to the World Wide Web, driven by the WAP protocols. Consumer demand for these devices will run into the millions, and users will expect to achieve (nearly) as much with them in accessing the Internet as they can with a "normal" personal computer. The difficulty is that these small mobile devices have very limited "real estate" in terms of viewable area, primarily by virtue of their physical size.

    A presentation space contains information objects (such as text, pictures, and diagrams) and input or control objects (such as Selection Buttons and List Boxes). In a visual presentation space the latter are drawn graphically in a conventional manner, as a visual trigger to their nature, and in so doing draw attention to themselves amid the plethora of visible information. Such visible "singularities" make an eye scan of the visual presentation space significantly more effective.

    This publication proposes techniques that use sound (both speaking and listening) to overcome the visual limitations of small devices, and to offer extended capability through an aural presentation space. These methods are not proposed to overcome any visual limitations of the user (which has been the main focus in the literature), but to overcome the physical limitations of the device. Naturally, this must include the ability to locate and distinguish, aurally, specific points of significance in the aural presentation space.
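The abstract mentions locating objects by the direction and intensity of their characteristic sounds. One way this could work on a stereophonic device is a constant-power pan law combined with distance attenuation; the sketch below is purely illustrative (the function name and parameters are not from the disclosure):

```python
import math

def stereo_gains(azimuth_deg, distance):
    """Map an object's horizontal angle (-90 = hard left, +90 = hard
    right) and distance from the user's point of reference to left and
    right channel gains, so direction and intensity cue its location."""
    # Map azimuth to a pan position in [0, 1].
    pan = (azimuth_deg + 90.0) / 180.0
    # Constant-power pan law keeps perceived loudness steady as the
    # sound moves across the stereo field.
    left = math.cos(pan * math.pi / 2.0)
    right = math.sin(pan * math.pi / 2.0)
    # Simple inverse-distance attenuation conveys how far away the
    # object is; clamp to avoid division by zero at the listener.
    atten = 1.0 / max(distance, 1.0)
    return left * atten, right * atten
```

An object dead ahead at unit distance sounds equally loud in both ears; one far off to the right is heard quietly in the right channel only.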

Nature of the Device

For the purposes of this discussion, it is assumed that the device has, at least:

- a stereophonic capability (perhaps by the use of headphones, or two "button" earpieces similar to those in portable audio cassette and CD players)

- a "text to audio" converter (or access to one, potentially in a software adapter on a server)

- an "audio to text" converter (or access to one, potentially in a software adapter on a server)

- a physically limited two-dimensional viewable area (probably only 5cm by 5cm or thereabouts)

- a mechanism to allow the positioning of the user's point of reference within a presentation space (such as a joystick, a mouse or control keys on a keyboard). Note that this mechanism should be capable of positioning the user's point of reference in three dimensions (not just in two).
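The assumed capabilities above can be collected into a minimal device interface. The sketch below is a simplified model under those assumptions; the class name, methods, and placeholder conversions are illustrative, not from the disclosure (a real device would delegate conversion to a hardware codec or a software adapter on a server):

```python
class AuralDevice:
    """A minimal sketch of the capabilities assumed of the device:
    stereo output, text/audio conversion in both directions, and a
    point of reference positionable in three dimensions."""

    def __init__(self):
        # The user's point of reference within the presentation space.
        self.x = self.y = self.z = 0.0

    def move_point_of_reference(self, dx=0.0, dy=0.0, dz=0.0):
        """Joystick, mouse, or control keys move the point of
        reference; note the third dimension is required."""
        self.x += dx
        self.y += dy
        self.z += dz

    def text_to_audio(self, text):
        """Placeholder text-to-audio conversion; tags the text so the
        round trip below can recover it."""
        return ("<audio>" + text).encode()

    def audio_to_text(self, audio):
        """Placeholder audio-to-text conversion, the reverse path."""
        return audio.decode()[len("<audio>"):]
```

A round trip through the two converters returns the original text, mirroring the disclosure's requirement for conversion in both directions.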

Description of the Proposal

    A web-served visual presentation space (which is normally two-dimensional) contains various types of object. Broadly speaking, these can be grouped into two categories, namely informational data (text, pictures, diagrams, etc.) and locations for user input. The latter take the form of control objects (e.g. Simple Buttons, Radio Buttons, Selection Buttons, List Boxes) which can also be aggregated into more elaborate forms. Typically, in visual presentation spaces, these control object...
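Just as graphical conventions make control objects stand out visually, each object category could be given a characteristic sound that distinguishes it by ear. The registry below is a hypothetical sketch; the object-type keys echo the categories named above, but the sound names are invented for illustration:

```python
# Hypothetical mapping from object category to a characteristic cue
# sound, so a user scanning the aural presentation space can tell
# control objects apart from informational data by ear alone.
CHARACTERISTIC_SOUNDS = {
    "text": "soft_hum",
    "picture": "shutter_click",
    "simple_button": "short_beep",
    "radio_button": "double_beep",
    "selection_button": "triple_beep",
    "list_box": "rising_chime",
}

def sound_for(obj_type):
    """Return the cue sound for an object type, falling back to a
    generic tone for types without a dedicated sound."""
    return CHARACTERISTIC_SOUNDS.get(obj_type, "generic_tone")
```

Combined with direction and intensity cues, such sounds would play the same role in the aural space that drawn widgets play in the visual one.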