
Using Directional-Audio to Provide an Enhanced End-User Experience to Those Involved in Group-Dispatch Calls

IP.com Disclosure Number: IPCOM000004785D
Original Publication Date: 2001-May-21
Included in the Prior Art Database: 2001-May-21
Document File: 3 page(s) / 12K

Publishing Venue

Motorola

Related People

Jay Almaula: AUTHOR [+3]

Related Documents

http://www-mice.cs.ucl.ac.uk/multimedia/software/rat/features.html: URL [+3]

Abstract

Using Directional-Audio to Provide an Enhanced End-User Experience to Those Involved in Group-Dispatch Calls



Using Directional-Audio to Provide an Enhanced End-User Experience to Those Involved in Group-Dispatch Calls

by Jay Almaula, Jeff Eschbach and Loren J. Rittle

1 Known State of the Art

Directional-audio (also referred to as “3D-audio” and by various trademarked names) refers to the use of digital signal processing of audio signals to create the perception for a listener that an audio source originates from a particular direction. Creating such a sound effect typically requires an array of speakers or that the listener wear headphones [1]. Human-interface research has shown that multiple speakers in a call may be more clearly understood when they are perceived to be spatially distinct [2].
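
As an illustration of the underlying idea only (not the specific processing used by any system cited here), the minimal sketch below steers a mono source toward a chosen azimuth using simple constant-power stereo panning; the function name and parameters are illustrative, and a headphone implementation would typically use HRTF-based filtering instead.

    import numpy as np

    def pan_constant_power(mono_samples, azimuth_deg):
        """Constant-power stereo panning: make a mono source appear to come
        from the given azimuth (-90 = hard left, 0 = center, +90 = hard right).
        A much simpler cue than full 3D-audio rendering, but it shows how
        signal processing alone can assign a perceived direction."""
        # Map azimuth [-90, +90] degrees onto a pan angle in [0, pi/2].
        theta = (azimuth_deg + 90.0) / 180.0 * (np.pi / 2.0)
        left_gain = np.cos(theta)
        right_gain = np.sin(theta)
        # Return a two-channel (stereo) signal with the applied gains.
        return np.column_stack((mono_samples * left_gain,
                                mono_samples * right_gain))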

Modern audio conferencing applications are using directional audio to improve the quality of multi-party calls. For example, version 4.2.13 of the RAT audio conferencing application incorporates the directional-audio feature [3].

2 Open Issues

From the known state of the art, it is clear that directional-audio capability may be added to a group call, and to other forms of multi-party calls, in order to better differentiate multiple speakers. In systems that use directional-audio, the source directions assigned to speakers may need to change dynamically as the exact group membership changes over time. However, current implementations provide automatic methods only for static assignment of source directions (the user may override the assignments manually, but this can be quite tedious in real-world use cases).

In dynamic conferences, members may join or leave at any time, which may require a dynamic reassignment of source directions for the speakers. These reassignments also need to be indicated to the group members in order to avoid confusion.

3 Our Enhanced Solution

This section presents methods for dynamically reassigning audio directions in a group call as well as mechanisms for indicating such reassignments to members of that group call.

[Figure 1a, 1b, 1c: arrangements of directional-audio sources a, b, and c around the User, as described in Section 3.1.]


3.1 Dynamic Assignment

Within an environment providing directional-audio, it is possible to dynamically assign the direction for each source. Any new source requires an assigned direction when it joins the environment. Further, as sources are added or removed, the direction of some or all other sources can also be modified to better delineate each. For example, consider a listener with two established directional audio sources (diagram 1a) that dynamically adds a third source. This new source could simply be placed between the two others (diagram 1b), thereby providing a unique direction for each within the environment. However, the dynamic reassignment of the original two audio sources provides a more even distribution of all sources within the preferred range ...
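
A minimal sketch of such a dynamic assignment is shown below. It assumes the preferred range is a frontal arc (here -90 to +90 degrees, a value chosen purely for illustration) and simply respreads the azimuths evenly whenever the membership changes; the function name and the ordering of members are illustrative choices, not part of the disclosure.

    def assign_directions(member_ids, span_deg=180.0):
        """Evenly spread source azimuths over a preferred frontal arc.

        Recomputed whenever a member joins or leaves, so every source keeps
        a unique, well-separated direction (cf. diagram 1c)."""
        members = sorted(member_ids)
        n = len(members)
        if n == 0:
            return {}
        if n == 1:
            return {members[0]: 0.0}   # a single source is placed straight ahead
        step = span_deg / (n - 1)
        start = -span_deg / 2.0
        return {m: start + i * step for i, m in enumerate(members)}

    # Two established sources, then a third joins and all directions are respread.
    print(assign_directions(["a", "b"]))       # {'a': -90.0, 'b': 90.0}
    print(assign_directions(["a", "b", "c"]))  # {'a': -90.0, 'b': 0.0, 'c': 90.0}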