Use of Screen-Based Navigation to control Server-Based Voice Browsers from a telephony device
Original Publication Date: 2001-Mar-01
Included in the Prior Art Database: 2003-Jun-19
Disclosed is a system for using a screen-based navigation graphical user interface (GUI) to control server-based voice browsers from a telephony device. A voice browser on a server generates aural content and transmits it to a client pervasive device, where the user navigates the content with a stylus-driven GUI. The stylus navigation simulates voice and dual tone multi frequency (DTMF) commands, voice dictation, or more complex navigation.

The system addresses a problem with today's web access from telephony devices. Navigating web content on these devices requires voice recognition, DTMF (i.e., touch-tone based) navigation, or a Wireless Markup Language (WML) user interface. With voice recognition, the user is limited in the commands that can be issued because of noise interference and inconsistencies in speech; for example, "yes", "no", or the digits one through ten may be the extent of what can be recognized accurately. DTMF requires users to memorize key navigation commands, or the server must explicitly prompt the user to press a specific key to generate the desired response. WML interfaces are limited to small screen sizes and are therefore used only for abbreviated or clipped web content such as weather reports and stock quotes.
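The core idea above, translating stylus taps on a GUI into the DTMF and voice commands a server-based voice browser expects, can be illustrated with a minimal sketch. All names here (`GuiNavigator`, `DTMF_MAP`, the message format) are illustrative assumptions, not details from the original disclosure; the `send` callable stands in for whatever transport the pervasive device would use to reach the server.

```python
# Hypothetical sketch of client-side command translation. The disclosure
# does not specify names or wire formats; everything below is assumed.

# Map on-screen navigation buttons to the DTMF tones they simulate.
DTMF_MAP = {
    "next": "1",
    "previous": "2",
    "repeat": "3",
    "main_menu": "0",
}

class GuiNavigator:
    """Translates stylus taps into commands for a remote voice browser."""

    def __init__(self, send):
        # `send` stands in for the network call to the voice-browser server.
        self.send = send

    def tap(self, button):
        """Simulate the DTMF tone associated with a GUI button."""
        tone = DTMF_MAP[button]
        self.send({"type": "dtmf", "tone": tone})

    def dictate(self, text):
        """Forward free-form dictation instead of a single tone."""
        self.send({"type": "voice", "utterance": text})

# Example: collect the messages a server would receive.
sent = []
nav = GuiNavigator(sent.append)
nav.tap("next")
nav.dictate("weather in Boston")
```

This keeps the server-side voice browser unchanged: from its point of view it still receives DTMF and voice input, while the client replaces memorized key sequences and length-limited speech with direct on-screen selection.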