
A Text-Oriented Data Input Model for Self Constructing Graphical User Interfaces

IP.com Disclosure Number: IPCOM000109318D
Original Publication Date: 2005-Mar-23
Included in the Prior Art Database: 2005-Mar-23

Publishing Venue

IBM

Abstract

The notion of a self-constructing user interface can blend language, pictures, and a graphical user interface into a powerful model for input into a computer application. The user would type on a traditional keyboard just as they normally would. However, rather than entering information into a simple text editor, the user would be working in a graphical interface that constructs itself on the fly according to the user's keystrokes.


Language

One of the great characteristics of any language, whether it be English, French, C++, or Java, is the incredible flexibility it offers through a very simple medium: the spoken or written word. Think of the unlimited number of ideas you can convey using just a simple text editor on a computer. With that single, static control, you can express ideas on any topic imaginable. Or think of the incredible number of different software applications one can create using a language such as C++ or Java.

Pictures

Pictures, in turn, have huge advantages over text when a sketch or chart suits the idea. An image can communicate structure and order, for example, in ways that would be difficult with text alone.

Graphical User Interfaces

Graphical user interfaces evolved as an alternative to text-based interfaces because they provide users with metaphors that facilitate interaction with the software. Buttons, pull-down menus, tabbed books, and file and document displays all anchor to real-world objects with which users are already familiar.

One of the disadvantages of current graphical user interfaces is that the control layout is predefined. The controls are arranged in some logical manner on a panel or window during the design phase, and from that point on they are essentially cemented into place. That characteristic is fine for many applications, but it becomes a disadvantage when the user is tasked with "building" something, such as a search phrase containing combinations of Boolean operators and phrases. Text is most often used for such tasks, but it is not optimal for anything beyond the simplest search phrases; it is likely used only because a simple text string is easy to implement and manipulate in software. A more graphical representation would be ideal, especially for complex expressions. But such an interface would have to be dynamic; it would need the ability to build itself.
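To make the contrast concrete, consider how a non-trivial Boolean search phrase might be represented behind such a dynamic interface. The following sketch is hypothetical (the disclosure does not prescribe a data structure or language); it models the phrase as a small expression tree in Java, the kind of structure a self-constructing interface could render and grow one control at a time:

    // Hypothetical illustration: a Boolean search phrase as an expression tree
    // rather than a flat text string.
    interface SearchNode {
        String toQueryString();
    }

    // A leaf: a quoted search phrase.
    class Term implements SearchNode {
        private final String phrase;
        Term(String phrase) { this.phrase = phrase; }
        public String toQueryString() { return "\"" + phrase + "\""; }
    }

    // An interior node: a Boolean operator joining two sub-expressions.
    class BooleanOp implements SearchNode {
        private final String operator; // "AND" or "OR"
        private final SearchNode left, right;
        BooleanOp(String operator, SearchNode left, SearchNode right) {
            this.operator = operator;
            this.left = left;
            this.right = right;
        }
        public String toQueryString() {
            return "(" + left.toQueryString() + " " + operator + " "
                       + right.toQueryString() + ")";
        }
    }

    public class SearchPhraseDemo {
        public static void main(String[] args) {
            SearchNode query = new BooleanOp("AND",
                    new BooleanOp("OR", new Term("user interface"), new Term("GUI")),
                    new Term("dynamic"));
            // Prints: (("user interface" OR "GUI") AND "dynamic")
            System.out.println(query.toQueryString());
        }
    }

Each node in such a tree could correspond to one control in the interface, which is precisely why the layout cannot be fixed at design time: the shape of the tree is not known until the user builds it.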

Self-constructing user interfaces

What if all three of these were combined into one interaction model:

- a graphical user interface
- the simplicity and flexibility of language/text input
- the power of pictorial representation

The notion of a self-constructing user interface can blend all of these into a powerful model for input into a computer application. The user would type on a traditional keyboard just as they normally would. All of the current-day conventions would be available: type in text as input, <tab> to the next field, <enter> to accept information, use the arrow keys to move the cursor, and so on. However, rather than entering information into a simple text editor, the user would be working in a graphical interface that constructs itself on the fly according to the user's keystrokes. Pressing the <tab> key might prompt the application to
a) create a single line text box or drop down list,
b) show it to the right of t...
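A minimal sketch of this behavior, assuming a Java Swing front end (the disclosure does not name a toolkit): pressing <tab> in a field appends a new text box to its right, so the interface builds itself from the user's keystrokes.

    import java.awt.FlowLayout;
    import java.awt.event.KeyAdapter;
    import java.awt.event.KeyEvent;
    import javax.swing.JFrame;
    import javax.swing.JPanel;
    import javax.swing.JTextField;
    import javax.swing.SwingUtilities;

    // Hypothetical sketch of a self-constructing panel: each <tab> keystroke
    // creates a new control and places it to the right of the current one.
    public class SelfConstructingPanel {
        private final JPanel panel = new JPanel(new FlowLayout(FlowLayout.LEFT));
        private final JFrame frame = new JFrame("Self-constructing input");

        private void addField() {
            JTextField field = new JTextField(10);
            // Disable normal focus traversal so we receive the <tab> key ourselves.
            field.setFocusTraversalKeysEnabled(false);
            field.addKeyListener(new KeyAdapter() {
                @Override
                public void keyPressed(KeyEvent e) {
                    if (e.getKeyCode() == KeyEvent.VK_TAB) {
                        addField(); // grow the interface in response to the keystroke
                    }
                }
            });
            panel.add(field);
            panel.revalidate();
            panel.repaint();
            field.requestFocusInWindow();
        }

        private void show() {
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.add(panel);
            frame.setSize(500, 100);
            frame.setVisible(true);
            addField(); // start with a single text box
        }

        public static void main(String[] args) {
            SwingUtilities.invokeLater(() -> new SelfConstructingPanel().show());
        }
    }

In a full realization, the control created in response to each keystroke would be chosen from context (a text box, a drop-down list of Boolean operators, and so on) rather than always being a plain text field.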