Neural Networks Custom Teach Environment

IP.com Disclosure Number: IPCOM000103303D
Original Publication Date: 1990-Sep-01
Included in the Prior Art Database: 2005-Mar-17
Document File: 1 page(s) / 52K

Publishing Venue

IBM

Related People

Bigus, JP: AUTHOR

Abstract

Artificial neural networks are massively parallel computing models. In order to model this massive parallelism, neural networks are simulated by software on conventional hardware. In this simulated environment, the processors and learning algorithms of the neural network are coded into programs, and the adaptive connections are represented by data structures.

This text was extracted from an ASCII text file.
This is the abbreviated version, containing approximately 73% of the total text.

      Artificial neural networks are massively parallel computing
models. In order to model this massive parallelism, neural networks
are simulated by software on conventional hardware. In this simulated
environment, the processors and learning algorithms of the neural
network are coded into programs, and the adaptive connections are
represented by data structures.

      It is desirable to be able to create and train a neural network
on one computer system and deliver applications on another system of
the same or different architecture. Disclosed is a method for
providing portability of trained and untrained neural networks
between similar or dissimilar computer systems.

      A transportable Network Definition File has been created which
allows a complete neural network to be transferred between systems.
This file is a specially formatted text file which is readable by all
supported systems.  This common network definition allows a network
to be trained on a development system (workstation) and transferred
to another (delivery) system for actual application use.
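      The disclosure does not give the file layout. As an
illustration only, a line-oriented text definition of this kind might
look like the sketch below; every section name and field here is
hypothetical, not the actual IBM format:

      NETWORK     demo-net
      TOPOLOGY    feed-forward
      LAYERS      3
      LAYER 1     units=4  activation=linear
      LAYER 2     units=3  activation=sigmoid
      LAYER 3     units=1  activation=sigmoid
      PARAMETERS  learn-rate=0.25  momentum=0.9
      WEIGHTS 1-2
       0.132 -0.441  0.078  0.513

      Because every value is written as ordinary text, any system
with a text reader can parse the file, regardless of its native
binary formats.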

      The network definition files are created and read by code which
is sensitive to the structure of the files. Corresponding sets of
load and save routines on each development platform enable the use of
a common text file to completely describe a neural network's
architecture and state (including connection weights and training
parameter values).  Hav...
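      The corresponding load/save pairing described above can be
sketched as follows. This is a minimal modern illustration in Python,
not the original implementation; the line-oriented format and its
field names (LAYERS, PARAM, WEIGHTS) are assumptions for the sketch.

```python
# Sketch of matching save/load routines that round-trip a neural
# network's architecture and state through a common text file.
# The format and field names are hypothetical, not IBM's layout.

def save_network(path, layers, weights, params):
    """Write layer sizes, training parameters, and connection weights."""
    with open(path, "w") as f:
        f.write("LAYERS " + " ".join(str(n) for n in layers) + "\n")
        for key, value in sorted(params.items()):
            f.write(f"PARAM {key} {value}\n")
        for w in weights:  # one flat list of weights per layer pair
            f.write("WEIGHTS " + " ".join(repr(x) for x in w) + "\n")

def load_network(path):
    """Rebuild the network description from the text file."""
    layers, params, weights = [], {}, []
    with open(path) as f:
        for line in f:
            tag, _, rest = line.strip().partition(" ")
            if tag == "LAYERS":
                layers = [int(n) for n in rest.split()]
            elif tag == "PARAM":
                key, value = rest.split()
                params[key] = float(value)
            elif tag == "WEIGHTS":
                weights.append([float(x) for x in rest.split()])
    return layers, weights, params
```

      Because both routines read and write plain text, the same file
produced on the development workstation can be consumed unchanged by
a structurally identical loader on a different delivery system.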