Canonical Time Series Modeling by Neural Network Functions

IP.com Disclosure Number: IPCOM000101021D
Original Publication Date: 1990-Jun-01
Included in the Prior Art Database: 2005-Mar-16
Document File: 2 page(s) / 52K

Publishing Venue

IBM

Related People

Nadas, A: AUTHOR [+3]

Abstract

A (statistical) time series model is constructed based on binary representation of the data. The model utilizes (deterministic) multilayer perceptrons in an essential way. Let Y = (Y_t, t = 0, 1, ...) be a stationary Markov time series. Conventional parametric models are exemplified by the AR, ARMA, and ARIMA models used in LPC and similar models of the speech signal. As the name implies, these use linear models for the conditional expectation of the value at the next instant given the past. We construct below a family of nonlinear models using multilayer perceptrons.

This text was extracted from an ASCII text file.
This is the abbreviated version, containing approximately 71% of the total text.

Canonical Time Series Modeling by Neural Network Functions

       A (statistical) time series model is constructed based on
binary representation of the data.  The model utilizes
(deterministic) multilayer perceptrons in an essential way. Let
Y = (Y_t, t = 0, 1, ...) be a stationary Markov time series. Conventional
parametric models are exemplified by AR, ARMA, ARIMA models used in
LPC and similar models of the speech signal.  As the name implies,
these use linear models for the conditional expectation of the value
at the next instant given the past.  We construct below a family of
nonlinear models using multilayer perceptrons.

      CONSTRUCTION.  We assume that the time series takes coordinate
values in a finite alphabet, that these values have been transformed
into fixed-length bitstrings in some canonical way, and that these
bitstrings, in turn, have been concatenated to form Y.  Thus, Y is a
time series of random bits.  Conditionally on the k initial bits (k
being the order of the chain), the joint probability of the series
has the form

     P(Y_k, Y_(k+1), ... | Y_0, ..., Y_(k-1))
          = Product over t >= k of  P(Y_t | Y_(t-k), ..., Y_(t-1)).

This can be written as

     Product over t >= k of  p_t^(Y_t) * (1 - p_t)^(1 - Y_t),

where p_t = P(Y_t = 1 | Y_(t-k), ..., Y_(t-1)) is the probability
that the bit at time t is turned on given the previous k bits.  Now
let f(. ; r) be a neural net function with parameters (weights) r,
mapping bitstrings of length k into probabilities.  Define

     p_t = f(Y_(t-k), ..., Y_(t-1); r),

where r is to be estimated by training the model.
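The construction can be sketched in code.  The disclosure does not fix a particular network architecture, so the one-hidden-layer perceptron below, its sizes (k = 3, 4 hidden units), and the random test data are illustrative assumptions; only the roles of k, the weight vector r, the net function f, and the conditional bit-likelihood come from the text.

```python
import math
import random

K = 3          # order k of the Markov chain (number of conditioning bits); assumed value
HIDDEN = 4     # hidden units in the illustrative perceptron; assumed value

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def init_params(rng):
    """Random initial weights r = (W1, b1, w2, b2) for a 1-hidden-layer net."""
    W1 = [[rng.uniform(-1, 1) for _ in range(K)] for _ in range(HIDDEN)]
    b1 = [rng.uniform(-1, 1) for _ in range(HIDDEN)]
    w2 = [rng.uniform(-1, 1) for _ in range(HIDDEN)]
    b2 = rng.uniform(-1, 1)
    return W1, b1, w2, b2

def f(bits, r):
    """Neural net function f(.; r): maps K previous bits to a probability p_t."""
    W1, b1, w2, b2 = r
    h = [sigmoid(sum(W1[j][i] * bits[i] for i in range(K)) + b1[j])
         for j in range(HIDDEN)]
    return sigmoid(sum(w2[j] * h[j] for j in range(HIDDEN)) + b2)

def log_likelihood(y, r):
    """Conditional log-likelihood of the bit series y given its K initial bits:
    sum over t >= K of  y_t*log(p_t) + (1 - y_t)*log(1 - p_t)."""
    ll = 0.0
    for t in range(K, len(y)):
        p = f(y[t - K:t], r)
        ll += y[t] * math.log(p) + (1 - y[t]) * math.log(1.0 - p)
    return ll

rng = random.Random(0)
r = init_params(rng)
y = [rng.randint(0, 1) for _ in range(32)]   # a toy bit series
print(log_likelihood(y, r))  # training would adjust r to maximize this
```

Estimating r would amount to maximizing this conditional log-likelihood over the training bit series, e.g. by gradient ascent; the sketch stops at evaluating the objective.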

      This model has two advantages.  First, specifying a single
probability specifies a complete distr...