
Latent Principle and Single Pass Algorithm for Network Analysis

IP.com Disclosure Number: IPCOM000086445D
Original Publication Date: 1976-Sep-01
Included in the Prior Art Database: 2005-Mar-03
Document File: 3 page(s) / 25K

Publishing Venue

IBM

Related People

Hsieh, HY: AUTHOR [+3]

Abstract

Introduction. Two algorithms are presented for improving the computational efficiency of today's computer-aided circuit analysis programs.

This is the abbreviated version, containing approximately 54% of the total text.



1. Introduction.

Two algorithms are presented for improving the computational efficiency of today's computer-aided circuit analysis programs. Consider a typical network matrix equation:

[J(i)] x(i) = b(i)   (1)

where [J(i)] is the Jacobian matrix of a network N(i), x(i) is the vector of internal variables, and S(i) is the stimulus, S(i) ∈ b(i).
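As a concrete illustration, equation (1) at a single time step amounts to solving a linear system for the internal variables. The following is a minimal sketch with a hypothetical 2x2 Jacobian and stimulus vector; the numerical values are illustrative placeholders, not data from the disclosure:

```python
import numpy as np

# Hypothetical instance of equation (1), [J(i)] x(i) = b(i), at one
# time step; the matrix and right-hand side are illustrative values.
J = np.array([[4.0, -1.0],
              [-1.0, 3.0]])   # Jacobian matrix of network N(i)
b = np.array([10.0, 5.0])     # right-hand side containing stimulus S(i)

x = np.linalg.solve(J, b)     # internal variables x(i) at this time step
print(x)
```

In a transient analysis this solve (and, for a nonlinear network, the update of J(i) that precedes it) is repeated at every time step, which is exactly the cost the two algorithms aim to cut.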

For every time step Δt of the solution process, the less updating of J(i) and the less computing of x(i), the better. The following two sections evaluate how this can be accomplished. Section 2 discusses the latent principle and how it can be used to reduce the computation time during network analysis. Section 3 proposes the single-pass algorithm for obtaining the partial derivatives contained in [J(i)] more efficiently.

2. The Latent Principle.

This approach applies basically to a nonlinear network incorporated as a nonlinear macromodel in a circuit analysis program. The latent principle suggests a temporary cessation of network activity, detected from the stimulation and the previous history of the internal variables of the nonlinear network. Symbolically, we can have

(Image Omitted)

where ε(j), j = 1, 2, are predetermined error tolerances, and k > 1, l < k.

If a network N(i) satisfies equations (2)-(4), the network N(i) is said to be latent. Consequently, the internal variables x(i)(t(n + j)) can be obtained as:

x(i)(t(n + j)) = x(i)(t(n - k)),  j = 0, 1, 2   (5)

if |S(i)(t(n + j)) - S(i)(t(n - k))| ≤ ε(1). In this way, the following are avoided: 1. updating of J(i); 2. iteration or computation of x(i).
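The latency test of equation (5) can be sketched as follows. The class name, the toy stand-in for the full solve, and the tolerance value are all illustrative assumptions, not taken from the disclosure; the point is only the control flow, reusing the stored x(i) when the stimulus has not moved by more than the tolerance:

```python
def full_solve(s):
    # Stand-in for the expensive step: updating J(i) and solving
    # equation (1) for x(i). The formula here is purely illustrative.
    return 2.0 * s + 1.0

class LatentNetwork:
    """Wraps one subnetwork N(i) with the latency check of equation (5)."""

    def __init__(self, eps1):
        self.eps1 = eps1      # tolerance on the stimulus change
        self.last_s = None    # stimulus at the last full solve, S(i)(t(n-k))
        self.last_x = None    # stored solution x(i)(t(n-k))
        self.solves = 0       # count of full solves actually performed

    def step(self, s):
        # Latent case: stimulus within tolerance of the last solved value,
        # so reuse the stored internal variables and skip the solve.
        if self.last_x is not None and abs(s - self.last_s) <= self.eps1:
            return self.last_x
        # Active case: perform the full Jacobian update and solve.
        self.solves += 1
        self.last_s, self.last_x = s, full_solve(s)
        return self.last_x

net = LatentNetwork(eps1=0.01)
results = [net.step(s) for s in [1.0, 1.005, 1.002, 2.0]]
```

With this stimulus sequence only the first and last steps trigger a full solve; the two middle steps fall within the tolerance and reuse the stored solution, which is the saving the latent principle describes.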

It should be noted that not all the S(i)(t) and x(i)(t) in equations (2)-(4) have to be monitored in order to keep the network N(i) latent. In fact, only the stimulus S(i)(t) must be checked, and the number of stimuli S(i)(t) is usually quite small.

It is clear that the probability of a network N(i) being latent is far greater when the total network N is larger, N(i) ∈ N. If each network N(i) is treated as a macromodel, a great saving in computation is thus achieved.

3. Single-Pass Algorithm.

The partial derivatives for the Jacobian [J] are computed from the central differencing scheme:

∂f/∂x = [f(x + Δx) - f(x - Δx)] / (2Δx)
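The central-difference scheme the section introduces can be sketched in a few lines; the cubic test function and the step size are illustrative choices, not from the disclosure:

```python
def central_diff(f, x, dx=1e-5):
    # Central-difference approximation of a partial derivative, as used
    # to fill one entry of the Jacobian [J]:
    #   df/dx ~ (f(x + dx) - f(x - dx)) / (2 dx)
    return (f(x + dx) - f(x - dx)) / (2.0 * dx)

# Example: the derivative of x**3 at x = 2 is 12.
slope = central_diff(lambda x: x ** 3, 2.0)
```

Note that each such entry costs two function evaluations per perturbed variable; the single-pass algorithm's aim, per the introduction, is to obtain these partial derivatives more efficiently.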