
# Adaptation of Floating Point Division Algorithm to Fixed Point Arithmetic

IP.com Disclosure Number: IPCOM000105188D
Original Publication Date: 1993-Jun-01
Included in the Prior Art Database: 2005-Mar-19
Document File: 6 page(s) / 150K

IBM

## Related People

Markstein, P: AUTHOR [+2]

## Abstract

In a paper on division algorithms, Anderson, Earle, Goldschmidt, and Powers describe an iterative division algorithm for floating point arithmetic. Division in floating point arithmetic is not precise; that is, the value of the least significant bit is determined by the rounding mode currently in effect, so the algorithm is perfectly suited to that number system. In fixed point arithmetic the result must be precise, because an integer quotient and an integer remainder are computed by the divide instruction, and the following equation is always valid.

This text was extracted from an ASCII text file.
This is the abbreviated version, containing approximately 33% of the total text.


In a paper on division algorithms, Anderson, Earle, Goldschmidt,
and Powers describe an iterative division algorithm for floating
point arithmetic.  Division in floating point arithmetic is not
precise; that is, the value of the least significant bit is
determined by the rounding mode currently in effect, so the
algorithm is perfectly suited to that number system.  In fixed point
arithmetic the result must be precise, because an integer quotient
and an integer remainder are computed by the divide instruction, and
the following equation is always valid.

Q = A/B and A = Q*B + R

Fig. 1

Where Q is the quotient of the division and R is the remainder.

It is this requirement to generate a precise result that is the
challenge in adapting the Goldschmidt algorithm to fixed point
arithmetic.  The Goldschmidt algorithm is a converging algorithm,
that is, on every iteration more bits of the quotient become valid.  The
limit on the number of iterations for a word length of n bits is log
base 2 of n.  Another property of the algorithm is that it converges
from below toward the solution if there are no rounding errors in the
intermediate number system used to perform the iterations.  In order
to make the algorithm work in fixed point arithmetic, the dividend
and divisor must be normalized numbers of the form:

1/2 <= A < 1

1/2 <= B < 1

Fig. 2

This is not a problem for floating point numbers since they are
always represented as normalized numbers in the hardware.

With these requirements, the adaptation to fixed point arithmetic
required solving four major problems.  The first problem was choosing
a word length for the intermediate number system to assure that the
algorithm converged to within 1 of the final result.  The second
problem was detecting when the algorithm had converged from below and
fixing up the result.  The third problem was detecting when the
algorithm had converged from above and fixing up the result.  The
fourth problem was dealing with the fixed point most negative number
(0x80000000), which has no positive number counterpart and thus
cannot be made to obey the equations in Fig. 2.

The solution to the first problem was arrived at by computer
simulation of the algorithm where a variable number of bits could be
assigned to the intermediate number system to perform the iterations.
The results of this simulation showed that 36 bits would be required
to converge the algorithm to within 1 of the final result.  A
mathematical proof of this choice is presented next.

This is an analysis of the adaptation of the Goldschmidt division
process as ap...