Constrained Channel Coding Using a Reduced Number of Arithmetic String Coding Steps Per Cycle

IP.com Disclosure Number: IPCOM000048247D
Original Publication Date: 1982-Jan-01
Included in the Prior Art Database: 2005-Feb-08
Document File: 2 page(s) / 14K

Publishing Venue

IBM

Related People

Langdon, GG: AUTHOR

Abstract

This invention relates to the expansion of bit strings from a Markov binary source into longer strings characteristic of a (d,k) constrained channel code, using an arithmetic string decoder with a selected shift-and-add per-event decoding operation. A (d,k) code is described by a (k+1)-state finite state machine (FSM), with states 0,1,...,k. Binary alternatives (flux reversal or no flux reversal) occur at states d,d+1,...,k-1; in these states channel information is conveyed. Corresponding to each of these states is a Shannon probability for each alternative which maximizes the information-carrying capacity of the constrained channel. A binary arithmetic code can be used to approximate probabilities in this range.


Constrained Channel Coding Using a Reduced Number of Arithmetic String Coding Steps Per Cycle

This invention relates to the expansion of bit strings from a Markov binary source into longer strings characteristic of a (d,k) constrained channel code, using an arithmetic string decoder with a selected shift-and-add per-event decoding operation. A (d,k) code is described by a (k+1)-state finite state machine (FSM), with states 0,1,...,k. Binary alternatives (flux reversal or no flux reversal) occur at states d,d+1,...,k-1; in these states channel information is conveyed. Corresponding to each of these states is a Shannon probability for each alternative which maximizes the information-carrying capacity of the constrained channel. A binary arithmetic code can be used to approximate probabilities in this range. The code is "complete" in the sense that the subdivision operation on the available code space uses that space entirely; that is, the binary alternatives partition the interval (0,1), which is the code space for arithmetic codes.
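As an illustration of this state machine and its capacity-achieving probabilities, the following is a minimal sketch (assuming NumPy, the usual run-length state convention, and the standard maxentropic-probability construction; the function name is illustrative and not from the disclosure) that builds the (k+1)-state FSM and derives the Shannon probability of each alternative from the dominant eigenvector of its adjacency matrix:

import numpy as np

def dk_shannon_probabilities(d, k):
    # States 0..k count the symbols emitted since the last flux reversal.
    # States 0..d-1 force "no flux reversal"; states d..k-1 offer the binary
    # alternative; state k forces a flux reversal.
    n = k + 1
    A = np.zeros((n, n))
    for s in range(n):
        if s < k:
            A[s, s + 1] = 1.0        # no flux reversal: extend the run
        if s >= d:
            A[s, 0] = 1.0            # flux reversal: return to state 0
    eigvals, eigvecs = np.linalg.eig(A)
    i = int(np.argmax(eigvals.real))
    lam = eigvals[i].real            # largest eigenvalue; capacity = log2(lam)
    v = np.abs(eigvecs[:, i].real)   # corresponding right eigenvector
    probs = {}
    for s in range(d, k):            # states carrying channel information
        p_no_reversal = v[s + 1] / (lam * v[s])
        p_reversal = v[0] / (lam * v[s])
        probs[s] = (p_no_reversal, p_reversal)
    return np.log2(lam), probs

capacity, probs = dk_shannon_probabilities(d=2, k=7)   # e.g. a (2,7) constraint

For each state d through k-1 the two probabilities sum to 1, and driving the FSM with them attains the capacity log2 of the largest eigenvalue; these are the probabilities the binary arithmetic code must approximate.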

The interval assigned to channel string s is (C(s), C(s)+A(s)). For a complete binary arithmetic code, we require

    A(s) = A(s.0) + A(s.1)                    (1)

where "s" is the prefix of the channel string encoded so far, A is the code space size, and s.0 is the channel string obtained from the already encoded string s followed by 0 as the next symbol. The code string itself is generated by

    C(s.0) = C(s)                             (2a)
    C(s.1) = C(s) + A(s.0)                    (2b)

where "0" is the less probable symbol.
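A minimal sketch of recursions (1), (2a) and (2b), using exact rational arithmetic and a fixed probability p0 for readability (the function name is illustrative; in the disclosure the split of A(s) depends on the FSM state and is realized with shifts rather than multiplications):

from fractions import Fraction

def encode_string(bits, p0):
    # Interval recursion of equations (1), (2a), (2b).
    C, A = Fraction(0), Fraction(1)      # code point and available code space
    for b in bits:
        A0 = A * p0                      # A(s.0): the share allotted to "0"
        if b == 0:
            A = A0                       # C(s.0) = C(s), keep A(s.0)
        else:
            C, A = C + A0, A - A0        # C(s.1) = C(s) + A(s.0), keep A(s.1)
    return C, A                          # any point of [C, C + A) identifies the string

C, A = encode_string([1, 0, 1, 1, 0], Fraction(1, 3))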

From equation (1) it follows that to encode a symbol (bit) following the encoding of string s, it is necessary to split A(s) into two parts. Part A(s.0) corresponds to probability p(0), and should be in the same proportion. If p(0) is between 0.25 and 0.5, it is desired to "split off" from A(s) a part lying between 0.25 A(s) and 0.5 A(s).

The following portions of A(s) can be obtained by shifting K bits right:

    Shift K     Portion of A(s)
       1            .5
       2            .25
       3            .125
       4            .0625
       5            .03125
       6            .015625

Encoding

Recursion variables A and C are stored in registers. The splitting of A(s) into portions A(s.0) according to the above table can be done by shifting and adding or subtracting. These portions of A(s) can be obtained through a two...
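A minimal register-level sketch of one such encoding step, assuming W-bit registers and a single right shift as the approximation of p(0) (the names, the fixed width, and the omission of renormalization and carry handling are simplifications for illustration, not part of the disclosure):

W = 16                                   # register width (illustrative)
MASK = (1 << W) - 1

def encode_event(C, A, K, bit):
    # One shift-and-add encoding step: A(s.0) is formed by shifting A right
    # by K bits, i.e. A(s.0) is approximately 2**-K * A(s), per the table above.
    A0 = A >> K
    if bit == 0:
        A = A0                           # C(s.0) = C(s), keep A(s.0)
    else:
        C = (C + A0) & MASK              # C(s.1) = C(s) + A(s.0)
        A = A - A0                       # A(s.1) = A(s) - A(s.0), by subtraction
    return C, A

A complete coder would also renormalize (left-shift A and C when A becomes small) and propagate carries into already emitted code bits; those steps are omitted from this sketch.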