Introduction

Coding theory is a branch of communications concerned with the design and evaluation of efficient signaling schemes for reliable data transmission and storage. Notable applications of coding theory include telephone-line modems [37], where increasing transmission speeds introduce high levels of noise; compact-disc recorders [31], in which errors are inherent in the production process; and deep-space probes [38], in which large lag times compound the problems associated with transmission errors. All these applications rest on a vast body of basic research in coding theory; see, for instance, [5,7,42] and the more than 3000 references therein.

It was shown by Shannon [49] that it is possible to transmit information over a noisy channel with arbitrarily small probability of error at any rate below the capacity of the channel. Reliable transmission of information over noisy channels requires the use of error-correcting codes, which encode the input in such a way that errors can be detected and corrected at the receiver.
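As a standard illustration, consider the binary symmetric channel, in which each transmitted bit is flipped independently with crossover probability $p$; its capacity, in bits per channel use, is

\[ C = 1 - H(p) = 1 + p \log_2 p + (1-p) \log_2 (1-p), \]

where $H$ is the binary entropy function. A channel with $p = 0.1$, for example, admits reliable transmission at any rate below approximately $0.53$ bits per channel use.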

Thus, one of the major problems in coding theory is the design of error-correcting codes that minimize the probability of error on the one hand while maximizing the information rate on the other. Another problem of primary importance in applications is the design of efficient decoding algorithms which, given an error-corrupted sequence observed at the output of a noisy channel, quickly reconstruct the most likely transmitted codeword. Both problems are known to be inherently difficult [61]; in fact, the latter is NP-hard [5] in the general case.
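To make the decoding problem concrete, the following Python fragment is a minimal sketch of brute-force maximum-likelihood decoding over a binary symmetric channel; the function name ml_decode and the data layout are illustrative choices, and the standard (7,4) Hamming code is used only as a small running example, not as one of the codes developed in this work. The decoder enumerates all 2^k codewords and returns the one closest to the received word in Hamming distance, which makes plain why exhaustive decoding becomes intractable for general codes.

import itertools

def ml_decode(received, G):
    # Exhaustive maximum-likelihood decoding over a binary symmetric channel.
    # received: list of 0/1 symbols observed at the channel output.
    # G: generator matrix of the code, given as k rows of n bits each.
    # Enumerates all 2**k codewords and returns the one nearest to `received`
    # in Hamming distance, together with that distance.
    k, n = len(G), len(G[0])
    best, best_dist = None, n + 1
    for message in itertools.product([0, 1], repeat=k):
        # Encode the message: codeword = message * G over GF(2).
        codeword = [sum(message[i] * G[i][j] for i in range(k)) % 2
                    for j in range(n)]
        dist = sum(c != r for c, r in zip(codeword, received))
        if dist < best_dist:
            best, best_dist = codeword, dist
    return best, best_dist

# Example: the (7,4) Hamming code, which corrects any single bit error.
G_hamming = [[1, 0, 0, 0, 1, 1, 0],
             [0, 1, 0, 0, 1, 0, 1],
             [0, 0, 1, 0, 0, 1, 1],
             [0, 0, 0, 1, 1, 1, 1]]
noisy = [1, 0, 0, 0, 0, 1, 0]        # codeword 1000110 with its fifth bit flipped
print(ml_decode(noisy, G_hamming))   # -> ([1, 0, 0, 0, 1, 1, 0], 1)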

Our approach to both of these problems involves the study of decoding techniques based on certain graphs. We first develop codes which have a low probability of error and a reasonably high information rate, and can be decoded efficiently by means of a trellis. The trellis may be thought of as a constrained finite-state automaton; it was originally introduced by Forney [32] in 1967 to explain the Viterbi decoding algorithm. We then consider a generalization of trellises, known as Tanner graphs [55,65], in an attempt to further understand the efficiency of iterative decoding.
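The following Python sketch illustrates the role of the trellis in decoding; the trellis representation (a list of sections, each listing its branches), the toy two-state code, and the function name viterbi_decode are illustrative choices rather than the constructions developed in this work. The Viterbi algorithm retains, for each state at each stage, only the surviving path of smallest Hamming distance to the received sequence, so the most likely codeword is recovered in time proportional to the trellis size rather than exponential in the code dimension.

def viterbi_decode(trellis, received, start_state=0):
    # Minimum-distance path search on a trellis (the Viterbi algorithm).
    # trellis: list of sections, one per received block; each section is a list
    #          of branches (from_state, to_state, output_bits).
    # received: list of blocks (tuples of bits) observed at the channel output.
    # Returns the branch labels along the path of smallest total Hamming
    # distance to `received`, together with that distance.
    survivors = {start_state: (0, [])}       # state -> (metric, labels so far)
    for section, block in zip(trellis, received):
        next_survivors = {}
        for frm, to, label in section:
            if frm not in survivors:
                continue
            metric, path = survivors[frm]
            metric += sum(a != b for a, b in zip(label, block))  # branch metric
            if to not in next_survivors or metric < next_survivors[to][0]:
                next_survivors[to] = (metric, path + [label])
        survivors = next_survivors
    best_metric, best_path = min(survivors.values(), key=lambda item: item[0])
    return best_path, best_metric

# Toy two-state trellis for the rate-1/2 convolutional code with outputs
# (u_t, u_t XOR u_{t-1}); each branch is (from_state, to_state, output_bits).
section = [(0, 0, (0, 0)), (0, 1, (1, 1)), (1, 0, (0, 1)), (1, 1, (1, 0))]
trellis = [section] * 3                    # three identical trellis sections
received = [(1, 0), (0, 1), (0, 0)]        # codeword (1,1),(0,1),(0,0), one bit flipped
print(viterbi_decode(trellis, received))   # -> ([(1, 1), (0, 1), (0, 0)], 1)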


