It was shown by Shannon that it is possible to transmit information over a noisy channel with arbitrarily small probability of error, at any rate up to the capacity of the channel. Reliable transmission of information over noisy channels therefore requires the use of error-correcting codes, which encode the input in such a way that errors can be detected and corrected at the receiver.
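As a hedged illustration (not taken from the text), the simplest error-correcting code is the 3-fold repetition code: each bit is transmitted three times, and the receiver corrects any single error per block by majority vote, at an information rate of 1/3.

```python
# Illustrative sketch: the 3-fold repetition code.
# Each input bit is repeated three times; decoding takes a
# majority vote over each block of three received bits, which
# corrects any single channel error within a block.

def encode(bits):
    return [b for b in bits for _ in range(3)]

def decode(received):
    return [int(sum(received[i:i + 3]) >= 2)
            for i in range(0, len(received), 3)]

word = encode([1, 0, 1])   # [1, 1, 1, 0, 0, 0, 1, 1, 1]
word[4] ^= 1               # introduce one channel error
assert decode(word) == [1, 0, 1]  # the error is corrected
```

The example also makes the trade-off on the next page concrete: repeating bits lowers the error probability but costs two thirds of the information rate.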
Thus one of the major problems in coding theory is the design of error-correcting codes that minimize the probability of error on the one hand, while maximizing the information rate on the other. Another problem of primary importance in applications is the design of efficient decoding algorithms which, given an error-corrupted sequence observed at the output of a noisy channel, quickly reconstruct the most likely transmitted codeword. Both problems are known to be inherently difficult; in fact, the latter problem is NP-hard in the general case.
Our approach to both of these problems involves the study of decoding techniques based on certain graphs. We first develop codes which have a low probability of error and a reasonably high information rate, and can be decoded efficiently by means of a trellis. The trellis may be thought of as a constrained finite-state automaton; it was originally introduced by Forney in 1967 to explain the Viterbi decoding algorithm. We then consider a generalization of trellises, known as Tanner graphs [55,65], in an attempt to further understand the efficiency of iterative decoding.
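The Viterbi algorithm mentioned above can be sketched concretely. The following is an illustrative implementation, not the codes developed in this work: it decodes the standard rate-1/2, constraint-length-3 convolutional code with generator polynomials (7, 5) in octal, whose trellis has four states given by the last two input bits. At each trellis section the algorithm keeps, for every state, the lowest-Hamming-distance path reaching it (the survivor), exactly the finite-state-automaton view of the trellis described in the text.

```python
# Illustrative sketch: Viterbi decoding on the trellis of the
# rate-1/2 convolutional code with generators (7, 5) octal.
# State = contents of the two-bit shift register (s1, s2).

def conv_encode(bits):
    """Emit two output bits per input bit (generators 111 and 101)."""
    s1 = s2 = 0
    out = []
    for b in bits:
        out.append(b ^ s1 ^ s2)  # generator 7 (octal) = 111
        out.append(b ^ s2)       # generator 5 (octal) = 101
        s1, s2 = b, s1
    return out

def viterbi_decode(received):
    """Return the input sequence whose codeword is closest in
    Hamming distance to the received sequence."""
    INF = float('inf')
    metric = {(0, 0): 0}   # encoder starts in the all-zero state
    survivor = {(0, 0): []}
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metric, new_survivor = {}, {}
        for (s1, s2), m in metric.items():
            for b in (0, 1):                       # both trellis branches
                expected = [b ^ s1 ^ s2, b ^ s2]   # branch labels
                cost = m + sum(x != y for x, y in zip(expected, r))
                ns = (b, s1)                       # next state
                if cost < new_metric.get(ns, INF):
                    new_metric[ns] = cost
                    new_survivor[ns] = survivor[(s1, s2)] + [b]
        metric, survivor = new_metric, new_survivor
    best = min(metric, key=metric.get)  # best path over all final states
    return survivor[best]

msg = [1, 0, 1, 1, 0, 0]        # two tail zeros flush the encoder
word = conv_encode(msg)          # 12 channel bits
word[3] ^= 1                     # one channel error
assert viterbi_decode(word) == msg
```

Because the number of survivors equals the number of states, the work per transmitted bit is constant, which is what makes trellis decoding efficient for codes with small state complexity.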