History

Specifically, iterative decoding algorithms on factor graphs [24] have become a subject of much active research in recent years [6,11,33,40,41,50,65,66].

As an example, the well-known turbo codes and turbo decoding methods [6,11] constitute a special case of this general approach to the decoding problem. Factor-graph representations for turbo codes were introduced in [65,66], where it is also shown that turbo decoding is an instance of a general decoding procedure, known as the sum-product algorithm. Another extensively studied [22,62] special case is trellis decoding of block and convolutional codes. It is shown in [33,66] that the Viterbi algorithm on a trellis is an instance of the min-sum iterative decoding procedure, when applied to a simple factor graph. The forward-backward algorithm on a trellis, due to Bahl, Cocke, Jelinek, and Raviv [3], is again a special case of the sum-product decoding algorithm. More general iterative algorithms on factor graphs, collectively termed the ``generalized distributive law'' or GDL, were studied by Aji and McEliece [1,2]. These algorithms encompass maximum-likelihood decoding, belief propagation in Bayesian networks [23,47], and fast Fourier transforms as special cases.

It is proved in [1,55,66] that the min-sum, the sum-product, the GDL, and other versions of iterative decoding on factor graphs all converge to the optimal solution if the underlying factor graph is cycle-free. If the underlying factor graph has cycles, very little is known regarding the convergence of iterative decoding methods.

This work is concerned with an important special type of factor graphs, known as Tanner graphs. The subject dates back to the work of Gallager [25] on low-density parity-check codes in 1962. Some 20 years later, Tanner [55] extended the approach of Gallager [25,26] to codes defined by general bipartite graphs, with the two types of vertices representing code symbols and checks (or constraints), respectively. He also introduced the min-sum and the sum-product algorithms, and proved that they converge on cycle-free graphs. More recently, codes defined on sparse (regular) Tanner graphs were studied by Spielman [50,53], who showed that such codes become asymptotically good if the underlying Tanner graph is a sufficiently strong expander. These codes were studied in a different context by MacKay and Neal [40,41], who demonstrated by extensive experimentation that iterative decoding on Tanner graphs can approach channel capacity to within about 1 dB. The latest variants [39] of these codes come to within about 0.3 dB of capacity, and outperform turbo codes.

In general, a Tanner graph for a code $\ensuremath{\mathbb{C}}$ of length $n$ over an alphabet $A$ is a pair $(\mathcal{G},\mathcal{L})$, where $\mathcal{G} = (V,E)$ is a bipartite graph and $\mathcal{L} = \{\mathcal{C}_1,\mathcal{C}_2,\ldots,\mathcal{C}_r\}$ is a set of codes over $A$, called behaviors or constraints. We denote the two vertex classes of $\mathcal{G}$ by $\mathcal{X}$ and $\mathcal{Y}$, so that $V = \mathcal{X} \cup \mathcal{Y}$. The vertices of $\mathcal{X}$ are called symbol vertices and $|\mathcal{X}| = n$, while the vertices of $\mathcal{Y}$ are called check vertices and $|\mathcal{Y}| = r$. There is a one-to-one correspondence between the constraints $\mathcal{C}_1,\mathcal{C}_2,\ldots,\mathcal{C}_r$ in $\mathcal{L}$ and the check vertices $y_1,y_2,\ldots,y_r$ in $\mathcal{Y}$, so that the length of the code $\mathcal{C}_i \in \mathcal{L}$ is equal to the degree of the vertex $y_i \in \mathcal{Y}$, for all $i = 1,2,\ldots,r$. A configuration is an assignment of a value from $A$ to each symbol vertex $x_1,x_2,\ldots,x_n$ in $\mathcal{X}$. Thus a configuration may be thought of as a vector of length $n$ over $A$. Given a configuration $\chi = (\chi_1,\chi_2,\ldots,\chi_n)$ and a vertex $y \in \mathcal{Y}$ of degree $\delta$, we define the projection $\chi_y$ of $\chi$ on $y$ as a vector of length $\delta$ over $A$ obtained from $\chi$ by retaining only those values that correspond to the symbol vertices adjacent to $y$. Specifically, if $\{x_{i_1},x_{i_2},\ldots,x_{i_\delta}\} \subseteq \mathcal{X}$ is the neighborhood of $y$ in $\mathcal{G}$, then $\chi_y = (\chi_{i_1},\chi_{i_2},\ldots,\chi_{i_\delta})$. A configuration $\chi$ is said to be valid if all the constraints are satisfied, namely if $\chi_{y_i} \in \mathcal{C}_i$ for all $i = 1,2,\ldots,r$. The code $\ensuremath{\mathbb{C}}$ represented by the Tanner graph $(\mathcal{G},\mathcal{L})$ is then the set of all valid configurations.
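To make these definitions concrete, here is a minimal sketch in Python (all names and the toy constraint are illustrative, not part of the original development): a Tanner graph is stored as the neighborhoods of its check vertices together with one constraint code per check, and a configuration is valid precisely when each of its projections lies in the corresponding constraint.

\begin{verbatim}
def projection(chi, neighborhood):
    """Project configuration chi onto a check vertex whose symbol
    neighbors are listed (by index) in `neighborhood`."""
    return tuple(chi[i] for i in neighborhood)

def is_valid(chi, checks):
    """`checks` is a list of pairs (neighborhood, constraint_code);
    a configuration is valid iff every projection lies in its code."""
    return all(projection(chi, nbhd) in code for nbhd, code in checks)

# Toy example: one check vertex adjacent to all three symbol vertices,
# with the binary (3,2) single-parity-check code as its constraint.
spc3 = {(a, b, (a + b) % 2) for a in (0, 1) for b in (0, 1)}
checks = [((0, 1, 2), spc3)]
print(is_valid((1, 1, 0), checks))   # True:  1 + 1 + 0 = 0 (mod 2)
print(is_valid((1, 0, 0), checks))   # False: odd parity
\end{verbatim}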

While the foregoing definition of Tanner graphs is quite general, the theory and practice of the subject [17,39,40,41,50,55] are focused almost exclusively on the simple special case where all the constraints are single-parity-check codes over $\mathbb{F}_2$. This work is no exception, although we will provide for the representation of linear codes over arbitrary fields by considering the zero-sum codes over $\mathbb{F}_q$ rather than the binary single-parity-check codes. It seems appropriate to call the corresponding Tanner graphs simple. Notice that in the case of simple Tanner graphs, the set of constraints $\mathcal{L}$ is implied by definition, so that one can identify a simple Tanner graph with the underlying bipartite graph $\mathcal{G}$. All of the Tanner graphs considered in this chapter, except in Section 5.3, are simple. Thus, for the sake of brevity, we will henceforth omit the quantifier ``simple.'' Instead, when we consider the general case in Section 5.3, we will use the term general Tanner graphs.
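For illustration, the constraint attached to every check vertex of a simple Tanner graph is the zero-sum code over $\mathbb{F}_q$: the set of all vectors whose entries sum to zero in the field. A short sketch, assuming a prime field so that field addition is just addition modulo $q$ (the function name is illustrative):

\begin{verbatim}
def in_zero_sum_code(vector, q):
    """Membership test for the zero-sum code over F_q, q prime."""
    return sum(vector) % q == 0

print(in_zero_sum_code((1, 1), 2))      # True:  the binary parity check
print(in_zero_sum_code((2, 4, 1), 7))   # True:  2 + 4 + 1 = 0 in F_7
print(in_zero_sum_code((2, 4, 2), 7))   # False: 2 + 4 + 2 = 1 in F_7
\end{verbatim}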

We can think of a (simple) Tanner graph for a binary linear code $\ensuremath{\mathbb{C}}$ of length $n$ as follows. Let $H$ be an $r \times n$ parity-check matrix for $\ensuremath{\mathbb{C}}$. Then the corresponding Tanner graph for $\ensuremath{\mathbb{C}}$ is simply the bipartite graph having $H$ as its $\mathcal{X},\mathcal{Y}$ adjacency matrix. It follows that the number of edges in any Tanner graph for a linear code $\ensuremath{\mathbb{C}}$ of length $n$ is $O(n^2)$. Thus, if we can represent $\ensuremath{\mathbb{C}}$ by a Tanner graph without cycles, then maximum-likelihood decoding of $\ensuremath{\mathbb{C}}$ can be achieved in time $O(n^2)$, using the min-sum algorithm, for instance.
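This correspondence is easy to spell out in code. The following sketch (a standard parity-check matrix of the [7,4,3] Hamming code is used only as an example) builds one edge per nonzero entry of $H$ and runs a union-find pass to test whether the resulting Tanner graph is cycle-free: an edge that joins two already-connected vertices closes a cycle.

\begin{verbatim}
def tanner_is_cycle_free(H):
    """H is an r x n 0/1 parity-check matrix; check vertices are rows,
    symbol vertices are columns, edges are the nonzero entries of H."""
    r, n = len(H), len(H[0])
    parent = list(range(r + n))       # checks: 0..r-1, symbols: r..r+n-1

    def find(v):                      # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    for i, row in enumerate(H):
        for j, h in enumerate(row):
            if h:
                a, b = find(i), find(r + j)
                if a == b:            # this edge closes a cycle
                    return False
                parent[a] = b
    return True

H_path = [[1, 1, 0],                  # repetition code {000, 111}:
          [0, 1, 1]]                  # its Tanner graph is a path (a tree)
H_hamming = [[1, 0, 1, 0, 1, 0, 1],   # [7,4,3] Hamming code:
             [0, 1, 1, 0, 0, 1, 1],   # 12 edges on 10 vertices,
             [0, 0, 0, 1, 1, 1, 1]]   # so the graph must contain cycles
print(tanner_is_cycle_free(H_path))     # True
print(tanner_is_cycle_free(H_hamming))  # False
\end{verbatim}

Note that the number of edges equals the number of nonzero entries of $H$, which is at most $rn = O(n^2)$, in agreement with the bound above.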

However, both intuition and experimentation (cf. [40]) suggest that powerful codes cannot be represented by cycle-free Tanner graphs. The notion that cycle-free Tanner graphs can support only weak codes is, by now, widely accepted. Our goal in this chapter is to make this ``folk knowledge'' more precise. We provide rigorous answers to the question: Which codes can have cycle-free Tanner graphs?

Our results in this regard are two-fold: we derive a characterization of the structure of such codes and an upper bound on their minimum distance. The upper bound (Theorem 3.6) shows that codes with cycle-free Tanner graphs provide an extremely poor trade-off between rate and minimum distance at any fixed length. This indicates that at very high signal-to-noise ratios these codes will perform poorly. In general, however, the minimum distance of a code does not necessarily determine its performance at signal-to-noise ratios of practical interest. Indeed, there exist codes (for example, the turbo codes of [6,11]) that have low minimum distance and yet perform very well at low signal-to-noise ratios. The development of analytic bounds on the performance of codes with cycle-free Tanner graphs under iterative decoding is a challenging problem that is beyond the scope of this work. Nevertheless, our results on the structure of the corresponding codes indicate that they are very likely to be weak: their parity-check matrix is much too sparse to allow for reasonable performance even at low signal-to-noise ratios.

The rest of this chapter is organized as follows. We start with some definitions and auxiliary observations in the next section. In Section 3, we show that if an $(n,k,d)$ linear code $\ensuremath{\mathbb{C}}$ can be represented by a cycle-free Tanner graph and has rate $R = k/n \ge 0.5$, then $d \le 2$. We furthermore prove that if $R < 0.5$, then $\ensuremath{\mathbb{C}}$ is necessarily obtained from a code of rate $\ge 0.5$ and minimum distance $\le 2$ by simply repeating certain symbols in each codeword. Theorem 3.6 of Section 3.4 constitutes our main result: this theorem gives an upper bound on the minimum distance of a general linear code that can be represented by a cycle-free Tanner graph. Furthermore, the bound of Theorem 3.6 is exact. This is proved in Section 3.5 by means of an explicit construction of a family of $(n,k,d)$ linear codes that attain the bound of Theorem 3.6 for all values of $n$ and $k$. Asymptotically, for $n \to \infty$, the upper bound takes the form:

$$ d \;\lesssim\; 2 \left\lfloor 1/R \right\rfloor \qquad\qquad (3.1) $$

and an immediate consequence of (3.1) is that asymptotically good codes with cycle-free Tanner graphs do not exist (a brief worked instance of the bound is given below). We show in Section 3.5 that the same is true for Tanner graphs with cycles, unless the number of cycles increases exponentially with the length of the code. We also show in Section 3.5 that for every binary code $\ensuremath{\mathbb{C}}$ that can be represented by a cycle-free Tanner graph, there exists a graph $\mathcal{G}$ such that $\ensuremath{\mathbb{C}}$ is the dual of the cycle code of $\mathcal{G}$. This establishes an interesting connection between codes with cycle-free Tanner graphs and the well-known class of graph-theoretic cut-set codes [10,28,29,48,52]. Finally, we conclude this chapter with a partial analysis of general Tanner graphs.
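As a brief worked instance of the bound (3.1), with the rate values chosen only for illustration:

$$ R = \tfrac{1}{2}: \quad d \;\lesssim\; 2 \lfloor 2 \rfloor = 4, \qquad\qquad R = \tfrac{1}{4}: \quad d \;\lesssim\; 2 \lfloor 4 \rfloor = 8. $$

In either case the minimum distance stays bounded by a constant as $n \to \infty$, which is exactly the sense in which asymptotically good codes with cycle-free Tanner graphs cannot exist.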
