For one, it is still not clear why lexicodes, or even
trellis-oriented G-codes, have such good code parameters. Perhaps
the closest lead is that they can be regarded as the
coding-theoretic analogs of laminated lattices over $\mathbb{R}^{n}$,
which are the best known sphere packings in most low dimensions [12,14].
Nevertheless, we suspect that the bounds on
parameters of the generalized lexicodes can be improved by a
more sophisticated count of the worst-case companion pairings.
A quick glance at the tables in the appendix shows that
the bounds in Section 2.3.1, though better
than what is known so far, are still quite weak.
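For concreteness, the greedy rule underlying the binary lexicode family can be sketched as follows (a minimal illustration only; the function name and interface here are our own, and this is not Algorithm 2.12):

```python
from itertools import product

def hamming_distance(a, b):
    """Number of coordinates in which two binary tuples differ."""
    return sum(x != y for x, y in zip(a, b))

def lexicode(n, d):
    """Greedily build the binary lexicode of length n and minimum
    distance d: scan all length-n vectors in lexicographic order and
    keep each one that is at distance >= d from every kept codeword."""
    code = []
    for v in product((0, 1), repeat=n):
        if all(hamming_distance(v, c) >= d for c in code):
            code.append(v)
    return code
```

For example, length 7 with distance 3 recovers a code of size 16, matching the [7,4] Hamming code; the exponential scan over all $2^n$ vectors is what motivates the complexity concerns discussed below.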

From the perspective of construction, one should note that the exponential time and space bound (with respect to the co-dimension) of Algorithm 2.12 may be improved with approximation techniques. The continuation of the algorithm at each step depends only on the properties of a few coset leaders, so a heuristic approach to choosing these leaders might work well. Experimental results show that the straightforward strategy of ignoring additional coset leaders after a prescribed memory limit gives reasonable results for several iterations, but thereafter performance degrades considerably as the set of known coset leaders becomes stale. Thus, a more sensible heuristic is needed.
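As a toy illustration of the memory-capping strategy just described (not Algorithm 2.12 itself; the parity-check representation and all names here are our own assumptions), one might bound the coset-leader table as follows:

```python
from itertools import combinations

def syndrome(H, v):
    """Syndrome H*v over GF(2); H is a list of rows, v a binary tuple."""
    return tuple(sum(h[i] & v[i] for i in range(len(v))) % 2 for h in H)

def capped_coset_leaders(H, n, cap):
    """Collect coset leaders by increasing weight, so the first vector
    seen for each syndrome is a minimum-weight leader.  Once `cap`
    leaders are stored, further new syndromes are silently ignored --
    the naive memory-limit strategy whose staleness is noted above."""
    leaders = {}
    for w in range(n + 1):
        for support in combinations(range(n), w):
            v = tuple(1 if i in support else 0 for i in range(n))
            s = syndrome(H, v)
            if s not in leaders and len(leaders) < cap:
                leaders[s] = v
    return leaders
```

A smarter heuristic would presumably evict or refresh stored leaders rather than freeze the table, which is exactly the open question raised in the text.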

Speeding up Algorithm 2.12 would allow for the design of lower-rate codes with good trellis decoding properties. However, it would also be interesting to derive complexity-theoretic performance measures for the decoding time of lexicodes and generalized lexicodes. At least for the trellis-oriented codes this result should follow from an analysis of Equation 2.9, which relates the decoding complexities of successive trellis-oriented codes by means of their differences in length (which correspond to their differences in covering radius). Thus, the decoding complexity of such a code may be gleaned from the covering radii of the preceding codes in its construction, though a closed form for the result of this process is still needed.

Finally, the greedy construction of the lexicode family might lend itself to faster decoding than that of general codes. However, we suspect that the G-codes are defined broadly enough to be difficult to decode; that is, this class of codes is likely to contain hard instances of the decoding problem. Proving that it does, and discerning precisely which codes are not instances of the G-construction, is an interesting topic for future research.
