Iteratively decodable codes

This page presents pointers to C/C++ programs for simulating iterative decoding algorithms. These include programs to compute constellation-constrained capacity. The design of interleavers is an important issue, particularly with short frame lengths. Computer programs to construct several types of interleavers are given. These and other issues are discussed in Chapter 8 of the book!


In the design and implementation of a turbo code in software, there are two main issues:
1. Construction of an interleaver
2. Simulation of iterative decoding  

1. Interleavers

RANDOM interleaver
HELICAL interleaver
PRIMITIVE interleaver
CYCLIC interleaver
DIAGONAL interleaver
BLOCK interleaver

The programs output the interleaver array as one integer per row (i.e., entries separated by newline characters). Other types of interleavers exist, but the above classes should always yield competitive alternatives. The programs are well documented.
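As an illustration of how such an interleaver can be built and written in this one-integer-per-row format, here is a minimal sketch of a RANDOM interleaver generator (the function names are illustrative, not those of the programs above):

```c
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>

/* Fill perm[0..n-1] with a uniformly random permutation of 0..n-1
   (Fisher-Yates shuffle). Seed with srand() before calling. */
void random_interleaver(int *perm, int n)
{
    for (int i = 0; i < n; i++)
        perm[i] = i;
    for (int i = n - 1; i > 0; i--) {
        int j = rand() % (i + 1);       /* index in [0, i] */
        int tmp = perm[i];
        perm[i] = perm[j];
        perm[j] = tmp;
    }
}

/* Write the interleaver array in the same format as the programs above:
   one integer per row. */
void write_interleaver(FILE *fp, const int *perm, int n)
{
    for (int i = 0; i < n; i++)
        fprintf(fp, "%d\n", perm[i]);
}
```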

2. Iterative decoding  

MAP decoding of a parallel concatenated (turbo) convolutional code: Rate 1/3
turbo.cpp random.cpp random.h

Simulation of a binary rate-1/3 turbo code with two identical rate-1/2 component recursive convolutional encoders. The memory of the encoders can be selected between 2, 3 and 4, corresponding to 4-, 8- and 16-state trellises, respectively. The files turbo.cpp and random.cpp must be compiled together, with the "g++" command in a Unix-like environment (or its equivalent in other operating systems) as

g++ -O2 turbo.cpp random.cpp -lm  

NOTE: This version of the program does not use tail bits to terminate the trellis. As a result, its performance is worse than that of turbo codes with tail bits, especially at short interleaver lengths. The original program was written by Mathys Walma in 1998. Please refer to the original README file. Many changes were made to the program to make it more consistent with the style of the programs on this site.

MAP decoding of a parallel concatenated (turbo) convolutional code: Puncturing and rate 1/2
turbo_punc.cpp random.cpp random.h

These programs simulate a rate-1/2 punctured turbo code. A puncturing rule is applied in the branch metric (gamma) computation stage, in much the same way as in the convolutional code case. In this version, the puncturing rule is hard-coded in the program, but it should be easy to specify it in a file, just as in the case of binary punctured convolutional codes.
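The way a puncturing rule enters the gamma computation can be sketched as follows. This is an illustrative fragment, not the rule hard-coded in turbo_punc.cpp; the pattern shown, which alternates the parity bits of the two component encoders, is one common way to reach rate 1/2 from rate 1/3:

```c
#include <assert.h>

/* Illustrative puncturing pattern for turning the rate-1/3 turbo code into
   a rate-1/2 code: systematic bits are always kept, encoder 1's parity bit
   is kept at even times and encoder 2's at odd times. (The rule hard-coded
   in turbo_punc.cpp may differ; this is only one common choice.) */
static int parity1_kept(int t) { return (t % 2) == 0; }

/* Contribution of encoder 1's parity symbol at time t to the branch metric
   gamma. lp is the received channel LLR for that symbol, defined here as
   log(P(bit=0)/P(bit=1)), and p is the parity bit (0 or 1) labeling the
   trellis branch. A punctured symbol contributes nothing, exactly as if
   its channel LLR were zero. Encoder 2 is handled symmetrically. */
double gamma_parity1(int t, double lp, int p)
{
    if (!parity1_kept(t))
        return 0.0;               /* punctured: no metric contribution */
    return (p ? -0.5 : 0.5) * lp; /* +lp/2 for bit 0, -lp/2 for bit 1  */
}
```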

All other comments made for the rate-1/3 turbo code above are pertinent to the punctured rate-1/2 turbo code.


Below are iterative soft-decision (belief propagation) decoding and hard-decision decoding algorithms for the important family of low-density parity-check (LDPC) codes. Since these algorithms can be applied to any binary linear code (ideally one with a low-density parity-check matrix), a simplification is made: the all-zero codeword is always transmitted. By linearity and the symmetry of the channel, this does not affect the measured error performance, and it simplifies programming enormously.

The iterative decoding algorithms need the parity-check matrix as an input. The format of the files specifying the structure of these matrices (or their Tanner graphs) is the same as that used by David MacKay, namely the "alist" (adjacency list) format. Please refer to his web site for more information, some C source code, and some matrices for your simulations. Below are two examples of parity-check matrix files, for Gallager's (20,5) code and a finite (projective) geometry cyclic (273,191,17) code, respectively:

NOTE: The three numbers in the suffix of a file name above are N.J.K, where N is the code length, J is the maximum degree of the bit nodes, and K is the maximum degree of the check nodes.
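For readers writing their own decoders, a minimal alist reader might look like this. It is a sketch under the assumptions of the format described above (header, maximum degrees, column and row degree lists, then the adjacency lists, with 1-based entries padded by zeros); the struct and function names are illustrative, and only the column adjacency is kept:

```c
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>

/* Minimal reader for the header and column adjacency of an "alist" file
   (David MacKay's format). Memory handling is kept deliberately simple.
   Returns 0 on success, -1 on a read error. Entries are 1-based in the
   file; padding zeros (used when a column has fewer than the maximum
   number of entries) are skipped. */
typedef struct {
    int n, m;             /* code length, number of checks            */
    int max_col, max_row; /* maximum bit-node / check-node degrees    */
    int *col_deg;         /* degree of each of the n columns          */
    int **col_rows;       /* col_rows[j][k]: checks on bit j, 0-based */
} alist_t;

int alist_read(FILE *fp, alist_t *a)
{
    if (fscanf(fp, "%d %d", &a->n, &a->m) != 2) return -1;
    if (fscanf(fp, "%d %d", &a->max_col, &a->max_row) != 2) return -1;
    a->col_deg  = malloc(a->n * sizeof(int));
    a->col_rows = malloc(a->n * sizeof(int *));
    for (int j = 0; j < a->n; j++)
        if (fscanf(fp, "%d", &a->col_deg[j]) != 1) return -1;
    for (int i = 0, d; i < a->m; i++)       /* row degrees: read and skip */
        if (fscanf(fp, "%d", &d) != 1) return -1;
    for (int j = 0; j < a->n; j++) {        /* column adjacency lists     */
        a->col_rows[j] = malloc(a->col_deg[j] * sizeof(int));
        int k = 0;
        for (int t = 0, r; t < a->max_col; t++) {
            if (fscanf(fp, "%d", &r) != 1) return -1;
            if (r != 0)
                a->col_rows[j][k++] = r - 1;  /* convert to 0-based */
        }
    }
    return 0;
}
```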
Belief-propagation decoding algorithm: Probabilistic decoding

The algorithm works directly with probabilities. In terms of numerical precision, it is the most stable BP decoder, although it makes very intensive use of exp() computations.
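The heart of the probability-domain decoder is Gallager's check-node computation, which can be sketched as follows (illustrative function name):

```c
#include <assert.h>

/* Check-node computation in the probability domain (Gallager's formula):
   given the probabilities p[k] that each of the other d bits equals 1,
   the probability that their modulo-2 sum equals 1 is
       (1 - prod_k (1 - 2*p[k])) / 2 .
   This is the message a check node sends back to a bit node. */
double check_node_prob(const double *p, int d)
{
    double prod = 1.0;
    for (int k = 0; k < d; k++)
        prod *= 1.0 - 2.0 * p[k];
    return 0.5 * (1.0 - prod);
}
```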
Belief-propagation decoding algorithm: Logarithm domain

This version of the BP algorithm is obtained from the probabilistic one by a straightforward log-domain translation. No approximation (look-up table) is used. This results in log(exp(a) ± exp(b)) computations that are even more intensive than those in pearl.c.
Belief-propagation decoding algorithm: Log-likelihood domain

This version uses log-likelihood ratios. The result is a large improvement in speed over log_pearl.c, with practically the same numerical precision as pearl.c. It can be improved further by using a look-up table to avoid exp() computations, as mentioned in the book.
Bit-flipping hard-decision decoding algorithm

Gallager's iterative bit-flipping decoding of linear block codes. The user can specify a threshold on the number of unsatisfied parity checks required to flip (i.e., complement the value of) a bit.
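One iteration of such a threshold-based bit-flipping decoder might be sketched as follows (illustrative names; H is stored here as a dense 0/1 array for clarity, whereas a real implementation would use the sparse alist structure):

```c
#include <stdlib.h>
#include <assert.h>

/* One iteration of threshold bit-flipping decoding. H is an m x n binary
   parity-check matrix stored row-major as 0/1 bytes. y is the current
   hard-decision word and is modified in place: every bit that participates
   in at least `thresh` unsatisfied checks is flipped. Returns the number
   of bits flipped; iterate until this is 0 or a maximum count is reached. */
int bit_flip_iteration(const unsigned char *H, int m, int n,
                       unsigned char *y, int thresh)
{
    unsigned char *syn = malloc((size_t)m);
    int flips = 0;
    for (int i = 0; i < m; i++) {           /* syndrome bit of each check */
        int s = 0;
        for (int k = 0; k < n; k++)
            s ^= H[i * n + k] & y[k];
        syn[i] = (unsigned char)s;
    }
    for (int j = 0; j < n; j++) {           /* unsatisfied checks per bit */
        int unsat = 0;
        for (int i = 0; i < m; i++)
            if (H[i * n + j] && syn[i])
                unsat++;
        if (unsat >= thresh) {
            y[j] ^= 1;
            flips++;
        }
    }
    free(syn);
    return flips;
}
```

For example, with the (7,4) Hamming code and a single error in the bit checked by all three parity equations, a threshold of 3 flips exactly that bit.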

The Tanner graph of a binary cyclic code: tannercyclic.m

The Tanner graph of a binary LDPC code: tanner_LDPC.m with alist2sparse.m (Igor Kozintsev, 1999)



Both turbo codes and LDPC codes are so-called capacity-approaching codes. Therefore, there is interest in knowing the theoretical limits of transmission when a particular modulation scheme is used. This limit is the constellation-constrained capacity. Below are some programs useful in computing this capacity and comparing it with the ultimate Shannon capacity of an AWGN channel.

Compute the constellation-constrained capacity of an AWGN channel:  capacity.c
Compute the Shannon capacity (versus SNR per symbol, or Es/No): shannon.c
Compute the Shannon capacity (versus SNR per bit, or Eb/No): shannon2.c


This page was last updated on August 6, 2008, by Robert H. Morelos-Zaragoza.