
LDPC-Coded Differential Modulation - Decoding Algorithms

This is a continuation of the previous tutorial: laser pumping and population inversion.

 

The decoders we consider (which can include simplified versions such as binary message passing decoders) rely on knowledge of the channel.

The communication channel, or an equivalent communication channel comprising the physical channel as well as various inner receiver and signal processing stages, can be characterized by its conditional probability \(P(z|y)\), that is, the probability of observing \(z\) at the receiver assuming a transmitted modulation symbol \(y\in\boldsymbol{\mathcal{M}}\).

Note that we restrict ourselves to memoryless channels, which can be achieved in practice by sufficiently long interleaving of the channel input. Notable examples are the binary symmetric channel (BSC) with (\(y=x\))

\[\begin{align}P(z=0|x=0)&=1-\epsilon & P(z=1|x=0)&=\epsilon\\P(z=0|x=1)&=\epsilon & P(z=1|x=1)&=1-\epsilon\end{align}\]

which is frequently used to model the hard-decision channel.
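As a minimal illustration, a BSC can be simulated in a few lines of NumPy; the function and variable names below are illustrative assumptions, not part of the original text.

```python
import numpy as np

def bsc(x, eps, rng):
    """Binary symmetric channel: flip each bit independently with probability eps."""
    flips = rng.random(x.shape) < eps
    return np.bitwise_xor(x, flips.astype(x.dtype))

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=10)   # transmitted bits
z = bsc(x, eps=0.1, rng=rng)      # hard-decision channel output
print(x, z, sep="\n")
```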

Another famous example is the real-valued AWGN channel with binary phase-shift keying (BPSK) modulation with \(y=(-1)^x\) and (real-valued) noise variance \(\sigma_n^2=N_0/2\), leading to

\[p(z|y)=\frac{1}{\sqrt{\pi{N_0}}}\exp\left(\frac{-(z-y)^2}{N_0}\right)\]

As computation with probabilities tends to be numerically unstable and hinders potential hardware implementations, log-likelihood ratios (LLRs) are frequently employed. The LLR \(L(z[t])\) for received symbol \(z[t]\) at time instant \(t\) is defined for BPSK as

\[L(z[t])=\log\frac{p(z[t]|x[t]=0)}{p(z[t]|x[t]=1)}\]

where \(\log(\cdot)\) is the natural logarithm. It turns out that for the AWGN channel with BPSK modulation, we have

\[L_\text{AWGN}(z[t])=\frac{2}{\sigma_n^2}z[t]=4\frac{E_s}{N_0}z[t]=4r\frac{E_b}{N_0}z[t]=:L_c\cdot{z}[t]\]

This last equation means that the LLR \(L_\text{AWGN}(z[t])\) is obtained by multiplication of \(z[t]\) with a constant \(L_c:=\frac{2}{\sigma_n^2}\), which depends on the noise variance only.

Usually, the noise variance is assumed to be constant, and the constant \(L_c\) is predetermined and set to a value suitable for implementation. The noise variance may also be estimated at the receiver.
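For example, a minimal NumPy sketch of this LLR scaling, assuming unit symbol energy \(E_s=1\) (all names are illustrative):

```python
import numpy as np

def awgn_bpsk_llr(z, sigma2):
    """Channel LLRs for BPSK over AWGN: L = L_c * z with L_c = 2 / sigma_n^2."""
    return (2.0 / sigma2) * z

# example: transmit x = 0 -> y = +1 at Es/N0 = 3 dB
rng = np.random.default_rng(1)
esn0 = 10 ** (3 / 10)
sigma2 = 1 / (2 * esn0)                 # sigma_n^2 = N0/2 with Es = 1
z = 1.0 + np.sqrt(sigma2) * rng.standard_normal(5)
print(awgn_bpsk_llr(z, sigma2))         # positive values favour x = 0
```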

If higher-order modulation formats based on a constellation \(\boldsymbol{\mathcal{M}}\) are employed, we have to use a more involved computation rule. Starting from the mapping function \(\phi(\pmb{b})\), we first define the \(i\)th inverse mapping function

\[\phi_i^{-1}(b)=\left\{y=\phi(\tilde{b}_1,\ldots,\tilde{b}_{i-1},b,\tilde{b}_{i+1},\ldots,\tilde{b}_q)\in\boldsymbol{\mathcal{M}}:(\tilde{b}_1,\ldots,\tilde{b}_{i-1},\tilde{b}_{i+1},\ldots,\tilde{b}_q)\in\mathbb{F}_2^{q-1}\right\}\]

Thus, \(\phi_i^{-1}(b)\) returns the set of all modulation symbols whose assigned bit pattern takes on the value \(b\) in the \(i\)th position. The LLR for the ubiquitous bit-wise decoder is then given by

\[L(b_i[t])=\log\left(\frac{\sum_{y\in\phi_i^{-1}(0)}p(z[t]|y)}{\sum_{y\in\phi_i^{-1}(1)}p(z[t]|y)}\right)\]
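As an illustration, the following NumPy sketch evaluates this bit-wise LLR; the Gray-labeled QPSK constellation, labeling, and function name are illustrative assumptions:

```python
import numpy as np

def bitwise_llrs(z, points, labels, n0):
    """Bit-wise LLRs: sum p(z|y) over phi_i^{-1}(0) and phi_i^{-1}(1) per bit i.

    points: complex constellation symbols; labels: (len(points), q) bit array.
    """
    # p(z|y) up to a common factor, which cancels in the LLR ratio
    metrics = np.exp(-np.abs(z - points) ** 2 / n0)
    llrs = np.empty(labels.shape[1])
    for i in range(labels.shape[1]):
        llrs[i] = np.log(metrics[labels[:, i] == 0].sum()
                         / metrics[labels[:, i] == 1].sum())
    return llrs

# assumed Gray-mapped QPSK labeling (adjacent symbols differ in one bit)
points = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)
labels = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])
print(bitwise_llrs(0.9 + 0.8j, points, labels, n0=0.5))  # both LLRs positive
```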

Before we step ahead and describe the decoding algorithms, we first introduce the \(\text{max}^*\) operation, which simplifies the description of the BCJR decoder.

Definition 4. The \(\text{max}^*\) operation is defined as

\[\text{max}^*(\delta_1,\delta_2):=\text{max}(\delta_1,\delta_2)+\log(1+e^{-|\delta_1-\delta_2|})=\log(e^{\delta_1}+e^{\delta_2})\]

and can be conveniently approximated by

\[\text{max}^*(\delta_1,\delta_2)\approx\text{max}(\delta_1,\delta_2)\]

The \(\text{max}^*\) operation has several properties, namely

\[\begin{align}\text{max}^*(\delta_1,\delta_2)&=\text{max}^*(\delta_2,\delta_1)\\\lim_{\delta_1\rightarrow-\infty}\text{max}^*(\delta_1,\delta_2)&=\delta_2\\\text{max}^*(\delta_1,\delta_2,\delta_3)&=\text{max}^*(\delta_1,\text{max}^*(\delta_2,\delta_3))\end{align}\]

The latter property allows us to define

\[\overset{\chi}{\underset{j=1}{\text{max}}^*}\delta_j=\text{max}^*(\delta_1,\delta_2,...,\delta_\chi)=\text{max}^*(\delta_1,\text{max}^*(\delta_2,...\text{max}^*(\delta_{\chi-1},\delta_\chi)...))\]

and with the trivial case

\[\overset{1}{\underset{j=1}{\text{max}}^*}\delta_j=\delta_1\]
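A small NumPy sketch of the \(\text{max}^*\) operation and its chained form (names are illustrative); the guard clause reproduces the limit property \(\lim_{\delta_1\to-\infty}\text{max}^*(\delta_1,\delta_2)=\delta_2\) exactly, which matters for the BCJR initialization below:

```python
import numpy as np
from functools import reduce

def max_star(d1, d2):
    """Jacobian logarithm: max*(d1, d2) = log(exp(d1) + exp(d2))."""
    if d1 == -np.inf:
        return d2
    if d2 == -np.inf:
        return d1
    return max(d1, d2) + np.log1p(np.exp(-abs(d1 - d2)))

def max_star_seq(deltas):
    """Fold max* over a sequence using associativity; trivial for length 1."""
    return reduce(max_star, deltas)

print(max_star_seq([0.5, 1.2, -3.0]))             # exact value
print(np.log(np.sum(np.exp([0.5, 1.2, -3.0]))))   # reference: log-sum-exp
print(max(0.5, 1.2, -3.0))                        # the max approximation
```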

 

Differential Decoding

The soft-input soft-output differential decoding is carried out using the BCJR algorithm. We just summarize the operations of the BCJR algorithm in the LLR domain, such that it can be immediately applied. We give the equations for the case \(V=4\), which we have used in our simulations in this tutorial.

We use the trellis diagram to describe the BCJR algorithm. The algorithm consists of computing a forward and a backward recursion. In the forward recursion, the variables \(\tilde{\alpha}_t(\text{S}_i)\) are updated for \(i\in\{1, 2, 3, 4\}\). The initialization, which describes the initial differential memory, is usually carried out as

\[\begin{align}\tilde{\alpha}_0(\text{S}_1)&=0\\\tilde{\alpha}_0(\text{S}_i)&=-\infty\quad\text{for }i\in\{2,3,4\}\end{align}\]

The recursive update is given by (for all \(j\in\{1, 2, 3, 4\}\))

\[\tilde{\alpha}_t(\text{S}_j)=\overset{4}{\underset{i=1}{\text{max}}^*}\;\overset{3}{\underset{s=0}{\text{max}}^*}(\tilde{\alpha}_{t-1}(\text{S}_i)+\tilde{\gamma}_t(i,j,s))\]

Similarly, the backward recursion is carried out with the initialization \(\tilde{\beta}_{\tilde{n}}(\text{S}_j)=0\), for \(j\in\{1, 2, 3, 4\}\), where \(\tilde{n}\) denotes the length of the sequence of modulation symbols to be decoded. We have

\[\tilde{\beta}_{t-1}(\text{S}_i)=\overset{4}{\underset{j=1}{\text{max}}^*}\;\overset{3}{\underset{s=0}{\text{max}}^*}(\tilde{\beta}_t(\text{S}_j)+\tilde{\gamma}_t(i,j,s))\]

Before giving the equation to compute \(\tilde{\gamma}(i, j, s)\), we first introduce the sets \(\boldsymbol{\mathcal{M}}_{\text{S}_i}\subset\boldsymbol{\mathcal{M}}\), which contain all modulation symbols that are associated with state \(\text{S}_i\) (i.e., that lie within the decision region of state \(\text{S}_i\)). For the example of the QPSK constellation, \(\boldsymbol{\mathcal{M}}_{\text{S}_1}=\left\{\frac{1+\imath}{\sqrt{2}}\right\}\) and for the example of the 16QAM constellation, we have

\[\boldsymbol{\mathcal{M}}_{\text{S}_1}=\left\{\frac{1+\imath}{\sqrt{10}},\frac{3+\imath}{\sqrt{10}},\frac{1+3\imath}{\sqrt{10}},\frac{3+3\imath}{\sqrt{10}}\right\}\]

The variable \(\tilde{\gamma}_t(i,j,\zeta)\) describes the (logarithmic) probability of a state transition from state \(\text{S}_i\) at time \(t-1\) to state \(\text{S}_j\) at time \(t\) provided that the phase-slip occurrence descriptor \(s[t]\) takes on the value \(\zeta\). We have

\[\begin{align}\tilde{\gamma}_t(i,j,\zeta)=&\underset{\chi\in\boldsymbol{\mathcal{M}}_{\text{S}_j}}{\text{max}^*}\left(-\frac{1}{N_0}|z[t]-\chi|^2\right)+\frac{1}{2}\sum_{\kappa=1}^{\nu}\left(1-2\check{f}_{\text{diff},\kappa}^{-1}(\text{S}_i,\text{S}_j,\zeta)\right)L_{(t-1)\nu+\kappa}^{[\text{apriori},\Pi]}\\&+\log{P}(s=\zeta)\end{align}\]

Note that with the phase-slip model, we may abbreviate \(\log{P}(s=\zeta)=|\zeta|\log\xi\). The function \(\check{f}_{\text{diff},\kappa}^{-1}(\text{S}_i,\text{S}_j,s)\) returns the \(\kappa\)th bit \(b_\kappa\) of the differential encoding map that causes a state transition from \(\text{S}_i\) to \(\text{S}_{j'}\), where \(j'\) is an intermediate state leading to the final state \(j=((j'+s-1)\text{ mod }V)+1\) after taking into account the phase slip. \(L_i^{[\text{apriori},\Pi]}\) contains the input LLR values \(L_i^{[\text{apriori}]}\) that are provided by the LDPC decoder after full or partial interleaving. In the initial execution (first iteration) of the differential decoder, we may set \(L_i^{[\text{apriori},\Pi]}=0\).

Finally, we obtain for each \(t\in\{1,\ldots,\tilde{n}\}\) and \(\kappa\in\{1,\ldots,\nu\}\) the extrinsic output LLR

\[\begin{align}L_{(t-1)\nu+\kappa}^{[\text{diff},\Pi]}=&\underset{(i,j,\zeta):\check{f}_{\text{diff},\kappa}^{-1}(\text{S}_i,\text{S}_j,\zeta)=0}{\text{max}^*}\left(\tilde{\alpha}_{t-1}(\text{S}_i)+\tilde{\gamma}_t(i,j,\zeta)+\tilde{\beta}_t(\text{S}_j)\right)\\&-\underset{(i,j,\zeta):\check{f}_{\text{diff},\kappa}^{-1}(\text{S}_i,\text{S}_j,\zeta)=1}{\text{max}^*}\left(\tilde{\alpha}_{t-1}(\text{S}_i)+\tilde{\gamma}_t(i,j,\zeta)+\tilde{\beta}_t(\text{S}_j)\right)-L_{(t-1)\nu+\kappa}^{[\text{apriori},\Pi]}\end{align}\]

After (partial or full) deinterleaving of \(L_i^{[\text{diff},\Pi]}\), we obtain \(L_i^{[\text{diff}]}\), which is used as input of the LDPC decoder.
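The recursions above translate almost directly into code. The following unoptimized NumPy sketch assumes the branch metrics \(\tilde{\gamma}_t(i,j,s)\) have been precomputed into an array; the array layout and all names are illustrative assumptions, not a reference implementation:

```python
import numpy as np

def max_star(a, b):
    """Jacobian logarithm with exact handling of -inf (see above)."""
    if a == -np.inf:
        return b
    if b == -np.inf:
        return a
    return max(a, b) + np.log1p(np.exp(-abs(a - b)))

def bcjr_forward_backward(gamma):
    """Forward/backward recursions for V = 4 states.

    gamma[t, i, j, s]: branch metric from state S_{i+1} at time t to state
    S_{j+1} at time t+1 under slip descriptor s (0-based here, 1-based in
    the text). Shape: (n_sym, V, V, n_slips).
    """
    n_sym, V, _, n_slips = gamma.shape
    alpha = np.full((n_sym + 1, V), -np.inf)
    alpha[0, 0] = 0.0                    # initial differential memory: S_1
    beta = np.zeros((n_sym + 1, V))      # beta_n(S_j) = 0 for all j
    for t in range(1, n_sym + 1):        # forward recursion
        for j in range(V):
            acc = -np.inf
            for i in range(V):
                for s in range(n_slips):
                    acc = max_star(acc, alpha[t - 1, i] + gamma[t - 1, i, j, s])
            alpha[t, j] = acc
    for t in range(n_sym, 0, -1):        # backward recursion
        for i in range(V):
            acc = -np.inf
            for j in range(V):
                for s in range(n_slips):
                    acc = max_star(acc, beta[t, j] + gamma[t - 1, i, j, s])
            beta[t - 1, i] = acc
    return alpha, beta
```

The output LLRs are then obtained by combining \(\tilde{\alpha}\), \(\tilde{\gamma}\), and \(\tilde{\beta}\) over the state transitions, as in the last equation above.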

 

LDPC Decoding

A vast collection of various decoding algorithms for LDPC codes exist. Most of these decoders are message passing decoders, where the most prominent is probably the sum–product decoder. In what follows, we describe the sum–product decoder and the closely related min-sum decoder, which can be interpreted as being an approximation of the sum–product decoder.

In order to describe the sum–product and min-sum decoders, we introduce the set \(\mathcal{N}(m):=\{j:H_{m,j}\ne0\}\), which contains the positions (columns) of nonzero entries at row \(m\) of the parity-check matrix \(\pmb{H}\).

For the matrix given in Example 3.3, we have \(\mathcal{N}(1)=\{1;4;11;12;15;22;24;25;28;29;30;31\}\). Similarly, the set \(\mathcal{M}(n):=\{i:H_{i,n}\ne0\}\) contains the positions (rows) of nonzero entries at column \(n\) of the parity-check matrix \(\pmb{H}\). Again, for the exemplary matrix of Example 3.3, we have \(\mathcal{M}(1)=\{1;2\}\), \(\mathcal{M}(2)=\{3;5\}\), and so on.
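These sets are straightforward to derive from a parity-check matrix, as the following sketch shows for a toy matrix (note the 0-based NumPy indexing, whereas the text counts from 1; names are illustrative):

```python
import numpy as np

def neighbor_sets(H):
    """N[m]: column indices of nonzeros in row m; M[n]: row indices in column n."""
    N = [np.flatnonzero(H[m, :]) for m in range(H.shape[0])]
    M = [np.flatnonzero(H[:, n]) for n in range(H.shape[1])]
    return N, M

# toy parity-check matrix (not the one from Example 3.3)
H = np.array([[1, 1, 0, 1],
              [0, 1, 1, 0],
              [1, 0, 1, 1]])
N, M = neighbor_sets(H)
print(N[0], M[1])   # -> [0 1 3] [0 1]
```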

Within the sum–product decoder, messages are exchanged between the variable nodes and the check nodes, thus the name message passing decoder. We denote the message that is passed from variable node \(i\) toward check node \(j\) by \(L_{i,j}^{[v\rightarrow{c}]}\).

Similarly, the message that is passed from check node \(j\) toward variable node \(i\) is denoted by \(L_{i,j}^{[v\leftarrow{c}]}\). Before first executing the LDPC decoder with a new frame of data, all messages are set to zero, that is, \(L_{i,j}^{[v\rightarrow{c}]}=L_{i,j}^{[v\leftarrow{c}]}=0\) for all combinations of \((j,i)\in\{1,\ldots,m\}\times\{1,\ldots,n\}\) such that \(H_{j,i}\ne0\).

The sum–product LDPC decoder computes for each of the \(n\) variables, that is, for each transmitted bit, the total sum

\[L_i^{[\text{tot}]}=L_i^{[\text{diff}]}+\sum_{j\in\mathcal{M}(i)}L_{i,j}^{[v\leftarrow{c}]},\quad\forall{i}\in\{1,...,n\}\]

Using this total sum, the variable-to-check messages may be computed as

\[L_{i,j}^{[v\rightarrow{c}]}=L_i^{[\text{tot}]}-L_{i,j}^{[v\leftarrow{c}]},\quad\forall{i}\in\{1,...,n\},\forall{j}\in\mathcal{M}(i)\]
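These two steps translate into a compact sketch; messages are stored as dense \((m,n)\) arrays for clarity (only entries with \(H_{j,i}=1\) are meaningful), and all names are illustrative:

```python
import numpy as np

def variable_node_update(L_diff, c2v, H):
    """Variable-node step: total sum per bit, then leave-one-out subtraction."""
    L_tot = L_diff + (c2v * H).sum(axis=0)            # total sum per bit
    v2c = np.where(H == 1, L_tot[None, :] - c2v, 0.0) # subtract own message
    return L_tot, v2c
```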

In the second step, the check node update rule is carried out to compute new check-to-variable messages

\[L_{i,j}^{[v\leftarrow{c}]}=2\tanh^{-1}\left(\prod_{i'\in\mathcal{N}(j)\setminus\{i\}}\tanh\left(\frac{L_{i',j}^{[v\rightarrow{c}]}}{2}\right)\right),\quad\forall{j}\in\{1,...,m\},\forall{i}\in\mathcal{N}(j)\]

where the inner product is taken over all entries in \(\mathcal{N}(j)\) except the one under consideration, \(i\). This is indicated by the notation \(\mathcal{N}(j)\setminus\{i\}\).
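A direct, unoptimized sketch of this update rule (the clipping guards the \(\tanh^{-1}\) against infinities; names are illustrative):

```python
import numpy as np

def check_node_update(v2c, H, clip=0.999999):
    """Sum-product check-node rule: leave-one-out tanh product per edge."""
    m, n = H.shape
    c2v = np.zeros_like(v2c, dtype=float)
    for j in range(m):
        idx = np.flatnonzero(H[j])
        t = np.tanh(v2c[j, idx] / 2.0)
        for k, i in enumerate(idx):
            prod = np.prod(np.delete(t, k))   # product over N(j) \ {i}
            c2v[j, i] = 2.0 * np.arctanh(np.clip(prod, -clip, clip))
    return c2v
```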

Usually, in practical implementations, simplified approximations to this update rule are implemented, for example, the scaled min-sum rule

\[L_{i,j}^{[v\leftarrow{c}]}=\nu\left(\prod_{i'\in\mathcal{N}(j)\setminus\{i\}}\text{sign}\left(L_{i',j}^{[v\rightarrow{c}]}\right)\right)\underset{i'\in\mathcal{N}(j)\setminus\{i\}}{\text{min}}|L_{i',j}^{[v\rightarrow{c}]}|\]

where \(\nu\) is an appropriately chosen scaling factor (a common choice is \(\nu\approx0.75\); this \(\nu\) is not to be confused with the number of bits per symbol used earlier).
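The corresponding sketch of the scaled min-sum rule, with the default scaling factor as an assumption:

```python
import numpy as np

def min_sum_update(v2c, H, nu=0.75):
    """Scaled min-sum: sign product times minimum magnitude over N(j) \ {i}."""
    m, n = H.shape
    c2v = np.zeros_like(v2c, dtype=float)
    for j in range(m):
        idx = np.flatnonzero(H[j])
        vals = v2c[j, idx]
        for k, i in enumerate(idx):
            others = np.delete(vals, k)
            c2v[j, i] = nu * np.prod(np.sign(others)) * np.min(np.abs(others))
    return c2v
```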

With the updated check-to-variable node messages, a new a priori message that is transmitted to the differential decoder may be computed as

\[L_i^{[\text{apriori}]}=\sum_{j\in\mathcal{M}(i)}L_{i,j}^{[v\leftarrow{c}]},\quad\forall{i}\in\{1,...,n\}\]
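Wiring the sketches above together, one full sum–product iteration and the message returned to the differential decoder might look as follows (reusing the hypothetical variable_node_update and check_node_update from above):

```python
import numpy as np

def ldpc_iteration(L_diff, c2v, H):
    """One sum-product iteration: variable update, check update, a priori out."""
    L_tot, v2c = variable_node_update(L_diff, c2v, H)
    c2v = check_node_update(v2c, H)
    L_apriori = (c2v * H).sum(axis=0)               # message to diff. decoder
    x_hat = ((L_diff + L_apriori) < 0).astype(int)  # L > 0 favours bit 0
    return L_apriori, c2v, x_hat
```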

Note that the convergence of the sum–product decoder as described here can be considerably improved by using the so-called row-layered decoder, which allows the number of decoding iterations to be roughly halved.

Many additional decoder variants, which may be better suited for an ASIC implementation than the message passing decoder described here, are discussed elsewhere in the literature.

 

The next tutorial introduces dispersive prisms and gratings.

