
1 Low-Density Parity Check codes

1.1 Basic introduction to LDPC codes

Before we begin a discussion of low density parity check (LDPC) codes we review
some basics of linear block codes. We shall see that the other codes considered
so far in this book (parallel concatenation, serial concatenation, product turbo
codes) can all be framed in a common language: that of low density parity check
codes. For simplicity we will assume that all codes are binary. We shall also see
that LDPCs are conceptually very simple: they are defined in terms of a sparse
parity check matrix. The decoder uses the message passing algorithm, which we
discuss at length shortly. We provide a complete description and proof of the
decoding algorithm. For those not so interested in proofs, we give pseudo-code
implementations of the decoders that require very little knowledge of how and
why the message passing algorithm works. The message passing algorithm is
highly parallel and is ideally suited for high data rate applications. Careful
programming of the decoder results in a surprisingly simple decoder.
Before proceeding we review some basics of linear block codes. An $(n, k)$
block code $C$ is a mapping between a $k$-bit message (row) vector $m$ and an
$n$-bit codeword vector $c$. The code $C$ is linear if it is a $k$-dimensional
subspace of an $n$-dimensional binary vector space $V_n$. The code can also be
viewed as a mapping from $k$-space to $n$-space by a $k \times n$ generator
matrix $G$, where $c = mG$. The rows of $G$ constitute a basis of the code
subspace. The dual space $C^\perp$ consists of all those vectors in $V_n$
orthogonal to $C$; that is, for all $c \in C$ and all $d \in C^\perp$,
$\langle c, d \rangle = 0$. The rows of an $(n-k) \times n$ parity check matrix
$H$ constitute a basis for $C^\perp$. It follows that for all $c \in C$,
$cH^T = 0$. A code is completely specified by either $G$ or $H$, but neither
is unique.
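As a minimal illustration (not one of the examples used later in the text), consider the $(3, 2)$ single parity check code with
$$ G = \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \end{bmatrix}, \qquad H = \begin{bmatrix} 1 & 1 & 1 \end{bmatrix}. $$
The message $m = (1, 1)$ encodes to $c = mG = (1, 1, 0)$, and indeed $cH^T = 1 \oplus 1 \oplus 0 = 0$.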
Low density parity check codes were invented by Gallager [?, ?]. Their
description is among the simplest of any code. A low density parity check code
is one whose parity check matrix is binary and sparse: most of the entries are
zero and only a small fraction are 1's. In its simplest form the parity check
matrix is constructed at random, subject to some rather weak constraints on
$H$. A $t$-regular LDPC code is one where the column weight (number of ones)
of each column is exactly $t$, resulting in an average row weight of
$nt/(n-k)$. One might also fix the row weight to be exactly $s = nt/(n-k)$.
An $(s, t)$-regular LDPC code is one where both row and column weights are
fixed.
The following parity check matrix $H$ is an LDPC matrix with $t = 2$:
$$ H = \begin{bmatrix} 1 & 0 & 0 & 1 & 1 & 1 & 1 \\ 0 & 1 & 0 & 1 & 0 & 1 & 0 \\ 0 & 1 & 1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 & 1 & 0 & 1 \end{bmatrix} \tag{1} $$
and for any valid codeword $c$ (written here as a column vector)
$$ Hc = \begin{bmatrix} 1 & 0 & 0 & 1 & 1 & 1 & 1 \\ 0 & 1 & 0 & 1 & 0 & 1 & 0 \\ 0 & 1 & 1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 & 1 & 0 & 1 \end{bmatrix} \begin{bmatrix} c_0 \\ c_1 \\ \vdots \\ c_5 \\ c_6 \end{bmatrix} = 0 \tag{2} $$
This expression (2) serves as the starting point for constructing the decoder.
The matrix/vector multiplication in (2) defines a set of parity checks, which
for this specific example are
$$ \begin{aligned} p_0 &= c_0 \oplus c_3 \oplus c_4 \oplus c_5 \oplus c_6 \\ p_1 &= c_1 \oplus c_3 \oplus c_5 \\ p_2 &= c_1 \oplus c_2 \\ p_3 &= c_0 \oplus c_2 \oplus c_4 \oplus c_6 \end{aligned} \tag{3} $$
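To make the example concrete, the following sketch (using numpy; the matrix is (1) and the word is one that satisfies the checks in (3)) verifies the column weights and evaluates the four checks:

```python
import numpy as np

# Parity check matrix from (1); every column has weight t = 2
H = np.array([[1, 0, 0, 1, 1, 1, 1],
              [0, 1, 0, 1, 0, 1, 0],
              [0, 1, 1, 0, 0, 0, 0],
              [1, 0, 1, 0, 1, 0, 1]], dtype=np.uint8)

print(H.sum(axis=0))          # column weights: [2 2 2 2 2 2 2]

c = np.array([1, 1, 1, 1, 0, 0, 0], dtype=np.uint8)  # satisfies all of (3)
print((H @ c) % 2)            # the checks p0..p3: all zero for a codeword
```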
A complete discussion of the message passing decoder is given in the next
section. Next, we address encoding. By defining an LDPC code in terms of $H$
alone it is not obvious what constitutes the set $C$ of valid codewords.
Furthermore, we need the generator matrix $G$ for the encoder. A
straightforward way of obtaining it is to first reduce $H$ to systematic form
$H_{sys} = [I_{n-k} \,|\, P]$. In principle this is simple using Gaussian
elimination and some column reordering. As long as $H$ is full (row) rank,
$H_{sys}$ will have $n-k$ rows. There is some probability, however, that some
of the rows of $H$ are linearly dependent. In that case $H$ is not full rank
and the resulting $H_{sys}$ will still be in systematic form, albeit with fewer
rows. Once $H$ is in systematic form, it is easy to confirm that a valid
(systematic) generator matrix is $G_{sys} = [P^T \,|\, I_k]$, since
$G_{sys} H_{sys}^T = 0$. It is interesting to note that $H_{sys}$ no longer has
fixed column or row weight, and with high probability $P$ is dense. The
denseness of $P$ can make the encoder quite complex.
As an example, the parity check matrix in (2) can be reduced to systematic
form. Note that in this example the fourth row of $H$ is the sum of the first
three, so $H$ has rank 3 and one dependent row is dropped during the reduction
(no column reordering happens to be needed), leaving
$$ H_{sys} = \begin{bmatrix} 1 & 0 & 0 & 1 & 1 & 1 & 1 \\ 0 & 1 & 0 & 1 & 0 & 1 & 0 \\ 0 & 0 & 1 & 1 & 0 & 1 & 0 \end{bmatrix} = [I_{n-k} \,|\, P]. \tag{4} $$
This immediately determines the systematic generator matrix
$G_{sys} = [P^T \,|\, I_k]$.
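The reduction itself is a few lines of code. The sketch below (`gf2_systematic` is a hypothetical helper, and it assumes the pivots happen to land in the leading columns, which holds for (1) without any column reordering) performs GF(2) Gaussian elimination, drops dependent rows, and reads off $G_{sys} = [P^T \,|\, I_k]$:

```python
import numpy as np

def gf2_systematic(H):
    # Reduce H to [I | P] over GF(2); assumes pivots fall in the leading
    # columns (true for the example in (1)); dependent rows are dropped.
    H = H.copy() % 2
    rank = 0
    for col in range(H.shape[1]):
        pivot = next((i for i in range(rank, H.shape[0]) if H[i, col]), None)
        if pivot is None:
            continue
        H[[rank, pivot]] = H[[pivot, rank]]   # move the pivot row up
        for i in range(H.shape[0]):           # clear the rest of the column
            if i != rank and H[i, col]:
                H[i] ^= H[rank]
        rank += 1
    return H[:rank]

H = np.array([[1,0,0,1,1,1,1], [0,1,0,1,0,1,0],
              [0,1,1,0,0,0,0], [1,0,1,0,1,0,1]], dtype=np.uint8)
Hsys = gf2_systematic(H)                      # 3 rows: one row of (1) is dependent
P = Hsys[:, Hsys.shape[0]:]
G = np.hstack([P.T, np.eye(H.shape[1] - Hsys.shape[0], dtype=np.uint8)])
print((G @ Hsys.T) % 2)                       # all zeros: G generates the code
```

For the example matrix the routine returns three rows, confirming the dependent row noted above.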
1.2 Relation of other codes to LDPCs

In this section we show that conventional parallel and serial concatenated
turbo codes are themselves low density parity check codes, which makes the
class of LDPC codes quite powerful.
Consider first a convolutional code and a parallel concatenated (Berrou) turbo
code, shown in Figure YY (ADD TEXT).
A binary low-density parity check (LDPC) code is a binary linear code defined
by a sparse parity check matrix $H$. Suppose that the parity check matrix $H$
has $N$ columns and $M$ rows. Then the codeword consists of $N$ bits which
satisfy $M$ checks, where the location of a 1 in the parity check matrix
indicates which bits are involved in which parity checks. The total length of
the codeword is $N$ bits, the number of message bits is $K = N - M$, and the
rate of the code is $K/N$, assuming that the matrix is full rank. As shown in
Figure ??, it is possible to associate a bipartite graph with this parity check
matrix, consisting of $N$ bit nodes (indicated by the white circles) and $M$
check nodes (indicated by the black squares), with an edge between a bit node
and a check node if there is a 1 in the corresponding entry of the parity check
matrix. It is known how to design irregular parity check matrices (ref. [?])
that achieve performance extremely close to the Shannon limit of the channel.
Here we consider regular parity check matrices, where each column has a fixed
number of 1's and each row has a fixed number of 1's, since this serves as a
common benchmark of performance.
For now we shall consider a simple AWGN channel model without ISI. Later we
shall show how the ISI channel of Figure 5 connects to the LDPC code. Let
$r = x + n$, where $x = (x_0, \ldots, x_{N-1})$ is the codeword out of the LDPC
encoder, $r$ is the received version of $x$, and $n$ is an IID vector of
Gaussian random variables in which each component has variance $\sigma^2$. We
assume that $x_j = 0$ (resp. $x_j = 1$) is transmitted as a $-1$ (resp. $+1$).
The decoding problem is to estimate the $x$'s from the $r$'s by computing
$\mathrm{LLR}^{posterior}(x_j) = \log \frac{p(x_j = 0 \mid r)}{p(x_j = 1 \mid r)}$
for all $j$. If $\mathrm{LLR}^{posterior} > 0$, $\hat{x}_j = 0$; otherwise
$\hat{x}_j = 1$. The $N$ code bits must satisfy all parity checks, and we will
use this fact to compute the a posteriori probability (APP)
$p(x_j = b \mid S_j, r)$, where $S_j$ is the event that all parity checks
associated with $x_j$ have been satisfied.
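The channel model and the hard-decision rule translate directly into code. A small sketch, assuming the $0 \to -1$, $1 \to +1$ mapping above (the noise level and the codeword, taken from (1), are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.8
x = np.array([1, 1, 1, 1, 0, 0, 0])      # a codeword of the example code (1)
s = np.where(x == 0, -1.0, +1.0)         # BPSK mapping: 0 -> -1, 1 -> +1
r = s + sigma * rng.standard_normal(len(s))

# Channel LLR log p(r_j|x_j=0)/p(r_j|x_j=1) = -2 r_j / sigma^2 for this mapping
llr_channel = -2.0 * r / sigma**2
x_hat = (llr_channel <= 0).astype(int)   # LLR > 0 decides 0, otherwise 1
print(x_hat)
```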
1.3 Algorithms for the LDPC decoder

The message passing algorithm is an APP algorithm only if the code graph has
no cycles. LDPC codes have cycles, so strictly speaking the message passing
algorithm does not compute the APPs. However, the algorithm performs
remarkably well. The cycle-free requirement implies that all code bits
$x_0, \ldots, x_{N-1}$ are independent, which clearly they are not. At the end
of this section algorithms are given for implementing the message passing
algorithm in both the probability and log domains. We now derive the message
passing algorithm for LDPCs from first principles. Prior to decoding, the
decoder has the following: a parity check matrix $H$, its bipartite graph, and
$N$ channel outputs $r$. Let $M(j)$ be the set of parity nodes connected to the
code bit $x_j$, and let $N(m)$ be the set of bit nodes connected to the $m$'th
parity check.
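The index sets are just the supports of the columns and rows of $H$; a minimal sketch:

```python
import numpy as np

H = np.array([[1,0,0,1,1,1,1], [0,1,0,1,0,1,0],
              [0,1,1,0,0,0,0], [1,0,1,0,1,0,1]], dtype=np.uint8)

M = {j: np.flatnonzero(H[:, j]) for j in range(H.shape[1])}  # M(j): checks on bit j
N = {m: np.flatnonzero(H[m, :]) for m in range(H.shape[0])}  # N(m): bits in check m
print(M[3], N[0])   # bit 3 sits in checks 0 and 1; check 0 covers bits 0,3,4,5,6
```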
1.4 Message passing algorithm

The derivation given here most closely follows the description given by
Gallager. Using the assumption of code bit independence and Bayes' rule, the
APP $p(x_j = b \mid S_j, r)$ can be rewritten as
$$ p(x_j = b \mid S_j, r) = K\, p(r_j \mid x_j = b)\, p(S_j \mid x_j = b, r) \tag{5} $$
where $K$ is a constant for both $b = 0, 1$ and can be ignored. The first term
is easy to compute; for Gaussian noise
$$ p(r_j \mid x_j = b) = \frac{1}{(2\pi\sigma^2)^{1/2}} \exp\left( -\frac{\left( r_j + (-1)^b \right)^2}{2\sigma^2} \right) \tag{6} $$
and for a BSC (with crossover probability $p$)
$$ p(r_j \mid x_j = 0) = p^{r_j} (1-p)^{1-r_j} \tag{7} $$
$$ p(r_j \mid x_j = 1) = p^{1-r_j} (1-p)^{r_j} \tag{8} $$
The second term in (5) is the probability that all parity checks connected to
$x_j$ are satisfied given $r$ and $x_j = b$. Note that
$S_j = \{S_{0j}, \ldots, S_{kj}\}$ is a collection of events, where $S_{mj}$ is
the event that the $m$'th parity node connected to $x_j$ is satisfied. Again
invoking the independence of the code bits $(x_0, \ldots, x_{N-1})$, this can
be written as
$$ p(S_j \mid x_j = b, r) = p(S_{0j}, S_{1j}, \ldots, S_{kj} \mid x_j = b, r) = \prod_{m \in M(j)} p(S_{mj} \mid x_j = b, r) \tag{9} $$
where $p(S_{mj} \mid x_j = b, r)$ is the probability that the $m$'th parity
check connected to the bit $x_j$ is satisfied given $x_j = b$ and $r$. If
$b = 0$ this is the probability that the code bits other than $x_j$ connected
to the $m$'th parity check have an even number of 1's. If $b = 1$, the other
code bits must have odd parity. Using this fact we next show that
$p(S_{mj} \mid x_j = b, r)$ has a relatively simple form.
As a preliminary calculation, suppose two bits satisfy a parity check
constraint $x_1 \oplus x_2 = 0$, and it is known that $p_1 = P(x_1 = 1)$ and
$p_2 = P(x_2 = 1)$. Let $q_1 = 1 - p_1$ and $q_2 = 1 - p_2$. Then the
probability that the check is satisfied is
$$ P(x_1 \oplus x_2 = 0) = (1 - p_1)(1 - p_2) + p_1 p_2 = 2 p_1 p_2 - p_1 - p_2 + 1 $$
which can be rewritten as
$$ 2 P(x_1 \oplus x_2 = 0) - 1 = (1 - 2p_1)(1 - 2p_2) = (q_1 - p_1)(q_2 - p_2). \tag{10} $$
Now suppose that $L + 1$ bits satisfy an even parity-check constraint
$x_0 \oplus x_1 \oplus x_2 \oplus \cdots \oplus x_L = 0$, as pictured in the
factor graph in Figure ??. Then for known probabilities
$\{p_1, p_2, \ldots, p_L\}$ corresponding to the bits
$\{x_1, x_2, \ldots, x_L\}$, it is possible to generalize (10) to find the
probability distribution of the binary sum
$z_L = x_1 \oplus x_2 \oplus \cdots \oplus x_L$. Writing
$z_L = z_{L-1} \oplus x_L$,
$$ 2 P(z_L = 0) - 1 = \left( 1 - 2 P(z_{L-1} = 1) \right) (1 - 2 p_L) = \left( 2 P(z_{L-1} = 0) - 1 \right) (1 - 2 p_L) $$
where $p_L = P(x_L = 1)$. Applying this recursively yields
$$ 2 P(z_L = 0) - 1 = \prod_{i=1}^{L} (1 - 2 p_i) \tag{11} $$
or
$$ P(z_L = 0) = \frac{1}{2}\left( 1 + \prod_{i=1}^{L} (1 - 2 p_i) \right) = \frac{1}{2}\left( 1 + \prod_{i=1}^{L} (q_i - p_i) \right). \tag{12} $$
Similarly it is possible to show
$$ P(z_L = 1) = \frac{1}{2}\left( 1 - \prod_{i=1}^{L} (q_i - p_i) \right). \tag{13} $$
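Equation (12) is easy to sanity-check numerically: enumerate all $2^L$ patterns of independent bits and compare the exact even-parity probability with the product form (a verification sketch with arbitrary $p_i$ values):

```python
import itertools
import numpy as np

p = np.array([0.1, 0.35, 0.6, 0.22])           # arbitrary P(x_i = 1)
L = len(p)

# Brute force: total probability of all even-parity bit patterns
brute = sum(np.prod([pi if b else 1 - pi for pi, b in zip(p, bits)])
            for bits in itertools.product([0, 1], repeat=L)
            if sum(bits) % 2 == 0)

product_form = 0.5 * (1 + np.prod(1 - 2 * p))  # eq. (12)
print(brute, product_form)                      # agree up to roundoff
```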
Returning to our calculation of $p(S_{mj} \mid x_j = b, r)$: if $x_j = 0$ we
use the form of $P(z_L = 0)$, and if $x_j = 1$ we use $P(z_L = 1)$:
$$ p(S_{mj} \mid x_j = 0, r) = \frac{1}{2}\left( 1 + \prod_{n' \in N(m)\setminus j} \left( q_{mn'}^0 - q_{mn'}^1 \right) \right) \tag{14} $$
$$ p(S_{mj} \mid x_j = 1, r) = \frac{1}{2}\left( 1 - \prod_{n' \in N(m)\setminus j} \left( q_{mn'}^0 - q_{mn'}^1 \right) \right) \tag{15} $$
where $q_{mn'}^0$ is the probability that code bit $x_{n'}$ is zero, given $r$
and excluding any information about $x_{n'}$ from parity check $m$. This
exclusion is needed because we desire extrinsic knowledge about $x_{n'}$ from
its parity checks to get extrinsic knowledge about $x_j$. Note that the
products in (14) and (15) are over all code bits connected to the $m$'th parity
node except for $x_j$, since we are interested in the even or odd parity of
the bits other than $x_j$.
Combining these results with (5), (9), (14), and (15) we get the final
expressions for the APPs:
$$ p(x_j = 0 \mid S_j, r) = K\, p(r_j \mid x_j = 0)\, p(S_j \mid x_j = 0, r) = K\, p(r_j \mid x_j = 0) \prod_{m \in M(j)} \frac{1}{2}\left( 1 + \prod_{n' \in N(m)\setminus j} \left( q_{mn'}^0 - q_{mn'}^1 \right) \right) \tag{16} $$
$$ p(x_j = 1 \mid S_j, r) = K\, p(r_j \mid x_j = 1)\, p(S_j \mid x_j = 1, r) = K\, p(r_j \mid x_j = 1) \prod_{m \in M(j)} \frac{1}{2}\left( 1 - \prod_{n' \in N(m)\setminus j} \left( q_{mn'}^0 - q_{mn'}^1 \right) \right) \tag{17} $$
Careful inspection of the APPs in (16) and (17) shows that many of the
calculations can be done in parallel. Furthermore, some can be viewed as
"parity node" computations and others as "bit node" computations; for example,
for $b = 1$, $p(x_j = 1 \mid S_j, r)$ is proportional to
$$ \overbrace{p(r_j \mid x_j = 1)}^{\text{prior}} \; \underbrace{\prod_{m \in M(j)} \overbrace{\frac{1}{2}\left( 1 - \prod_{n' \in N(m)\setminus j} \left( q_{mn'}^0 - q_{mn'}^1 \right) \right)}^{\text{parity node}}}_{\text{bit node}}. \tag{18} $$
The notation can be simplified by letting $q_{mj} = q_{mj}^0 - q_{mj}^1$ and
defining the parity check computations
$$ r_{mj}^0 = \frac{1}{2}\left( 1 + \prod_{n' \in N(m)\setminus j} q_{mn'} \right) $$
$$ r_{mj}^1 = \frac{1}{2}\left( 1 - \prod_{n' \in N(m)\setminus j} q_{mn'} \right). \tag{19} $$
With this new notation, we obtain the message passing algorithm given in
Appendix A.1.
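In code, the parity-node expressions (19) amount to one product per edge of a check. A sketch of the update for a single check $m$, assuming `q0[n]` and `q1[n]` hold the current bit-to-check probabilities for the bits $n \in N(m)$:

```python
import numpy as np

def check_node_update(q0, q1, N_m):
    # Return r0[j], r1[j] per (19) for each bit j participating in check m
    dq = {n: q0[n] - q1[n] for n in N_m}       # q_{mn} = q^0_{mn} - q^1_{mn}
    r0, r1 = {}, {}
    for j in N_m:
        prod = np.prod([dq[n] for n in N_m if n != j])
        r0[j] = 0.5 * (1 + prod)
        r1[j] = 0.5 * (1 - prod)
    return r0, r1

# Example on check 0 of (1), which covers bits {0, 3, 4, 5, 6}
q0 = {n: 0.9 for n in [0, 3, 4, 5, 6]}         # fairly confident zeros
q1 = {n: 1 - q0[n] for n in q0}
r0, r1 = check_node_update(q0, q1, [0, 3, 4, 5, 6])
print(r0[0])   # 0.5*(1 + 0.8^4) ~ 0.70 > 1/2: the others' parity favors x0 = 0
```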
For the BSC, the expressions in (16) and (17) can be simplified a great deal.
We begin by fleshing out what happens in the first iteration of the decoder.
The probabilities $q_{mj}^0$ and $q_{mj}^1$ are initialized as
$$ q_{mj}^0 = p(r_j \mid x_j = 0) = p^{r_j} (1-p)^{1-r_j} \tag{20} $$
$$ q_{mj}^1 = p(r_j \mid x_j = 1) = p^{1-r_j} (1-p)^{r_j} \tag{21} $$
Depending on the value of the received bit $r_j$, the initialized $q_{mj}^b$'s
take on only one of two values. The expressions in (19) can be simplified as
follows. First recognize that if $r_j = 0$
$$ q_{mj}^0 - q_{mj}^1 = p^{r_j} (1-p)^{1-r_j} - p^{1-r_j} (1-p)^{r_j} \tag{22} $$
$$ = 1 - 2p \tag{23} $$
and if $r_j = 1$, $q_{mj}^0 - q_{mj}^1 = -(1 - 2p)$. This implies that (19)
can be rewritten
as
$$ r_{mj}^0 = \frac{1}{2}\left( 1 + \prod_{n' \in N(m)\setminus j} (1 - 2p)(-1)^{r_{n'}} \right) = \frac{1}{2}\left( 1 + (1 - 2p)^{|N(m)|-1} \prod_{n' \in N(m)\setminus j} (-1)^{r_{n'}} \right) $$
$$ r_{mj}^1 = \frac{1}{2}\left( 1 - (1 - 2p)^{|N(m)|-1} \prod_{n' \in N(m)\setminus j} (-1)^{r_{n'}} \right) \tag{24} $$
Note that the product on the RHS of both of these terms evaluates to either
$+1$ or $-1$ depending on the number of 1's in the set $N(m)\setminus j$, and
is very easy to compute given the value of the parity check computed for node
$m$. Finally, $r_{mj}^0$ and $r_{mj}^1$ take only one of two values, making
this easy to implement in hardware, with no real calculations required. Namely,
$$ r_{mj}^0 = \begin{cases} \frac{1}{2}\left( 1 - (1-2p)^{|N(m)|-1} \right) & \text{if the parity of the bits with indices in } N(m)\setminus j \text{ is odd} \\ \frac{1}{2}\left( 1 + (1-2p)^{|N(m)|-1} \right) & \text{if the parity of the bits with indices in } N(m)\setminus j \text{ is even} \end{cases} $$
$$ r_{mj}^1 = \begin{cases} \frac{1}{2}\left( 1 - (1-2p)^{|N(m)|-1} \right) & \text{if that parity is even} \\ \frac{1}{2}\left( 1 + (1-2p)^{|N(m)|-1} \right) & \text{if that parity is odd} \end{cases} \tag{25} $$
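On the BSC these messages collapse to the two constants above; a sketch per (24), assuming hard received bits $r$ and crossover probability $p$ (the check and values are illustrative):

```python
import numpy as np

def bsc_check_message(r, N_m, j, p):
    # First-iteration r^0_{mj}, r^1_{mj} on the BSC, per (24)
    others = [n for n in N_m if n != j]
    sign = (-1) ** int(sum(r[n] for n in others) % 2)   # +1 even, -1 odd parity
    mag = (1 - 2 * p) ** (len(N_m) - 1)
    r0 = 0.5 * (1 + sign * mag)
    return r0, 1 - r0

r = {0: 1, 3: 0, 4: 1, 5: 0, 6: 0}     # hard decisions on check 0's bits
print(bsc_check_message(r, [0, 3, 4, 5, 6], 0, p=0.05))
```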
Proceeding further, the APPs can be simplified as
$$ p(x_j = 0 \mid S_j, r) = p(r_j \mid x_j = 0) \prod_{m \in M(j)} \frac{1}{2}\left( 1 \pm (1-2p)^{|N(m)|-1} \right) $$
$$ p(x_j = 1 \mid S_j, r) = p(r_j \mid x_j = 1) \prod_{m \in M(j)} \frac{1}{2}\left( 1 \mp (1-2p)^{|N(m)|-1} \right) \tag{26} $$
which reduces to
$$ p(x_j = 0 \mid S_j, r) = f_j^0 \prod_{m \in M^{odd}(j)} \frac{1}{2}\left( 1 - (1-2p)^{|N(m)|-1} \right) \prod_{m \in M^{even}(j)} \frac{1}{2}\left( 1 + (1-2p)^{|N(m)|-1} \right) $$
$$ p(x_j = 1 \mid S_j, r) = f_j^1 \prod_{m \in M^{odd}(j)} \frac{1}{2}\left( 1 + (1-2p)^{|N(m)|-1} \right) \prod_{m \in M^{even}(j)} \frac{1}{2}\left( 1 - (1-2p)^{|N(m)|-1} \right) \tag{27} $$
where $f_j^b = p(r_j \mid x_j = b)$, $M^{odd}(j)$ is the set of parity nodes
connected to $x_j$ whose other bits have odd parity, and $M^{even}(j)$ is the
set of those whose other bits have even parity.
Finally, the first iteration calculations, which are extremely easy to perform
given the hard-decision results of the parity checks, are (for a check-regular
code, so that $|N(m)|$ is the same for every check):
$$ p(x_j = 0 \mid S_j, r) = f_j^0 \left[ \frac{1}{2}\left( 1 - (1-2p)^{|N(m)|-1} \right) \right]^{|M^{odd}(j)|} \left[ \frac{1}{2}\left( 1 + (1-2p)^{|N(m)|-1} \right) \right]^{|M^{even}(j)|} $$
$$ p(x_j = 1 \mid S_j, r) = f_j^1 \left[ \frac{1}{2}\left( 1 + (1-2p)^{|N(m)|-1} \right) \right]^{|M^{odd}(j)|} \left[ \frac{1}{2}\left( 1 - (1-2p)^{|N(m)|-1} \right) \right]^{|M^{even}(j)|} \tag{28} $$
1.5 Message passing algorithm in the log domain

Next we compute
$\mathrm{LLR}^{posterior}(x_j) = \log \frac{p(x_j = 0 \mid r, S_j)}{p(x_j = 1 \mid r, S_j)}$.
Before taking the log of the expressions in (16) and (17) we note the
following fact:
$$ \prod_i a_i = \left( \prod_i \mathrm{sgn}(a_i) \right) \exp\left( \sum_i \log |a_i| \right) \tag{29} $$
and the following definition:
$$ \tanh\left( \frac{x}{2} \right) = \frac{e^x - 1}{e^x + 1}. \tag{30} $$
First,
$$ \mathrm{LLR}^{posterior}(x_j) = \log \frac{p(x_j = 0 \mid r, S_j)}{p(x_j = 1 \mid r, S_j)} = \log \frac{p(r_j \mid x_j = 0)}{p(r_j \mid x_j = 1)} + \sum_{m \in M(j)} \log \frac{1 + \prod_{n' \in N(m)\setminus j} \left( q_{mn'}^0 - q_{mn'}^1 \right)}{1 - \prod_{n' \in N(m)\setminus j} \left( q_{mn'}^0 - q_{mn'}^1 \right)} \tag{31} $$
Letting $q_{mn'} = q_{mn'}^0 - q_{mn'}^1$ and
$\mathrm{LLR}(q_{mn'}) = \log \frac{q_{mn'}^0}{q_{mn'}^1}$, simple substitution
gives $q_{mn'} = \tanh\left( \mathrm{LLR}(q_{mn'}) / 2 \right)$. Noting also
that for the Gaussian channel (6) the first term evaluates to
$\log \frac{p(r_j \mid x_j = 0)}{p(r_j \mid x_j = 1)} = -\frac{2 r_j}{\sigma^2}$
under the $0 \to -1$ mapping, one can write (31) as
$$ \begin{aligned} \mathrm{LLR}^{posterior}(x_j) &= -\frac{2 r_j}{\sigma^2} + \sum_{m \in M(j)} \log \frac{1 + s_{mj}\, e^{A_{mj}}}{1 - s_{mj}\, e^{A_{mj}}} \\ &= -\frac{2 r_j}{\sigma^2} - \sum_{m \in M(j)} \log \frac{1 - s_{mj}\, e^{A_{mj}}}{1 + s_{mj}\, e^{A_{mj}}} \end{aligned} \tag{32} $$
where
$$ s_{mj} = \prod_{n' \in N(m)\setminus j} \mathrm{sgn}(q_{mn'}) = \prod_{n' \in N(m)\setminus j} \mathrm{sgn}\left( \mathrm{LLR}(q_{mn'}) \right) $$
$$ A_{mj} = \sum_{n' \in N(m)\setminus j} \log\left( \left| \tanh\left( \frac{\mathrm{LLR}(q_{mn'})}{2} \right) \right| \right) \tag{33} $$
Since each factor $|\tanh(\cdot)| < 1$ we have $A_{mj} \le 0$ and
$e^{A_{mj}} < 1$, so the argument of the log in (32) is always positive.
Equation (32) can be simplified further: for $s_{mj} = +1$ the summand equals
$-\log |\tanh(A_{mj}/2)|$, while for $s_{mj} = -1$ it equals
$+\log |\tanh(A_{mj}/2)|$, so
$$ \mathrm{LLR}^{posterior}(x_j) = -\frac{2 r_j}{\sigma^2} - \sum_{m \in M(j)} s_{mj} \log\left| \tanh\left( \frac{A_{mj}}{2} \right) \right|. \tag{34} $$
Letting $\phi(x) = \log\left| \tanh\left( \frac{x}{2} \right) \right|$, (34)
becomes
$$ \mathrm{LLR}^{posterior}(x_j) = -\frac{2 r_j}{\sigma^2} - \sum_{m \in M(j)} s_{mj}\, \phi(A_{mj}) \tag{35} $$
where
$$ A_{mj} = \sum_{n' \in N(m)\setminus j} \log\left( \left| \tanh\left( \frac{\mathrm{LLR}(q_{mn'})}{2} \right) \right| \right) = \sum_{n' \in N(m)\setminus j} \phi\left( \mathrm{LLR}(q_{mn'}) \right) \tag{36} $$
This leads to the log-domain decoding algorithm given in Section ZZZ. For the
binary symmetric channel we again compute
$\mathrm{LLR}^{posterior}(x_j) = \log \frac{p(x_j = 0 \mid r, S_j)}{p(x_j = 1 \mid r, S_j)}$,
and the first iteration calculations are easy to perform given the
hard-decision results of the parity checks:
$$ \mathrm{LLR}^{posterior}(x_j) = \log \frac{f_j^0 \left( 1 - (1-2p)^{|N(m)|-1} \right)^{|M^{odd}(j)|} \left( 1 + (1-2p)^{|N(m)|-1} \right)^{|M^{even}(j)|}}{f_j^1 \left( 1 + (1-2p)^{|N(m)|-1} \right)^{|M^{odd}(j)|} \left( 1 - (1-2p)^{|N(m)|-1} \right)^{|M^{even}(j)|}} \tag{37} $$
$$ \begin{aligned} &= \log\left( \frac{f_j^0}{f_j^1} \right) + |M^{odd}(j)| \log\left( 1 - (1-2p)^{|N(m)|-1} \right) + |M^{even}(j)| \log\left( 1 + (1-2p)^{|N(m)|-1} \right) \\ &\quad - |M^{odd}(j)| \log\left( 1 + (1-2p)^{|N(m)|-1} \right) - |M^{even}(j)| \log\left( 1 - (1-2p)^{|N(m)|-1} \right) \end{aligned} \tag{38} $$
and
$$ \begin{aligned} \mathrm{LLR}^{posterior}(x_j) &= \log\left( \frac{f_j^0}{f_j^1} \right) + |M^{odd}(j)| \left[ \log\left( 1 - (1-2p)^{|N(m)|-1} \right) - \log\left( 1 + (1-2p)^{|N(m)|-1} \right) \right] \\ &\quad + |M^{even}(j)| \left[ \log\left( 1 + (1-2p)^{|N(m)|-1} \right) - \log\left( 1 - (1-2p)^{|N(m)|-1} \right) \right]. \end{aligned} \tag{39} $$
Let
$$ C_j^1 = \log\left( 1 - (1-2p)^{|N(m)|-1} \right) - \log\left( 1 + (1-2p)^{|N(m)|-1} \right) \tag{40} $$
which can be precomputed based on the value of $p$ and, for a check-regular
code, is independent of $j$. Recalling (7) and (8),
$\log(f_j^0 / f_j^1) = (1 - 2 r_j) \log\left( \frac{1-p}{p} \right)$, so
$$ \mathrm{LLR}^{posterior}(x_j) = (1 - 2 r_j) \log\left( \frac{1-p}{p} \right) + C_j^1 \left( |M^{odd}(j)| - |M^{even}(j)| \right) \tag{41} $$
Thus the first iteration requires only a parity check calculation and the
evaluation of two constants that depend on $p$.
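The whole first BSC iteration per bit thus reduces to a parity count and two precomputed constants. A sketch of (41) ($p$, $H$, and the received word are illustrative; evaluating $C^1$ per check also covers irregular rows):

```python
import numpy as np

def first_iteration_llr(H, r, p):
    # First-iteration LLR^posterior(x_j) on the BSC, following (38)-(41)
    r = np.asarray(r, dtype=int)
    w = H.sum(axis=1)                          # row weights |N(m)|
    syndrome = (H.astype(int) @ r) % 2
    llr = (1 - 2 * r) * np.log((1 - p) / p)    # log(f_j^0 / f_j^1)
    for j in range(H.shape[1]):
        for m in np.flatnonzero(H[:, j]):
            t = (1 - 2 * p) ** (w[m] - 1)
            C1 = np.log(1 - t) - np.log(1 + t) # C^1 of (40), per check
            odd = (syndrome[m] + r[j]) % 2     # parity of N(m) \ j
            llr[j] += C1 if odd else -C1
    return llr

H = np.array([[1,0,0,1,1,1,1], [0,1,0,1,0,1,0],
              [0,1,1,0,0,0,0], [1,0,1,0,1,0,1]], dtype=np.uint8)
r = np.array([1, 1, 1, 1, 0, 1, 0])   # codeword 1111000 with bit 5 flipped
print(first_iteration_llr(H, r, p=0.05))
```

Note that in this tiny $t = 2$ example bits 3 and 5 share both of their checks, so a single iteration gives them identical evidence; the sketch illustrates the arithmetic of (41) rather than the decoding power of the code.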
1.5.1 Probability domain decoding algorithm

0) Initialization. The variables $q_{mj}^b$ are initialized to

for $j = 0, \ldots, N-1$ and all $m \in M(j)$
  $q_{mj}^0 = f_j^0 = \frac{1}{(2\pi\sigma^2)^{1/2}} \exp\left( -\frac{(r_j + 1)^2}{2\sigma^2} \right)$
  $q_{mj}^1 = f_j^1 = \frac{1}{(2\pi\sigma^2)^{1/2}} \exp\left( -\frac{(r_j - 1)^2}{2\sigma^2} \right)$
end

1) Parity node updates. Let $q_{mj} = q_{mj}^0 - q_{mj}^1$.

for $j = 0, \ldots, N-1$
  for $m \in M(j)$
    $r_{mj} = \prod_{n' \in N(m)\setminus j} q_{mn'}$
    $r_{mj}^0 = \frac{1}{2}(1 + r_{mj})$
    $r_{mj}^1 = \frac{1}{2}(1 - r_{mj})$
  end
end

2) Bit node updates. The constants $\alpha_{mj}$ and $\alpha_j$ are chosen to
make $q_{mj}^0 + q_{mj}^1 = 1$ and $q_j^0 + q_j^1 = 1$.

for $j = 0, \ldots, N-1$
  for $m \in M(j)$
    $q_{mj}^0 = \alpha_{mj}\, f_j^0 \prod_{m' \in M(j)\setminus m} r_{m'j}^0$
    $q_{mj}^1 = \alpha_{mj}\, f_j^1 \prod_{m' \in M(j)\setminus m} r_{m'j}^1$
  end
end

The pseudo-posteriors $q_j^b$ are updated as:

for $j = 0, \ldots, N-1$
  $q_j^0 = \alpha_j\, f_j^0 \prod_{m' \in M(j)} r_{m'j}^0$
  $q_j^1 = \alpha_j\, f_j^1 \prod_{m' \in M(j)} r_{m'j}^1$
end

3) Verify parity checks.

for $j = 0, \ldots, N-1$
  if $q_j^0 > 0.5$ then $\hat{x}_j = 0$
  else $\hat{x}_j = 1$
end

Does $\hat{x} H^T = 0$? If yes, done.

4) If done, quit. If not, go to 1).
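The pseudo-code above translates almost line for line into numpy. The following is a sketch (the flooding schedule and normalizations follow steps 0)-4); messages are stored as dense arrays for clarity, which is wasteful for a genuinely sparse $H$):

```python
import numpy as np

def decode_prob(H, r, sigma, max_iter=50):
    # Probability-domain message passing for the AWGN model (0 -> -1, 1 -> +1)
    Mc, N = H.shape
    mask = H.astype(bool)
    # 0) Initialization; the Gaussian normalization constant cancels in step 2
    f0 = np.exp(-(r + 1) ** 2 / (2 * sigma ** 2))
    f1 = np.exp(-(r - 1) ** 2 / (2 * sigma ** 2))
    q0, q1 = mask * f0, mask * f1
    x_hat = np.zeros(N, dtype=np.uint8)
    for _ in range(max_iter):
        # 1) Parity node updates, eq. (19)
        dq = q0 - q1
        r0, r1 = np.zeros((Mc, N)), np.zeros((Mc, N))
        for m in range(Mc):
            idx = np.flatnonzero(mask[m])
            for j in idx:
                prod = np.prod(dq[m, idx[idx != j]])
                r0[m, j], r1[m, j] = 0.5 * (1 + prod), 0.5 * (1 - prod)
        # 2) Bit node updates, with alpha chosen so that q^0 + q^1 = 1
        Q0, Q1 = np.empty(N), np.empty(N)
        for j in range(N):
            idx = np.flatnonzero(mask[:, j])
            for m in idx:
                a = f0[j] * np.prod(r0[idx[idx != m], j])
                b = f1[j] * np.prod(r1[idx[idx != m], j])
                q0[m, j], q1[m, j] = a / (a + b), b / (a + b)
            A = f0[j] * np.prod(r0[idx, j])    # pseudo-posteriors q_j^b
            B = f1[j] * np.prod(r1[idx, j])
            Q0[j], Q1[j] = A / (A + B), B / (A + B)
        # 3) Hard decisions and parity verification
        x_hat = (Q0 <= 0.5).astype(np.uint8)
        if not np.any((H @ x_hat) % 2):
            break                              # 4) all checks satisfied
    return x_hat
```

With the example matrix of (1) and $r$ generated as in the channel sketch of Section 1.2, decode_prob typically returns the transmitted codeword at moderate noise levels, with the caveat that so small and cycle-heavy a graph only approximates the APPs.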
1.5.2 Log domain decoding algorithm

Let $\phi(x) = \log\left| \tanh\left( \frac{x}{2} \right) \right|$,
$s_{mj} = \prod_{n' \in N(m)\setminus j} \mathrm{sgn}\left( \mathrm{LLR}(q_{mn'}) \right)$,
and $A_{mj} = \sum_{n' \in N(m)\setminus j} \phi\left( \mathrm{LLR}(q_{mn'}) \right)$.

0) Initialization. The variables $\mathrm{LLR}(q_{mj})$ are initialized to

for $j = 0, \ldots, N-1$ and all $m \in M(j)$
  $\mathrm{LLR}(q_{mj}) = \log\left( \frac{q_{mj}^0}{q_{mj}^1} \right) = -\frac{2 r_j}{\sigma^2}$
end

1) Parity node updates.

for $j = 0, \ldots, N-1$
  for $m \in M(j)$
    $R_{mj} = -s_{mj}\, \phi(A_{mj})$
  end
end

2) Bit node updates.

for $j = 0, \ldots, N-1$
  $\mathrm{LLR}(q_j) = -\frac{2 r_j}{\sigma^2} + \sum_{m \in M(j)} R_{mj}$
end

The $\mathrm{LLR}(q_{mj})$'s are updated as:

for $j = 0, \ldots, N-1$
  for $m \in M(j)$
    $\mathrm{LLR}(q_{mj}) = \mathrm{LLR}(q_j) - R_{mj}$
  end
end

3) Verify parity checks.

for $j = 0, \ldots, N-1$
  if $\mathrm{LLR}(q_j) > 0$ then $\hat{x}_j = 0$
  else $\hat{x}_j = 1$
end

Does $\hat{x} H^T = 0$? If yes, done.

4) If done, quit. If not, go to 1).
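The log-domain algorithm in the same style (a sketch; `phi` is the function $\phi$ defined above, guarded against $\log 0$, and the channel LLR $-2r_j/\sigma^2$ matches the initialization step):

```python
import numpy as np

def phi(x):
    # phi(x) = log|tanh(x/2)|, guarded against the singularity at x = 0
    return np.log(np.abs(np.tanh(np.asarray(x) / 2)) + 1e-300)

def decode_log(H, r, sigma, max_iter=50):
    Mc, N = H.shape
    mask = H.astype(bool)
    L = -2 * r / sigma ** 2                  # channel LLRs (0 -> -1 mapping)
    Lq = mask * L                            # LLR(q_mj) on the edges
    x_hat = np.zeros(N, dtype=np.uint8)
    for _ in range(max_iter):
        # 1) Parity node updates: R_mj = -s_mj * phi(A_mj)
        R = np.zeros((Mc, N))
        for m in range(Mc):
            idx = np.flatnonzero(mask[m])
            for j in idx:
                others = idx[idx != j]
                s = np.prod(np.sign(Lq[m, others]))
                R[m, j] = -s * phi(np.sum(phi(Lq[m, others])))
        # 2) Bit node updates: posterior LLRs, then extrinsic messages
        Lpost = L + np.array([R[mask[:, j], j].sum() for j in range(N)])
        for j in range(N):
            for m in np.flatnonzero(mask[:, j]):
                Lq[m, j] = Lpost[j] - R[m, j]
        # 3) Hard decisions (LLR > 0 decides 0) and parity verification
        x_hat = (Lpost <= 0).astype(np.uint8)
        if not np.any((H @ x_hat) % 2):
            break                            # 4) all checks satisfied
    return x_hat
```

Both decoders implement the same schedule; in practice the log-domain version is preferred because the products become sums and the messages are numerically better behaved.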