
ON THE THEORY OF LOW-DENSITY CONVOLUTIONAL CODES I

Abstract – In this paper, which is the first part in a series of three, a formal theory and construction methods for low-density convolutional (LDC) codes (including turbo-codes) are presented. The principles of iterative decoding of LDC codes are described. Some simulation results are also presented. In the following two parts of this work the authors intend to do a statistical analysis of LDC codes and a

1. INTRODUCTION

codes was developed in [2-6]. These codes form a large family of convolutional codes that includes, in particular, the well known turbo-codes [7]. In this paper, we define these codes and describe a way to construct them. Then, a principle of correct iterative decoding is presented.

¹ The material in this paper was presented in part at the Sixth International Workshop on Algebraic and Combinatorial Coding Theory, Pskov, Russia, September 6-12, 1998.

graph, analogously to Tanner [8]. At the end of the paper, simulation results are

Markov equations, will be studied. For these codes a lower bound on the free distance, and upper bounds on the decoding error probability when maximum likelihood

the third paper. This is the most complicated part of the theory of LDC codes.

been done by Zyablov and Pinsker [9] and by Margulis [10]. In the first one, it was proved that there exist low-density block codes such that, with iterative decoding, after transmission over a binary symmetric channel, the number of errors that can be corrected increases linearly with the block length. We can understand the importance of this result if we consider that the complexity, per information symbol, of iterative decoding is practically independent of the block length. In the second one, a method was presented for constructing low-density block codes where the number of correctable errors under iterative decoding grows linearly with the block length.

First we will give the definition of a low-density block code. Let v[0,N ) =

block code of transmission rate R = K/N, where K and N are positive integers.


Definition 1 A rate R, length N, binary block code defined by its transposed parity-check matrix H^T is called a low-density code if its rows h_n satisfy

wH(h_n) ≪ N − K,   (1)

where wH(·) denotes the Hamming weight. If all rows of H^T have the same weight and the same applies for all columns, then the code is called a homogeneous low-density code. The parity-check matrices do not necessarily have full rank. In addition to homogeneous codes we will also consider semi-homogeneous low-density codes, for example when all even rows of H^T have weight ν, all odd rows have weight ν − 1, and all columns have the same weight.

and

the general case time-varying, rate R = b/c, b < c, convolutional encoder. Let

        (  ..                                                        )
        (  ... H0^T(−1)   H1^T(0)   ...   Hms^T(ms − 1)   ...       )
H^T =   (                                                           ),   (4)
        (      ... H0^T(0)   H1^T(1)   ...   Hms^T(ms)   ...        )
        (                                                   ..      )

where Hi^T(t), i = 0, 1, ..., ms, are binary c × (c − b) submatrices, be the transposed infinite parity-check matrix of the convolutional code, also called the syndrome former. The value ms is the syndrome former memory. We suppose that two conditions are fulfilled:

(a)

which is fulfilled by the last (c − b) rows of H0^T(t) being linearly independent;

(b)

Hms^T(t) ≠ 0,   t ∈ Z.   (6)

Then the equation vH^T = 0, which defines the code words v, can be written in the form

v_t H0^T(t) + v_{t−1} H1^T(t) + ... + v_{t−ms} Hms^T(t) = 0.   (7)

Equation (7) can be used to define the encoder. Particularly, if the first b symbols of each c-tuple v_t are information symbols, and the other (c − b) symbols are defined as in (7), the encoder is a systematic encoder.
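As an illustration of the recursion (7), the sketch below performs systematic encoding with a hypothetical toy syndrome former (c = 2, b = 1, ms = 1; the submatrices H0 and H1 are illustrative choices, not taken from the paper). Because the last (c − b) rows of H0^T are linearly independent, (7) can be solved for the parity symbol at each time step.

```python
import numpy as np

# Hypothetical toy syndrome former: c = 2, b = 1, ms = 1.
# The last (c - b) = 1 row of H0 is nonzero, and H1 (= H_ms) is nonzero,
# matching conditions (5) and (6).
H0 = np.array([[1], [1]])  # c x (c - b)
H1 = np.array([[1], [0]])

def encode(u):
    """Systematic encoding from v_t H0^T + v_{t-1} H1^T = 0 (mod 2):
    each output pair is (information bit u_t, parity bit p_t)."""
    v_prev = np.zeros(2, dtype=int)
    out = []
    for ut in u:
        # Contribution of the already-known symbols to the parity-check equation:
        s = (ut * H0[0, 0] + v_prev @ H1[:, 0]) % 2
        vt = np.array([ut, s])  # since H0[1, 0] = 1, the parity bit equals s
        out.extend(int(x) for x in vt)
        v_prev = vt
    return out
```

For this particular choice the recursion reduces to p_t = u_t + u_{t−1} (mod 2).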

In principle, condition (6) in the definition of the syndrome former can be omitted. Then the syndrome former memory would not be defined and we would have to

Definition 2 The rate R = b/c convolutional code defined by its infinite syndrome former H^T is called a low-density convolutional (LDC) code if

wH(h_n) ≪ (c − b) ms,   n ∈ Z.   (9)

If all rows of the syndrome former have the same Hamming weight and the same applies for all columns, then the LDC code is called homogeneous.
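As a quick sanity check, the homogeneity condition of Definition 2 can be tested on a finite window of a candidate syndrome former. The function below is a sketch: rows and columns near the window edge are truncated, so for a real infinite syndrome former the check is only indicative.

```python
import numpy as np

def is_homogeneous(HT):
    """Check homogeneity (all row weights equal, all column weights equal)
    on a finite 0/1 window of a syndrome former; edge truncation makes
    this only indicative for an infinite matrix."""
    row_w = HT.sum(axis=1)
    col_w = HT.sum(axis=0)
    return len(set(row_w.tolist())) == 1 and len(set(col_w.tolist())) == 1
```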

As in the block code case, we will also consider semi-homogeneous LDC codes. A homogeneous, rate R, LDC code with syndrome former memory ms, for which each row in H^T has weight ν and each column has weight ν/(1 − R), where ν/(1 − R)

matrix of a rate R = 1/2 block code, having one one in each row and two ones in each column, is cut as shown in Figure 1a. Then the lower part is moved, as shown in Figure 1b, resulting in an infinite matrix that has one one in each row and two ones in each column. This matrix does not satisfy the syndrome former conditions (5) and (6). If we append the submatrix (11)^T at the left of each pair of rows, as shown in Figure 1c, we get the syndrome former of a homogeneous LDC code, having two ones in each row and four ones in each column, satisfying condition (5), but still not condition (6). To do so, we append the submatrix (01)^T at the right of each pair of rows, as shown in Figure 1d, and get the syndrome former of a semi-homogeneous LDC (2, 2.5, 5)-code, having two ones in each even row, three ones in each odd row, and five ones in each column.

Obviously, if the parity-check matrix of the initial block code had two ones in each even row, one one in each odd row, and three ones in each column, the method

an alternative way to construct LDC codes that will allow us to create, not only

3. TURBO-CODES

Definition 3 An infinite matrix S = (s_{i,j}), i, j ∈ Z, s_{i,j} ∈ {0, 1}, that has one one in each row, one one in each column, and satisfies the causality condition

The identity scrambler has ones only along the diagonal, i.e. s_{i,i} = 1, i ∈ Z. For the case s_{i+T,j+T} = s_{i,j}, i, j ∈ Z, for some T, the scrambler is called periodical with period T. Most practical scramblers are periodical. Particularly, the standard n × n block interleaver can be represented

Definition 4 An infinite matrix S = (s_{i,j}), i, j ∈ Z, s_{i,j} ∈ {0, 1}, that has one one in each column and at least one one in each row, and that satisfies the causality

Generally speaking, multiple convolutional scramblers not only permute the input symbols, but also make copies of them. As a natural generalization of the

condition

as entries, then each column of this scrambler has exactly one one, and the rows will have d/c ones on average. This scrambler maps input sequences consisting of c-tuples onto output sequences consisting of d-tuples. The ratio RS = d/c is called the rate of the scrambler. If all rows have the same number of ones, then the scrambler is called homogeneous.

Definition 5 Let Θ be the set of sub-matrices S_{k,l} of S such that S_{k,l} ≠ 0. Then the delay of the scrambler is

δ = max_{k,l: S_{k,l} ∈ Θ} (l − k)   (12)

and the size of the scrambler is

Σ = max_p Σ_{k<p} Σ_{l≥p} wH(S_{k,l}),   (13)

where wH(S_{k,l}) denotes the Hamming weight of (the number of ones in) the sub-matrix S_{k,l}.

in theory (see the second part of this work), we are in practical situations only interested in periodical scramblers having finite delay. The size of the scrambler is equal to the maximal number of input symbols that the scrambler must keep in its memory. We will consider mainly homogeneous scramblers, with the additional condition that the diagonal sub-matrices S_{k,k} all have the same Hamming weight. Such scramblers always keep the same number of input symbols in their memory, i.e. Σ_{k<p} Σ_{l≥p} wH(S_{k,l}) is constant for all p.
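For a periodical scrambler, δ and Σ can be evaluated on one period of the nonzero sub-matrix positions. The sketch below takes a hypothetical finite window as input, given as a dict {(k, l): wH(S_{k,l})}, and evaluates (12) and (13) over it.

```python
def delay_and_size(entries):
    """Delay (12) and size (13) of a scrambler, computed from a finite
    window of nonzero sub-matrix positions {(k, l): Hamming weight}.
    (Hypothetical toy input; the true quantities are maxima over the
    infinite matrix.)"""
    delta = max(l - k for (k, l) in entries)
    lo = min(min(k, l) for (k, l) in entries)
    hi = max(max(k, l) for (k, l) in entries)
    # Size: maximum over cut positions p of the total weight of blocks
    # that start before p and end at or after p.
    sigma = max(
        sum(w for (k, l), w in entries.items() if k < p <= l)
        for p in range(lo, hi + 2)
    )
    return delta, sigma
```

For a purely diagonal (identity-like) window the size is 0, since no block straddles any cut.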

Now we will define two operations on matrices: column interleaving and row-column interleaving.

Definition 6 The column interleaving S of two infinite matrices S^(1) = (S^(1)_{k,l}) and S^(2) = (S^(2)_{k,l}), where S^(i)_{k,l}, i = 1, 2, are c × d^(i) sub-matrices, is defined by

S_{k,2l} = S^(1)_{k,l},   S_{k,2l+1} = S^(2)_{k,l}.   (14)

Since the entries S_{k,2l} and S_{k,2l+1} of the matrix S have sizes c × d^(1) and c × d^(2) respectively, we can merge these two entries into one of size c × (d^(1) + d^(2)). The column interleaving of two scramblers of rates d^(1)/c and d^(2)/c gives a rate (d^(1) + d^(2))/c scrambler.
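Definition 6 admits a finite sketch in which the infinite matrices are replaced by hypothetical K × L grids of sub-matrix blocks; merging the two blocks of each pair is then a horizontal stacking step.

```python
import numpy as np

def column_interleave(S1, S2):
    """Column interleaving (14) on finite block matrices: S1, S2 are
    K x L grids of c x d1 and c x d2 sub-matrices.  S1's blocks go to
    even block-columns, S2's to odd ones, and each pair is merged into
    one c x (d1 + d2) block (a finite stand-in for the infinite case)."""
    K, L = len(S1), len(S1[0])
    return [
        [np.hstack([S1[k][l], S2[k][l]]) for l in range(L)]
        for k in range(K)
    ]
```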

Definition 7 The row-column interleaving S of two infinite matrices S^(1) = (S^(1)_{k,l}) and S^(2) = (S^(2)_{k,l}), where S^(i)_{k,l}, i = 1, 2, are c^(i) × d^(i) sub-matrices, is defined by

S_{2k,2l} = S^(1)_{k,l},   S_{2k,2l+1} = 0_{(c^(1) × d^(2))},
S_{2k+1,2l} = 0_{(c^(2) × d^(1))},   S_{2k+1,2l+1} = S^(2)_{k,l}.   (15)

Here 0_{(c×d)} means the all-zero c × d matrix. If we merge these four sub-matrices, we can regard S as a matrix with entries of size (c^(1) + c^(2)) × (d^(1) + d^(2)). The row-column interleaving of two scramblers of rates d^(1)/c^(1) and d^(2)/c^(2) gives a rate (d^(1) + d^(2))/(c^(1) + c^(2)) scrambler. The generalization of Definitions 6 and 7 to more than two matrices is straightforward.
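Definition 7 admits the same kind of finite sketch: each merged entry of S is the block-diagonal of the corresponding entries of S^(1) and S^(2) (hypothetical finite grids again), which makes the rate (d^(1) + d^(2))/(c^(1) + c^(2)) visible directly from the block shapes.

```python
import numpy as np

def row_column_interleave(S1, S2):
    """Row-column interleaving (15) on finite block matrices: each
    merged entry is the block-diagonal of S1[k][l] (c1 x d1) and
    S2[k][l] (c2 x d2), i.e. the four sub-matrices of (15) merged into
    one (c1 + c2) x (d1 + d2) block."""
    K, L = len(S1), len(S1[0])
    out = []
    for k in range(K):
        row = []
        for l in range(L):
            A, B = np.asarray(S1[k][l]), np.asarray(S2[k][l])
            top = np.hstack([A, np.zeros((A.shape[0], B.shape[1]), dtype=A.dtype)])
            bot = np.hstack([np.zeros((B.shape[0], A.shape[1]), dtype=B.dtype), B])
            row.append(np.vstack([top, bot]))
        out.append(row)
    return out
```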

Our definition of a turbo-code is different from the definition in [7], and includes the codes therein as a special case. If RS = d/c is the rate of the scrambler and Rb is the rate of the basic code, then

R = 1 − RS (1 − Rb) = b/c   (16)

is the rate of the turbo-code. We note that to fulfill condition (9) of LDC codes it is sufficient that the delay δ (if it is finite) of the scrambler S is much larger than the syndrome former memory, mb, of the basic code. If the delay δ is infinite, it

memory. This means that the number of input symbols the scrambler keeps in its memory is much larger than the memory of the basic convolutional code. So, the

codes.
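The rate relation (16) can be checked numerically. For instance, the rate 4/3 scrambler and the rate 2/4 basic code of the example later in this section combine to the rate 1/3 turbo-code.

```python
from fractions import Fraction

def turbo_rate(R_S, R_b):
    """Turbo-code rate from (16): R = 1 - R_S * (1 - R_b)."""
    return 1 - R_S * (1 - R_b)

# Example from the text: rate 4/3 scrambler, rate 2/4 basic code.
R = turbo_rate(Fraction(4, 3), Fraction(2, 4))  # the rate 1/3 turbo-code
```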

matrix we choose a 5 × 5 matrix having one one in each column and row, Figure 2a. Then, analogously to Figure 1b, by "unwrapping" the lower part of the matrix, we get the matrix shown in Figure 2b. To avoid direct mapping of input symbols to the output of the scrambler, we add zeros on the left side, hence s_{i,i} = 0, i ∈ Z. The scrambler has delay δ = 5 and size Σ = 3. Column interleaving of the identity scrambler and the scrambler of Figure 2b gives the scrambler

    ( ... 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 ... )
    ( ... 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 ... )
    ( ... 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 ... )
S = ( ... 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 ... ),   (17)
    ( ... 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 ... )
    ( ... 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 ... )

rate RS = 4/2 scrambler. If we used such a scrambler, together with a trivial, rate Rb = 3/4, basic code with the parity-check matrix

     ( ... 1 1 1 1 0 0 0 0 0 0 0 0 ... )
Hb = ( ... 0 0 0 0 1 1 1 1 0 0 0 0 ... )   (18)
     ( ... 0 0 0 0 0 0 0 0 1 1 1 1 ... )

as component code, we would get a homogeneous LDC code of rate R = 1/2, whose syndrome former SHb^T has two ones in each row and four ones in each column. This

turn filled by input symbols. The filling of the tables proceeds row-wise. When one table is full, its contents are read out column-wise. At the same time, the other

         ...  0 1 2 3 4 5 6 7 8 9  ...
    −4   ...  1 0 0 0 0 0 0 0 0 0  ...
    −3   ...  0 1 0 0 0 0 0 0 0 0  ...
    −2   ...  0 0 0 1 0 0 0 0 0 0  ...
    −1   ...  0 0 1 0 0 0 0 0 0 0  ...
S =  0   ...  0 0 0 0 1 0 0 0 0 0  ...   (19)
     1   ...  0 0 0 0 0 1 0 0 0 0  ...
     2   ...  0 0 0 0 0 0 0 1 0 0  ...
     3   ...  0 0 0 0 0 0 1 0 0 0  ...
     4   ...  0 0 0 0 0 0 0 0 1 0  ...
     5   ...  0 0 0 0 0 0 0 0 0 1  ...

where the vertical indexes are input symbol indexes, and the horizontal indexes are output symbol indexes. By column interleaving the identity scrambler and scrambler (19) we get the following rate 2/1 scrambler,

    ( ... 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 ... )
    ( ... 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 ... )
    ( ... 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 ... )
S = ( ... 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 ... ).   (20)
    ( ... 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 ... )
    ( ... 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 ... )

Row-column interleaving of scrambler (20) and two identity scramblers gives a scrambler S with rate 4/3 (21). If this scrambler is combined with the basic code with the following syndrome former,

       ( ... 1 0 0 0 1 0 0 0 ... )
       ( ... 0 1 0 0 0 1 0 0 ... )
       ( ... 1 0 1 0 1 0 0 0 ... )
       ( ... 0 1 0 1 0 1 0 0 ... )
Hb^T = (                         ),   (22)
       ( ... 0 0 1 0 0 0 1 0 ... )
       ( ... 0 0 0 1 0 0 0 1 ... )
       ( ... 0 0 1 0 1 0 1 0 ... )
       ( ... 0 0 0 1 0 1 0 1 ... )

we get the syndrome former H^T = SHb^T of the rate 1/3 turbo-code. The corresponding encoder is shown in Figure 3.

We note that (22) corresponds to the rate 2/4 convolutional code whose syndrome former is a row permutation of the matrix obtained by row-column interleaving of two syndrome formers for the (7,5) convolutional code. In general, the interleaver and the basic code of the rate 1/3 turbo-code are more complicated, but the method of constructing the syndrome former is the same as in this example.

4. ITERATIVE DECODING

Let

v = v0, v1, ..., vi, ...,   vi ∈ GF(2),   (23)

be the transmitted code sequence and let

r = r0, r1, ..., ri, ...,   (24)

be the received sequence, where ri are independent Gaussian variables with mathematical expectation and variance

E(ri) = √Es (1 − 2vi),   Var(ri) = N0/2,   (25)

respectively. Here Es = Eb R is the energy per transmitted symbol. Let πi^(0), i = 0, 1, ..., be the apriori probability that vi = 0, i.e. πi^(0) = P(vi = 0), and let f(ri | vi) be the probability density function of ri conditioned on that the transmitted symbol is vi. The goal of the decoding is to calculate, for all symbols vi, the aposteriori probability that vi = 0, given that the transmitted sequence v is a code sequence of the convolutional code C and that the received sequence is r. The calculation of the sequence of aposteriori probabilities πi^apost, i = 0, 1, ..., for each symbol vi in the code sequence, is called APP or soft decision decoding.

v̂i = 0 if πi^apost > 1/2, and 1 otherwise.   (26)

For each i, the decoder calculates, not the statistic πi^apost, but its surrogate π̃i^apost. The calculation of π̃i^apost is iterative, such that each new value defines πi^apost more accurately.

We will show that it is possible to organize the iterative procedure such that, at least in the first iteration steps, the calculated statistics are correct aposteriori probabilities, given r, and the condition of correctness is violated only in later iteration steps. Below, we consider decoding of homogeneous LDC codes, when each row of the syndrome former contains ν ones and each column ν/(1 − R) ones. Then each symbol enters ν parity-check equations, and each parity-check equation contains ν/(1 − R) symbols.

Let us consider the decoding of a rate R = 1/3 homogeneous LDC code with ν = 2, when each symbol is included in two parity-check equations, and each equation connects three symbols. We can represent this code as a set of triangles, Figure 4, where each triangle corresponds to a parity-check equation and each apex corresponds to a symbol. The parity-check equations are enumerated j_l, l = 0, 1, ..., such that each apex is part of two triangles, one with even number and one with odd. Let the apriori probabilities π^apr_{ik} def= π^(0)_{ik} and the received sequence r

be known. In the first iteration step we calculate two aposteriori probabilities, π^(1)_{i0,j0} and π^(1)_{i0,j1}, for symbol v_{i0}, which enters the j0th and the j1th parity-check equation. The first, "even", one is calculated by using the symbols in the even parity-check equation,

π^(1)_{i0,even} = π^(1)_{i0,j0} = P(v_{i0} = 0 | r_{i0}, r_{i2}, r_{i3}),   (27)

and the second, "odd", one by using the symbols in the odd parity-check equation,

π^(1)_{i0,odd} = π^(1)_{i0,j1} = P(v_{i0} = 0 | r_{i0}, r_{i1}, r_{i4}).   (28)

Obviously, we use the apriori probabilities π^(0)_{i0}, π^(0)_{i2} and π^(0)_{i3} in the calculation of π^(1)_{i0,even}, and π^(0)_{i0}, π^(0)_{i1} and π^(0)_{i4} in the calculation of π^(1)_{i0,odd}. Analogous calculations are

carried out for all symbols v_{ik}, k = 0, 1, ... Then, as result of the first iteration, we get, for each symbol v_{ik}, two aposteriori probabilities, one "even" and one "odd". These statistics will, together with the apriori probabilities π^(0)_{ik}, be used in the second iteration step. Continuing the iteration process, we get, after the nth step, the aposteriori probabilities π^(n)_{ik,even} and π^(n)_{ik,odd} for each symbol v_{ik}. These statistics will, together with π^(0)_{ik}, be used in the (n + 1)th iteration step.

For the description of the decoding process we need the following additional definitions, here given for symbol v_{i0}; the generalization to v_{ik}, k = 1, 2, ..., is trivial:

F_{i0} def= f(r_{i0} | v_{i0} = 0) / f(r_{i0} | v_{i0} = 1) = exp(4 √Es r_{i0} / N0),   (29)

Λ^(0)_{i0} def= (L^(0)_{i0} F_{i0} − 1) / (L^(0)_{i0} F_{i0} + 1) = tanh((1/2) ln(L^(0)_{i0} F_{i0})),   L^(0)_{i0} def= π^(0)_{i0} / (1 − π^(0)_{i0}).   (30)

Following Gallager [1], we get for the "odd" likelihood ratio

L^(1)_{i0,odd} = L^(0)_{i0} F_{i0} (1 + Λ^(0)_{i1} Λ^(0)_{i4}) / (1 − Λ^(0)_{i1} Λ^(0)_{i4})   (31)

(L^(1)_{i0,even} is calculated analogously). In the following iteration steps we have

Λ^(n)_{i0,odd} def= (L^(n)_{i0,odd} − 1) / (L^(n)_{i0,odd} + 1) = tanh((1/2) ln L^(n)_{i0,odd}),   n = 1, 2, ...,   (32)

where

L^(n)_{i0,odd} def= π^(n)_{i0,odd} / (1 − π^(n)_{i0,odd}) = L^(0)_{i0} F_{i0} (1 + Λ^(n−1)_{i1,even} Λ^(n−1)_{i4,even}) / (1 − Λ^(n−1)_{i1,even} Λ^(n−1)_{i4,even}).   (33)

In the same way, the likelihood ratio L^(n)_{i0,even} depends on the statistics Λ^(n−1)_{i2,odd} and Λ^(n−1)_{i3,odd} calculated in the previous iteration step, and on the original statistic Λ^(0)_{i0}. It follows from Figure 4 that the likelihood ratio L^(1)_{i0,odd} depends on Λ^(0)_{i0}, Λ^(0)_{i1} and Λ^(0)_{i4}, and that the likelihood ratio L^(2)_{i0,odd} depends on Λ^(0)_{i0}, Λ^(0)_{i1}, Λ^(0)_{i4}, Λ^(0)_{i5}, Λ^(0)_{i12}, Λ^(0)_{i10} and Λ^(0)_{i11}. We can say that π^(2)_{i0,odd} is the correct aposteriori probability of v_{i0} = 0 given the received symbols r_{i0}, r_{i1}, r_{i4}, r_{i5}, r_{i10}, r_{i11} and r_{i12}. We can also see in Figure 4 that the condition for correct calculation of the "odd" aposteriori probability that v_{i0} = 0 is violated in the fourth iteration step. In fact, both the statistics Λ^(3)_{i1} and Λ^(3)_{i4} depend, in particular, on r_{i26}, r_{i27} and r_{i38}. Therefore, when we calculate L^(4)_{i0,odd}, the statistics Λ^(0)_{i26}, Λ^(0)_{i27} and Λ^(0)_{i38} enter (33) twice, which unjustifiably amplifies their significance. This is a corollary of the fact that the graph describing this LDC code has a loop of length 4, and we say that when working with symbol i0 only the first three iterations are correct. It is important to have correct aposteriori probabilities at least in the first iteration steps.

For the same reason, we use the original value of the statistic Λ^(0)_{i0} when we calculate the likelihood ratio L^(n)_{i0} according to formula (33). In practice, when realizing the algorithm, it is convenient to operate, not with the likelihood ratios L^(n)_{i0,even} and L^(n)_{i0,odd}, but with the statistics Λ^(n)_{i0,even} and Λ^(n)_{i0,odd}, the latter defined by (32). Then (33) is equivalent to

Λ^(n)_{i0,odd} = (Λ^(0)_{i0} + Λ^(n−1)_{i1,even} Λ^(n−1)_{i4,even}) / (1 + Λ^(0)_{i0} Λ^(n−1)_{i1,even} Λ^(n−1)_{i4,even}),   (34)

and Λ^(n)_{i0,even} is defined analogously. In the last, Ith, iteration step the calculation of Λ^(I)_{i0} is done by using both the "even" and "odd" equations above. Namely, first we calculate

Λ^(I)_{i0,odd} = (Λ^(0)_{i0} + Λ^(I−1)_{i1,even} Λ^(I−1)_{i4,even}) / (1 + Λ^(0)_{i0} Λ^(I−1)_{i1,even} Λ^(I−1)_{i4,even}),   (35)

and then

Λ^(I)_{i0} = (Λ^(I)_{i0,odd} + Λ^(I−1)_{i2,odd} Λ^(I−1)_{i3,odd}) / (1 + Λ^(I)_{i0,odd} Λ^(I−1)_{i2,odd} Λ^(I−1)_{i3,odd}).   (36)

The rule

v̂_{i0} = 0 if Λ^(I)_{i0} > 0, and 1 otherwise,   (37)

is used to make a decision about v_{i0}.
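The statistics (29)-(30), the tanh-rule update (34) and the decision rule (37) can be sketched numerically as follows. The parameter values (π^(0) = 1/2, Es = 1, N0 = 2) and the neighbor lists passed to combine are hypothetical, and the bookkeeping of the "even" and "odd" equations per symbol is left out.

```python
import math

def lambda0(r, pi0=0.5, Es=1.0, N0=2.0):
    """Initial statistic from (29)-(30): Λ = (L·F - 1)/(L·F + 1).
    For pi0 = 1/2 this reduces to tanh(2*sqrt(Es)*r/N0)."""
    F = math.exp(4.0 * math.sqrt(Es) * r / N0)
    L = pi0 / (1.0 - pi0)
    return (L * F - 1.0) / (L * F + 1.0)

def combine(lam_own, lam_others):
    """Tanh-rule update (34): (Λ0 + ΠΛ') / (1 + Λ0·ΠΛ')."""
    p = 1.0
    for lam in lam_others:
        p *= lam
    return (lam_own + p) / (1.0 + lam_own * p)

def decide(lam):
    """Decision rule (37): 0 if Λ > 0, else 1."""
    return 0 if lam > 0 else 1
```

With π^(0) = 1/2, a strongly positive received value gives Λ close to +1 and the decision v̂ = 0; a negative value gives the mirror image.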

We have described the iterative decoding process in the simple case, when each symbol enters two parity-check equations and each equation contains three symbols. The description of the decoding process in the general case is not complicated, although the graph describing the code can have a complicated form. For example, if each symbol is part of three equations and each equation contains six symbols, then each equation corresponds to a hexagon and each apex enters three hexagons. In each iteration step we then calculate three aposteriori probabilities for each symbol. This corresponds to the rate R = 1/2, ν = 3 codes, which are the codes we used in all simulations, and in the analysis that will

appear in the following parts. Let I(j) be the set of all i such that vi is part of the jth parity-check equation, and let J(i) be the set of parity-check equations j that include symbol vi. IJ(i) is the set of i′ such that vi′ participates in at least one of the parity-check equations that include vi. In the nth iteration step, n = 1, 2, ..., I − 1, we calculate, for each i, ν statistics Λ^(n)_{i,j}. In the calculation of Λ^(n)_{i,j} we use, besides the statistic Λ^(0)_i, the statistics Λ^(n−1)_{i′,j′} such that i′ ∈ IJ(i), i′ ≠ i and j′ ∈ J(i), j′ ≠ j. Only in the last, Ith, iteration step do we calculate the statistic Λ^(I)_i, using Λ^(I−1)_{i′,j′} for all j′ ∈ J(i) and i′ ∈ IJ(i), i′ ≠ i, and make a decision according to (37). In the following section, we describe the decoding algorithm used in the simulations.
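For a finite sketch of these index sets, one can take a small binary matrix H as a stand-in for the infinite syndrome former, with rows indexed by symbols i and columns by parity-check equations j; the matrix used in the usage example below is a hypothetical toy case.

```python
import numpy as np

def neighbor_sets(H):
    """Index sets I(j), J(i), IJ(i) from the text, for a small binary
    matrix H standing in for the infinite syndrome former (rows of H
    are symbols i, columns are parity-check equations j)."""
    n_sym, n_eq = H.shape
    I = {j: set(np.flatnonzero(H[:, j]).tolist()) for j in range(n_eq)}   # I(j)
    J = {i: set(np.flatnonzero(H[i, :]).tolist()) for i in range(n_sym)}  # J(i)
    IJ = {i: set().union(*(I[j] for j in J[i])) for i in range(n_sym)}    # IJ(i)
    return I, J, IJ
```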

5. DECODING ALGORITHM

The decoding is performed by I processors that consecutively calculate the statistics Λ^(n)_{i,j}, i = 0, 1, ..., j ∈ J(i), n = 1, 2, ..., I. First, given the apriori probabilities π^(0)_i and the received symbols r_i, the decoder calculates the starting statistics Λ^(0)_i. Using these statistics, the first processor calculates, for each i and each j ∈ J(i), the ν statistics Λ^(1)_{i,j}, analogous to the "odd" and "even" statistics in the case ν = 2 considered in the previous section. The second processor operates with a delay of 2ms symbols, i.e. when the first processor calculates Λ^(1)_{i,j}, j ∈ J(i), the second processor calculates Λ^(2)_{i−2ms,j}, j ∈ J(i − 2ms).

18

In the same way, the nth processor operates with a delay of 2ms (n − 1) with respect

(n) (0)

to the first one. To calculate Λi,j we use Λi and all ν − 1 equations j ′ ∈ J (i),

(n,1)

j ′ 6= j. This calculation is done in ν steps, indexed q, such that Λi,j depends on

(0) (n,2) (0)

equation j ′ = j1 and on Λi , Λi,j depends on equations j ′ = j1 , j2 and on Λi ,

(n)

and so on. Equation jq = j is excluded when calculating Λi,j . Finally we have

(n,ν) (n) (0)

Λi,j = Λi,j that depends on Λi,j and on the ν − 1 equations j ′ ∈ J (i), j ′ 6= j. The

(I)

final, Ith, processor calculates the statistic Λi that depends on all ν equations, and

makes a hard decision v̂i . A formal description of the algorithm for rate R = 1/2

• Calculate

Λ^(0)_i = (L^(0)_i F_i − 1) / (L^(0)_i F_i + 1),   L^(0)_i = π^(0)_i / (1 − π^(0)_i),   F_i = exp(4 √Es r_i / N0).

• For each processor n = 1, 2, ..., I − 1:

– Let i_n = i − 2ms (n − 1).

– For each q = 1, 2, ..., ν:

Λ^(n,q)_{i_n,j_p} = (Λ^(n″,q″)_{i_n,j_p} + Π_{i′∈I(j_q), i′≠i_n} Λ^(n′)_{i′,j_q}) / (1 + Λ^(n″,q″)_{i_n,j_p} Π_{i′∈I(j_q), i′≠i_n} Λ^(n′)_{i′,j_q}),   p ≠ q,
Λ^(n,q)_{i_n,j_p} = Λ^(n″,q″)_{i_n,j_p},   p = q,   (38)

where

n′ = n − 1 if i′ > i_n, and n′ = n otherwise,
(n″, q″) = (n − 1, ν) if q = 1, and (n″, q″) = (n, q − 1) otherwise,

and

Λ^(n″,(p+1) mod ν)_{i_n,j_p} = Λ^(0)_{i_n},   Λ^(n,ν)_{i_n,j_p} = Λ^(n)_{i_n,j_p}.

• In the last, Ith, processor, make the decision

v̂_{i_I} = 0 if Λ^(I)_{i_I} ≥ 0, and 1 otherwise,

where

Λ^(I)_{i_I} def= (Λ^(I,ν−1)_{i_I,j_ν} + Π_{i′∈I(j_ν), i′≠i_I} Λ^(n′)_{i′,j_ν}) / (1 + Λ^(I,ν−1)_{i_I,j_ν} Π_{i′∈I(j_ν), i′≠i_I} Λ^(n′)_{i′,j_ν}),

n′ = I − 1 if i′ > i_I, and n′ = I otherwise.

6. SIMULATION RESULTS

We simulated homogeneous, rate R = 1/2, LDC (ms, ν, 2ν)-codes with ν = 2, 2.5, 3, 4 and some different values of ms. Figure 5 shows the simulated bit error probability as a function of the number of iterations for LDC (ms, 3, 6)-codes and for different values of the signal-to-noise ratio. Figure 6 shows the bit and burst error probabilities as functions of the signal-to-noise ratio for the different codes. The number of iterations I is 60 in all cases except the few points where the simulated value was not reliable, because of very low error probability. There we used the largest value of I < 60 that was reliable, i.e. we use a pessimistic estimate. In Figure 7 we show how the number of iterations grows, for LDC (ms, 3, 6)-codes, with ms for different signal-to-noise ratios.

The dotted and dash-dotted lines in Figure 6 are the simulation results, given in [11] and [12] respectively, of the decoding of rate R = 1/2 conventional turbo-codes. The block lengths were N = 2048 and N = 1024 respectively, the component codes all had memory m = 4, and the numbers of iterations used were I = 5 and I = 12.

The following table compares a conventional turbo-coding system and a system using an LDC (ms, ν, 2ν)-code and the decoding procedure described above:

                                          Turbo      LDC
Encoder memory                            O(N)       O(ms)
Decoder memory                            O(N 2^m)   O(I ms ν)
Number of operations per decoded symbol   O(I 2^m)   O(I ν²)
Decoding delay                            N          I ms

From this table and Figure 6 we can conclude that, in terms of encoder memory, the system using the LDC (ms, 3, 6)-codes performs better than the system using conventional turbo-codes, but in terms of decoding delay and decoder memory, the situation is the opposite. The number of operations per decoded symbol is in the

block codes is made. From this comparison it follows that to have a bit error probability similar to that of, for example, the LDC (4097, 2.5, 5)-code, the Gallager codes

In Section 4 we showed that the graph used to describe the decoding of symbol i0 for the LDC code considered there has a loop of length 4. Therefore, when we calculate the aposteriori probability for this symbol, only the first three iterations are correct. In the general case, the number of correct iterations, n, can be different for different symbols in the code. Table 1 shows the fraction of symbols having n correct iterations, n = 0, 1, 2, 3, 4, 5, for some of the LDC (ms, ν, 2ν)-codes that we simulated. A further analysis of the performance of the codes will be given in the second part of this work.

7. CONCLUSION

block and convolutional codes, LDC codes are in some ways similar to low-density block codes, and in some ways different. Therefore, some mathematical methods of analysis are applicable to both types of codes, but some methods are applicable only to one of them. Particularly, the method of generating functions that Gallager used for lower-bounding the minimal distances of low-density codes is not applicable for lower-bounding the free distances of LDC codes. In the same way, the methods of using Markov processes in the analysis, which will be used in the second part of this work for lower-bounding the free distance and upper-bounding the decoding error probability, are not applicable to low-density block codes. On the other hand, the

The authors are grateful to R. Johannesson for his help in the formulation of some

REFERENCES

[1] R. G. Gallager, Low-Density Parity-Check Codes, M.I.T. Press, Cambridge, Massachusetts, 1963.

[2] A. Jimenez and K. Sh. Zigangirov, "Periodic Time-Varying Convolutional Codes with Low-Density Parity-Check Matrix",

[4] K. Engdahl and K. Sh. Zigangirov, "On the Statistical Theory of Turbo-Codes",

"...tional Codes", Proceedings WCC-99, Paris, France, Jan. 1999, pp. 379-392.

[9] V. V. Zyablov and M. S. Pinsker, "Estimation of the Error-Correction Complexity of Gallager's Low-Density Codes", Probl. Inform. Transm., vol. 11,

[10] G. A. Margulis, "Explicit Construction of a Concentrator", Probl. Inform. Transm.

LIST OF FIGURE CAPTIONS

Figure 5: Simulated bit error probability Pb for different signal-to-noise ratios (indicated in the plots, in dB) for the LDC (ms, 3, 6)-codes as a function of I, the number of iterations.

Figure 6: Simulated bit, Pb, (dashed lines) and burst, PB, (solid lines) error probabilities for the LDC (ms, ν, 2ν)-codes as a function of the signal-to-noise ratio. For all the plots ms takes the values, starting from the top and continuing for as many solid lines as there are in the plot: 129, 257, 513, 1025, 2049 and 4097. The dotted and dash-dotted lines are conventional rate 1/2 turbo-codes from [11] and [12].

Figure 7: Number of iterations, I, needed in the simulations of the LDC (ms, 3, 6)-codes.

Table 1: The fraction of symbols, for some different ν and ms, having n correct iterations.


Figure 1: panels (a), (b), (c), (d); 0/1 matrices of the syndrome former construction.

Figure 2: panels (a), (b); 0/1 matrices of the scrambler construction.

Figure 3: encoder block diagram with input u_t, outputs v_t^(0), v_t^(1), v_t^(2), modulo-2 adders, and a 2 × 2 block interleaver.

Figure 4: decoding graph for the example code; apexes are symbols i_k, triangles are parity-check equations j_l.

Figure 5: four panels (ν = 3; ms = 129, 513, 1024, 2049) showing Pb versus I, with curves labeled by Eb/N0 in dB.

Figure 6: four panels (ν = 2, 2.5, 3, 4) showing PB and Pb versus Eb/N0 [dB].

Figure 7: two panels (ν = 3; Pb = 10^−2 and 10^−3) showing I versus log2 ms, with curves labeled by Eb/N0 in dB.

number of correct iterations, n

 ν    ms      0        1        2        3       4       5
 2    129     0        0        0.102    0.586   0.312   0
 2    257     0        0        0.0625   0.438   0.491   0.00879
 2    513     0        0.00586  0.0459   0.272   0.626   0.0503
 2    1025    0        0.00146  0.0112   0.120   0.569   0.296
 3    129     0.0104   0.979    0.0104   0       0       0
 3    257     0.00521  0.840    0.155    0       0       0
 3    513     0.00651  0.605    0.388    0       0       0
 3    1025    0.00130  0.374    0.623    0       0       0
 4    129     0.471    0.529    0        0       0       0
 4    257     0.411    0.589    0        0       0       0
 4    513     0.390    0.610    0        0       0       0
 4    1025    0.386    0.614    0        0       0       0

Table 1:
