
Forschungszentrum Telekommunikation Wien

Theory and Design of

Turbo and Related Codes


Lecture 7
Jossy Sayir & Gottfried Lechner
http://userver.ftw.at/~jossy/turbo/index.html

Last Lecture
We introduced another representation of a linear
code: the trellis diagram.
We derived two algorithms:
- Viterbi algorithm: finds the most likely codeword
- BCJR algorithm: finds the most likely symbols
From previous lectures we know that we need large
block lengths to achieve good performance.
What about the complexity of Viterbi and BCJR
decoding?
ftw. 2004

From the Website of The Johns Hopkins University

Trellis Construction from Parity-Check Matrix (taken from last lecture)
(Bahl, Cocke, Jelinek & Raviv, 1974)
Parity-Check Matrix:

        [ 1 0 1 1 1 1 ]
H =     [ 0 1 1 0 1 0 ]
        [ 0 0 0 1 0 1 ]
State s = s1 s2 s3, where si is 0 if parity-check equation i is fulfilled.

Fred Jelinek

[Trellis diagram omitted: states s1s2s3 running through 111, 011, 101, 001, 110, 010, 100, 000, with branches labelled with the symbols 0 and 1]
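The state construction above can be sketched as a running partial-syndrome computation. A minimal illustrative Python snippet, using the H from the slide (the function name and loop structure are ours):

```python
# Partial-syndrome trellis states: state bit s_i after k symbols is the
# running parity of check i over the first k symbols; s_i = 0 means
# parity-check equation i is fulfilled so far.

H = [  # parity-check matrix from the slide
    [1, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 1, 0],
    [0, 0, 0, 1, 0, 1],
]

def trellis_states(H, v):
    """Return the state (s1, s2, s3) after each symbol of v."""
    s = [0] * len(H)
    states = []
    for k, bit in enumerate(v):
        for i, row in enumerate(H):
            s[i] ^= row[k] & bit   # update running parity of check i
        states.append(tuple(s))
    return states

# A valid codeword must end in the all-zero state:
print(trellis_states(H, [0, 0, 0, 0, 0, 0])[-1])   # (0, 0, 0)
```

Any path through the trellis that ends in the all-zero state corresponds to a codeword; the states visited along the way are exactly the trellis nodes.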

Complexity of Trellis Decoding

The complexity of trellis decoding is approximately proportional to the size of the trellis, where the size is the number of transitions (= block length) times the number of states.
For a linear block code with M parity-check equations and block length N, the complexity is approximately N * 2^M, and the complexity per symbol is approximately 2^M. For a fixed code rate, M grows linearly with N, so the complexity per symbol grows exponentially with the block length.

Therefore, the block length is limited by complexity constraints!
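The exponential growth can be illustrated numerically. The rate R = 1/2 and the block lengths below are illustrative assumptions, not a benchmark:

```python
# For a fixed code rate R, the number of checks M = (1 - R) * N grows
# with the block length N, so the per-symbol cost 2**M explodes.

R = 0.5  # assumed code rate (illustrative)
rows = []
for N in (8, 16, 32, 64):
    M = int((1 - R) * N)                    # parity checks at rate R
    rows.append((N, M, N * 2**M, 2**M))     # (N, M, total cost, cost per symbol)
for N, M, total, per_symbol in rows:
    print(N, M, total, per_symbol)
```

Already at N = 64 the per-symbol cost is 2^32 trellis states, which is why full-trellis decoding of long block codes is hopeless.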

Complexity of Trellis Decoding

By constructing a code where the number of states is constant, we could achieve linear decoding complexity (with respect to the block length N).
What are the constraints on a parity-check matrix that yield a trellis with a constant number of states?

Two trivial constructions keep the number of states constant:

H1 = [ 1 0 1 1 0 1 0 1 0 0 1 1 ]   (only a constant number of non-zero rows)
     [ 0 1 0 1 0 1 1 0 0 0 1 0 ]
     [ 0 0 0 0 0 0 0 0 0 0 0 0 ]
     [ 0 0 0 0 0 0 0 0 0 0 0 0 ]
     [ 0 0 0 0 0 0 0 0 0 0 0 0 ]
     [ 0 0 0 0 0 0 0 0 0 0 0 0 ]

H2 = [ 1 0 1 0 0 0 0 0 0 0 0 0 ]   (all checks confined to the first few symbols)
     [ 0 1 0 0 0 0 0 0 0 0 0 0 ]
     [ 1 0 1 0 0 0 0 0 0 0 0 0 ]
     [ 1 1 0 0 0 0 0 0 0 0 0 0 ]
     [ 0 1 1 0 0 0 0 0 0 0 0 0 ]
     [ 1 1 1 0 0 0 0 0 0 0 0 0 ]

These methods to achieve linear complexity are useless: the first wastes most of the parity checks, and the second leaves most symbols unprotected.

Parity-Check Matrix with Band Structure

        [ 1 0 1 0 1 1 0 0 0 0 0 0 0 0 ]   A
        [ 0 0 1 0 1 0 1 1 0 0 0 0 0 0 ]   B
H =     [ 0 0 0 0 1 0 1 0 1 1 0 0 0 0 ]   C
        [ 0 0 0 0 0 0 1 0 1 0 1 1 0 0 ]   D
        [ 0 0 0 0 0 0 0 0 1 0 1 0 1 1 ]   E

Equation A is always fulfilled after its last non-zero column; equation E is always fulfilled before its first non-zero column.

In this example, at most 3 parity checks can be unfulfilled at any position in the trellis. Therefore, the maximum number of states is 2^3 = 8.
This is independent of the block length N.
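The bound on the number of states can be checked mechanically. A sketch, under the assumption that a check is "active" at a symbol boundary if it has a non-zero entry on both sides of that boundary (the matrix is the banded H from the slide; the function name is ours):

```python
# Bound on trellis states for a banded H: only active checks can be
# unfulfilled at a boundary, so the state count is at most
# 2 ** (maximum number of simultaneously active checks).

H = [  # banded parity-check matrix from the slide (rows A..E)
    [1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1],
]

def max_states(H):
    n = len(H[0])
    # (first, last) non-zero column of every check
    spans = [(row.index(1), n - 1 - row[::-1].index(1)) for row in H]
    worst = 0
    for k in range(n - 1):  # boundary between symbol k and symbol k+1
        active = sum(1 for lo, hi in spans if lo <= k < hi)
        worst = max(worst, active)
    return 2 ** worst

print(max_states(H))  # 8 for this H, independent of the block length
```

Widening the band adds active checks, so the state count grows with the band width but not with N.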

Parity-Check Matrix with Band Structure (2)

        [ 1 0 1 0 1 1 0 0 0 0 0 0 0 0 ]
        [ 0 0 1 0 1 0 1 1 0 0 0 0 0 0 ]
H =     [ 0 0 0 0 1 0 1 0 1 1 0 0 0 0 ]
        [ 0 0 0 0 0 0 1 0 1 0 1 1 0 0 ]
        [ 0 0 0 0 0 0 0 0 1 0 1 0 1 1 ]

With this structure, every symbol of the codeword is related to its neighbouring symbols.
An efficient way of generating such a code is to use a filter: the code is the convolution of the information sequence with the filter response.

Convolutional Encoder - Systematic Codes

[Encoder diagram omitted: the information bit u goes directly to the output and into a delay line; an adder forms the parity bit, and a parallel-to-serial converter produces the stream u0, p0, u1, p1, u2, p2, ...]

p_k = u_k + u_(k-1) + u_(k-2)
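The encoder above can be sketched in a few lines of Python (a minimal illustration of the slide's p_k = u_k + u_(k-1) + u_(k-2); the function name is ours):

```python
# Systematic convolutional encoder with memory 2: each parity bit is
# p_k = u_k + u_{k-1} + u_{k-2} (mod 2), and the output interleaves
# information and parity bits as u0, p0, u1, p1, ...

def encode_systematic(u):
    d1 = d2 = 0          # delay line, initially all-zero
    out = []
    for uk in u:
        pk = uk ^ d1 ^ d2
        out += [uk, pk]  # systematic: info bit, then parity bit
        d2, d1 = d1, uk  # shift the delay line
    return out

print(encode_systematic([1, 0, 1, 1]))  # [1, 1, 0, 1, 1, 0, 1, 0]
```

Every output pair depends only on the current input and the two previous inputs, which is exactly the band structure of the parity-check matrix.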

The corresponding parity-check matrix is the banded matrix from the previous slides:

        [ 1 0 1 0 1 1 0 0 0 0 0 0 0 0 ]
        [ 0 0 1 0 1 0 1 1 0 0 0 0 0 0 ]
H =     [ 0 0 0 0 1 0 1 0 1 1 0 0 0 0 ]
        [ 0 0 0 0 0 0 1 0 1 0 1 1 0 0 ]
        [ 0 0 0 0 0 0 0 0 1 0 1 0 1 1 ]

Other Types of Convolutional Codes

Non-systematic encoder:
[Diagram omitted: u feeds a delay line; two adders with different taps produce p(1) and p(2), and a parallel-to-serial converter outputs p(1)0, p(2)0, p(1)1, p(2)1, ...]

Recursive systematic encoder (RSC):
[Diagram omitted: u enters an adder whose output is fed through the delay line and back; the parity p is formed from the delay line, and the parallel-to-serial converter outputs u0, p0, u1, p1, u2, p2, ...]
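An RSC encoder can be sketched like the systematic one, with the delay line driven by a feedback sum instead of the raw input. The exact taps of the slide's diagram are not recoverable here, so the feedback and parity taps below are assumptions for illustration:

```python
# Recursive systematic encoder sketch, memory 2. Assumed taps:
# feedback a_k = u_k + d1 + d2, parity p_k = a_k + d2 (mod 2).

def encode_rsc(u):
    d1 = d2 = 0
    out = []
    for uk in u:
        a = uk ^ d1 ^ d2   # recursion through the feedback path
        pk = a ^ d2        # parity tap (assumed)
        out += [uk, pk]    # systematic output: u interleaved with p
        d2, d1 = d1, a     # delay line stores the feedback value
    return out

print(encode_rsc([1, 0, 0, 0]))  # [1, 1, 0, 1, 0, 1, 0, 0]
```

Because of the feedback, a single input 1 produces an infinite parity response, which is the property turbo codes exploit.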

Trellis of a Convolutional Code

For a general linear block code, a state was defined as an indicator of which parity-check equations are fulfilled.
For a convolutional code, only a few parity checks are active at any time.
It is therefore natural to define the state of a convolutional code as the content of the delay line.
The structure of the encoder is time-invariant; therefore, all trellis stages are identical.
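Because all stages are identical, one trellis stage can be tabulated once and reused for every symbol. A sketch for the memory-2 systematic code with p_k = u_k + u_(k-1) + u_(k-2), with the state taken as the delay-line contents (d1, d0):

```python
# One trellis stage: for every state and input bit, record the next
# state and the output pair (u, p).

def trellis_stage():
    table = {}
    for d1 in (0, 1):
        for d0 in (0, 1):
            for u in (0, 1):
                p = u ^ d1 ^ d0                           # parity bit
                table[((d1, d0), u)] = ((u, d1), (u, p))  # next state, output
    return table

for (state, u), (nxt, out) in sorted(trellis_stage().items()):
    print(state, u, "->", nxt, out)
```

With 2 delay elements there are 4 states and 2 branches per state, i.e. a constant 8 transitions per stage regardless of the block length.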

Trellis of a Convolutional Code - Example

[Diagram omitted: an encoder with delay elements D1 and D0 and an adder, producing the stream u0, p0, u1, p1, u2, p2, ...; the corresponding trellis over the states D1D0 in {00, 01, 10, 11} repeats the same stage, with branches labelled by output pairs such as 00 and 11]

Influence of Other Symbols

In principle, the block length of a convolutional code is infinite. However, the performance of such a code is not very good.
In fact, only the neighbouring symbols influence the current symbol, and that reduces the effective block length.
The effective block length can be increased by lengthening the delay line, but this would lead to more states and therefore higher complexity per symbol...

Parallel Concatenation - Turbo Codes

By using two parallel concatenated systematic codes, separated by an interleaver, the effective block length is increased.

C. Berrou, A. Glavieux and P. Thitimajshima, "Near Shannon Limit Error-Correcting Coding and Decoding: Turbo Codes", ICC 1993, Geneva, Switzerland

Turbo Codes - Encoder

[Diagram omitted: u is sent over the channel (received as yu) and encoded by RSC 1, whose parity p1 is sent (received as yp1); an interleaved copy of u is encoded by RSC 2, whose parity p2 is sent (received as yp2)]

How can this code be decoded?
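The parallel concatenation above can be sketched in code: RSC 1 encodes u directly, RSC 2 encodes an interleaved copy, and the channel sees the systematic bits plus both parity streams. The RSC taps and the interleaver permutation below are illustrative assumptions:

```python
def rsc_parity(u):
    """Parity stream of a memory-2 RSC encoder (taps are assumptions)."""
    d1 = d2 = 0
    p = []
    for uk in u:
        a = uk ^ d1 ^ d2      # feedback path
        p.append(a ^ d2)      # parity output
        d2, d1 = d1, a
    return p

def turbo_encode(u, perm):
    p1 = rsc_parity(u)                     # RSC 1: natural order
    p2 = rsc_parity([u[i] for i in perm])  # RSC 2: interleaved order
    return u, p1, p2                       # rate 1/3: systematic + two parities

u = [1, 0, 1, 1, 0, 0]
perm = [3, 0, 5, 2, 4, 1]   # toy fixed interleaver (an assumption)
print(turbo_encode(u, perm))
```

The interleaver makes the two parity streams depend on far-apart information bits, which is what raises the effective block length.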

Photo from http://www.cjbyron.com/3000GTTurbo.htm

Why Turbo?

A turbo engine uses feedback to increase the performance of the overall system.
The turbo decoder works similarly.

How to Decode a Turbo Code

The key idea is not to decode the complete code (which would be complicated), but to decode the two component codes alternately and to feed back the information gained by the other decoder.

[Diagram omitted: BCJR decoder 1 processes yu, yp1 and the deinterleaved extrinsic information extr2, producing app1; the extrinsic part extr1 is interleaved and passed, together with the interleaved yu and yp2, to BCJR decoder 2, which produces app2 and new extrinsic information extr2]
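The extrinsic-information exchange can be illustrated with a deliberately tiny stand-in: instead of BCJR decoders, two single-parity-check decoders refine log-likelihood ratios for the same bits, each passing on only the extrinsic part of what it computed. All numbers, and the choice of component code, are illustrative assumptions, not the slide's actual turbo code:

```python
import math

def spc_extrinsic(llrs):
    """Extrinsic LLRs from one single-parity-check constraint."""
    out = []
    for i in range(len(llrs)):
        t = 1.0
        for j, l in enumerate(llrs):
            if j != i:
                t *= math.tanh(l / 2)     # tanh rule for a parity check
        t = max(min(t, 0.999999), -0.999999)  # numerical clamp for atanh
        out.append(2 * math.atanh(t))
    return out

channel = [1.2, -0.3, 0.8, 2.0]     # received LLRs (toy values)
extr2 = [0.0] * len(channel)
for _ in range(5):                   # alternate the two component decoders
    extr1 = spc_extrinsic([c + e for c, e in zip(channel, extr2)])
    extr2 = spc_extrinsic([c + e for c, e in zip(channel, extr1)])

app = [c + e1 + e2 for c, e1, e2 in zip(channel, extr1, extr2)]
decisions = [0 if a > 0 else 1 for a in app]
print(decisions)
```

The weak second bit (LLR -0.3) is pulled to agree with the parity constraints over the iterations, which is the turbo principle in miniature: each decoder only forwards information the other does not already have.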

Complexity of Turbo Decoding

By decoding each component code instead of the overall code, we achieve a decoding complexity per symbol that is constant.
Using a larger interleaver results in a larger block length.
Using longer (more memory) component codes results in higher decoding complexity.

Serial Concatenation

Parallel concatenation is not the only way of combining codes.
Instead of using two codes in parallel, we use two codes that are serially concatenated (again separated by an interleaver).

[Diagram omitted: source -> outer encoder -> interleaver -> inner encoder -> channel]
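The serial chain above can be sketched with toy component codes. A repetition code (outer) and a code that appends one overall parity bit (inner) stand in for real component codes; both choices and the permutation are assumptions:

```python
def outer_encode(u):
    """Rate-1/2 repetition code (assumed outer code)."""
    return [b for b in u for _ in (0, 1)]

def inner_encode(v):
    """Append one overall parity bit (assumed inner code)."""
    return v + [sum(v) % 2]

def serial_encode(u, perm):
    v = outer_encode(u)
    w = [v[i] for i in perm]   # interleaver between the two codes
    return inner_encode(w)

u = [1, 0, 1]
perm = [4, 1, 0, 5, 2, 3]      # toy interleaver for the 6 outer bits
print(serial_encode(u, perm))  # [1, 1, 1, 1, 0, 0, 0]
```

Note the order: the inner encoder sees only the interleaved outer codeword, which is why, at the decoder, the outer code gets no messages directly from the channel.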

Serial Concatenation - Decoder

Decoding is similar to that of parallel concatenated codes, but here the outer code receives no messages directly from the channel.

[Diagram omitted: the inner decoder processes yu, yp and the interleaved extrinsic information extr1, producing app2; its extrinsic output extr2 is deinterleaved and passed to the outer decoder, which produces app1 and new extrinsic information extr1]

General Decoding Model

[Diagram omitted: Dec 1 processes channel information together with deinterleaved extrinsic information extr2 and outputs app1; the extrinsic part extr1 is interleaved and passed to Dec 2, which outputs app2 and new extrinsic information extr2]

The decoders of parallel concatenated, serially concatenated and LDPC codes are special cases of this decoding model.


Parallel Concatenation

[Diagram omitted: the general decoding model with Dec 1 = BCJR 1 and Dec 2 = BCJR 2; both decoders receive channel information]

Serial Concatenation

[Diagram omitted: the general decoding model with Dec 1 = inner decoder and Dec 2 = outer decoder; only the inner decoder receives channel information]


LDPC Decoder

[Diagram omitted: the general decoding model with Dec 1 = variable node decoder and Dec 2 = check node decoder; only the variable nodes receive channel information]

The decoder structure of an LDPC code is the same as for a serially concatenated code, but the LDPC encoder cannot be represented like this.

Factor Graphs

[Figures omitted: factor graph of a convolutional code; factor graph of a turbo code]

F.R. Kschischang, B.J. Frey, and H.-A. Loeliger, "Factor Graphs and the Sum-Product Algorithm", IEEE Trans. Inform. Theory, Feb. 2001.

