
DESIGN SPACE EXPLORATION OF HARD DECISION VITERBI DECODING: ALGORITHM AND VLSI IMPLEMENTATION

1. INTRODUCTION
   HISTORY OF ERROR CORRECTING CODES
   WHY ERROR CORRECTING CODES? WHY CONVOLUTION CODES?

2. CONVOLUTION CODES GENERATION
   CONSTRAINT LENGTH
   GENERATING POLYNOMIAL
   TRELLIS DIAGRAM REPRESENTATION

3. VITERBI ALGORITHM DESIGN
   BRANCH-METRIC UNIT
   PATH-METRIC UNIT
   SURVIVING MEMORY UNIT

4. DESIGN CONSTRAINTS

The brain itself has an error-correcting code within it: it compares with all the possible states and makes the best pick!

Code--->corrupted--->error correction--->corrected code


Text--->ppt(corruption)--->brain--->corrected word

Relationship with ancient history:

Jewish scribes devised a complex error detection process to help prevent mistakes.

Steps:
1. Count the number of words in the manuscript
2. Count the number of columns and rows
3. Write down the first word of each line

Examples: Google search, iPhone (Siri software), etc.

Why Convolution Codes?

Concept of FEC (forward error correction)
Simpler compared to cyclic codes and linear block codes (which require a division operation)
In block codes, information bits are followed by parity bits
Easy to implement both in software and in hardware
Generating polynomials are fixed in block and cyclic codes
The decoding algorithm remains the same irrespective of the generating polynomials
Faster, and have led to a new generation of correcting codes called TURBO CODES, of which the convolutional encoder is a major part

Convolution codes generation


In telecommunication, a convolutional code is a type of error-correcting code in which :

each m-bit information symbol (each m-bit string) to be encoded is transformed into an n-bit symbol, where m/n is the code rate (n ≥ m) and the transformation is a function of the last k information symbols, where k is the constraint length of the code.

Two types of convolution encoders:

Example of Convolution Encoder

The above figure presents the convolutional encoder with m = 1, n = 3, code rate m/n = 1/3 and constraint length k = 7. The generator polynomials are given by equations (1)-(3):

G0 = (133)8 = (1011011)2    (1)
G1 = (165)8 = (1110101)2    (2)
G2 = (171)8 = (1111001)2    (3)

Out A = D0 + D1 + D3 + D4 + D6    (4)
Out B = D0 + D2 + D4 + D5 + D6    (5)
Out C = D0 + D3 + D4 + D5 + D6    (6)

G0 generates Out A, G1 generates Out B and G2 generates Out C (equations (4)-(6)); the additions are modulo-2 (XOR) sums of the shift-register bits D0-D6.
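
As a rough illustration, the sketch below models this encoder in Python. It is a minimal software model, not the VLSI implementation: the tap masks follow equations (1)-(6), and the shift-register convention (the input bit enters at D6, D0 is the oldest bit) is an assumption chosen to stay consistent with the butterfly description in the next section.

```python
# Minimal software model of the rate-1/3, K = 7 convolutional encoder above.
# Assumption: the input bit enters the register at D6 and D0 is the oldest bit,
# so the tap masks below reproduce equations (4)-(6).

G0, G1, G2 = 0o133, 0o165, 0o171   # generator polynomials, equations (1)-(3)
K = 7                               # constraint length

def parity(x):
    """Modulo-2 sum of the bits of x."""
    return bin(x).count("1") & 1

def encode(bits):
    """Encode a bit sequence; returns one (A, B, C) output triple per input bit."""
    state = 0                                        # six memory bits, initially zero
    out = []
    for b in bits:
        register = state | ((b & 1) << (K - 1))      # new bit enters at D6
        out.append((parity(register & G0),           # Out A, eq. (4)
                    parity(register & G1),           # Out B, eq. (5)
                    parity(register & G2)))          # Out C, eq. (6)
        state = register >> 1                        # shift toward D0, dropping the old D0
    return out

# Example: encode([1, 0, 1]) -> [(1, 1, 1), (0, 1, 1), (0, 0, 0)]
```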

State Transitions

For a convolutional encoder with one input bit (m = 1), there are only two paths that merge at each node: only two states can change to the same state. These states differ in the least significant bit (D0), and the input bit must be the same. A butterfly can represent the state transitions to each state.

Trellis Diagram
A trellis diagram is a form of state diagram expanded over time. For a convolutional code in which 1 bit is shifted at a time into a shift register with k stages, the trellis has 2^(k-1) states. The trellis diagram for the convolutional encoder is shown in the figure.
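
To make the 2^(k-1) = 64 trellis states concrete, the following sketch (reusing G0, G1, G2, K and parity() from the encoder model above) enumerates every trellis branch: current state, input bit, next state and ideal output symbol. It is only an illustrative software model under the same register convention assumed earlier.

```python
# Sketch: enumerate all branches of the trellis for the K = 7 encoder above.
# A state holds the six previous input bits (the register memory), so there are
# 2**(K - 1) = 64 states. Reuses G0, G1, G2, K and parity() from the encoder sketch.

def trellis_branches():
    """Yield (state, input_bit, next_state, (A, B, C)) for every trellis branch."""
    n_states = 1 << (K - 1)
    for state in range(n_states):
        for bit in (0, 1):
            register = state | (bit << (K - 1))   # input bit enters at D6
            next_state = register >> 1            # the oldest bit D0 is shifted out
            symbol = (parity(register & G0),
                      parity(register & G1),
                      parity(register & G2))
            yield state, bit, next_state, symbol

# The two states that merge into the same next state differ only in D0 and carry
# the same input bit, which is exactly the butterfly structure described above.
```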

VITERBI ALGORITHM
Euclidean distance:

In the Viterbi algorithm, a comparison between path metrics is required to determine the survivor path. For a 1/3 code rate, the Euclidean distance measures the distance between the three received noisy symbols (Ai, Bi, Ci) and the ideal output symbols (A, B, C) of the transition between two states of the trellis diagram.
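
A small sketch of the two branch metrics typically used: the Hamming distance for the hard-decision decoder considered here, and the squared Euclidean distance when soft received samples are available. The 0.5 slicing threshold in the example is an assumption for illustration only.

```python
# Sketch: branch metrics for one trellis transition with ideal output (A, B, C).
# Hard decision: Hamming distance between the sliced received bits and (A, B, C).
# Soft decision: squared Euclidean distance between the noisy samples and (A, B, C).

def hamming_branch_metric(received_bits, ideal_bits):
    """Number of positions where the hard-decided bits differ from the ideal symbol."""
    return sum(r != i for r, i in zip(received_bits, ideal_bits))

def euclidean_branch_metric(received_samples, ideal_bits):
    """Squared Euclidean distance; ideal bits are mapped to the levels 0.0 / 1.0."""
    return sum((r - float(i)) ** 2 for r, i in zip(received_samples, ideal_bits))

# Example: ideal branch output (1, 0, 1), received noisy samples (0.9, 0.2, 0.4).
# Hard slicing at 0.5 gives (1, 0, 0) -> Hamming metric 1.
# euclidean_branch_metric((0.9, 0.2, 0.4), (1, 0, 1)) -> 0.41
```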

Decoding Length:
For long sequences, the Viterbi algorithm requires large decoding delays and, consequently, a large amount of memory, because paths must be stored before being discarded.

For a code rate of 1/n, a set of 2^(k-1) paths must be stored after each decoding step. The Viterbi algorithm tries to find the path of the trellis diagram whose sequence of output symbols most closely matches the received sequence. As two paths merge at each node, two path metrics are computed for each node, but only one path survives: the survivor with the minimum distance to the received sequence.
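
The recursion described in this paragraph can be sketched as follows (a plain software model, reusing K, trellis_branches() and hamming_branch_metric() from the sketches above): for every received symbol, the two candidate path metrics of each state are computed, and only the minimum-distance survivor is kept, together with the decision needed for the later trace-back.

```python
# Sketch: forward (add-compare-select) recursion of a hard-decision Viterbi decoder.
# path_metric[s] is the best accumulated distance of any path ending in state s;
# decisions[t][s] stores which (previous_state, input_bit) survived at time t.

def viterbi_forward(received, branches):
    """received: list of hard-decided (A, B, C) triples; branches: list(trellis_branches())."""
    n_states = 1 << (K - 1)
    INF = float("inf")
    path_metric = [0.0] + [INF] * (n_states - 1)     # encoder starts in the all-zero state
    decisions = []
    for symbol in received:
        new_metric = [INF] * n_states
        survivors = [None] * n_states
        for state, bit, nxt, ideal in branches:
            candidate = path_metric[state] + hamming_branch_metric(symbol, ideal)
            if candidate < new_metric[nxt]:          # compare-select: keep the minimum
                new_metric[nxt] = candidate
                survivors[nxt] = (state, bit)
        path_metric = new_metric
        decisions.append(survivors)
    return path_metric, decisions
```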

Basic structure of a Viterbi decoder

The main functional units of a Viterbi decoder are:

1. Branch metric unit
2. Add-compare-select unit
3. Surviving memory unit

Branch Metric Unit

The BMU is typically the smallest unit of the Viterbi decoder. Its complexity increases exponentially with n and linearly with softbits, so the area and throughput of the BMU can be completely described by these two factors.

The area required for the implementation of the BMU is given by:

Area(BMU) = 2^n * (Area(HA) + Area(FA) * (softbits - 1)) + softbits * n * Area(INV)

where HA, FA and INV denote a half adder, a full adder and an inverter, respectively.
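
For illustration, the area expression above can be evaluated directly; the half-adder, full-adder and inverter unit areas below are placeholder values, since the real figures depend on the target technology library.

```python
# Sketch: evaluate the BMU area estimate given above.
# HA_AREA, FA_AREA and INV_AREA are placeholder unit areas (technology dependent).

def bmu_area(n, softbits, HA_AREA=1.0, FA_AREA=2.0, INV_AREA=0.5):
    """Area = 2^n * (HA + FA * (softbits - 1)) + softbits * n * INV, in arbitrary units."""
    return (2 ** n) * (HA_AREA + FA_AREA * (softbits - 1)) + softbits * n * INV_AREA

# Example: rate-1/3 code (n = 3) with 3-bit soft inputs:
# bmu_area(3, 3) -> 2**3 * (1.0 + 2.0 * 2) + 3 * 3 * 0.5 = 44.5
```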

Add Compare Select Unit


The PMU is a critical block both in terms of area and throughput. The key problem of the PMU design is the recursive nature of the add-compare-select (ACS) operation.

In order to increase the throughput or to reduce the area, optimizations can be introduced at the algorithmic, word or bit level.
Word-level optimizations work on folding (serialization) or unfolding (parallelization) of the ACS recursion loop.

In the folding technique, the same ACS is shared among a certain set of states. This technique trades off throughput for area.
With unfolding, two or more trellis stages are processed in a single recursion (this is called look-ahead). If the look-ahead is short, the area penalty is not high.

Compare-select-add (CSA) is a retimed variation of the ACS operation

Path Metric Block

As mentioned before, there are several options to implement the ACS operation. Radix-2 ACS is the basic design, with the lowest area and throughput. Other techniques such as CSA and radix-4 provide a higher throughput than radix-2 ACS at the cost of a higher area.
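
A behavioral sketch of a single radix-2 ACS step, as described above: the two candidate metrics of a state are added, compared, and the minimum is selected, while the decision bit is passed on to the survivor memory unit. This models the operation only; a hardware implementation would additionally bound the metric word length (for example with modulo normalization).

```python
# Sketch: one radix-2 add-compare-select (ACS) operation for a single trellis state.
# pm0/pm1 are the path metrics of the two predecessor states in the butterfly,
# bm0/bm1 the corresponding branch metrics.

def acs(pm0, bm0, pm1, bm1):
    """Return (survivor_metric, decision_bit) for one state."""
    cand0 = pm0 + bm0          # add
    cand1 = pm1 + bm1
    if cand0 <= cand1:         # compare
        return cand0, 0        # select: the path through predecessor 0 survives
    return cand1, 1            # select: the path through predecessor 1 survives
```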

Surviving Memory Unit

The survivor sequence update and storage unit (briefly called survivor memory unit, SMU) receives decisions from the PMU and produces the decoded sequence. The implementation techniques for this unit can be divided into two classes: standard and adaptive.

The trace-back depth (TBD) determines the number of memory access operations required by the SMU. The value of the TBD parameter depends on the transmission channel conditions: on noisy channels, trace-back paths need longer to merge, so (since a Viterbi decoder is usually designed for the worst case) TBD values become larger. On better channels, paths tend to merge faster, and the SMU may then run redundant memory access operations; adaptive SMU implementations are used to reduce the number of these redundant accesses.
Although adaptive techniques require additional hardware to be added to the SMU, the average power dissipation of the unit can be reduced thanks to the lower number of memory accesses.

The trace-back (TB) approach stores the decisions in an SRAM and reads them in reverse order during the trace-back cycle, as the update rate of the memory is much lower.
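
A minimal software model of the trace-back idea described above, reusing the decision memory produced by viterbi_forward() from the earlier sketch. Here the whole sequence is traced back from the minimum-metric end state; a hardware SMU would instead trace back a window of TBD decisions at a time.

```python
# Sketch: trace-back over the stored decisions (from viterbi_forward above).
# The decisions are read in reverse order, mirroring the reverse-order read of the TB SRAM.

def traceback(path_metric, decisions):
    """Recover the decoded bit sequence from the decision memory."""
    state = min(range(len(path_metric)), key=path_metric.__getitem__)   # best end state
    bits = []
    for survivors in reversed(decisions):
        prev_state, bit = survivors[state]
        bits.append(bit)
        state = prev_state
    bits.reverse()
    return bits

# End-to-end check on an error-free channel:
# msg = [1, 0, 1, 1, 0, 0, 1]
# pm, dec = viterbi_forward(encode(msg), list(trellis_branches()))
# traceback(pm, dec) == msg   # -> True
```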

Design Constraints
Behavioral Versus Structural Synthesis:

A commonly agreed issue in VLSI design is the proper choice of HDL description and coding style. Structural descriptions of combinational logic were especially popular at a time when synthesis tools were not intelligent enough.

Critical Factors in PMU Design:

The PMU contains a feedback loop; therefore, in general, its throughput cannot be increased by pipelining. Some look-ahead techniques allow loop unrolling and pipelining, but they are extremely expensive in terms of area and power consumption. The most critical design parameters for the PMU are the input bit width (i.e., softbits) and the number of bits to process per clock cycle.

CONCLUSIONS

A comprehensive analysis of the Viterbi decoder design space was presented
The importance of the different subunits of the decoder was summarized, depending on the optimization criteria
Different adder architectures for the PMU implementation were compared

THANK YOU
