
Signal Detection in M-ary Communication with AWGN

Problem statement
• Assumptions:
– M-ary signaling: a set of $M$ messages $\{d_k\}_{k=1}^{M}$ is transmitted by finite-energy waveforms/pulses $\{s_1(t), \cdots, s_M(t)\}$.
– AWGN channel: the received signal $r(t)$ at the front end of the detector contains two components: the uncontaminated transmitted signal $s_k(t)$ and additive white Gaussian noise $n(t)$ with PSD $S_n(f) = N_o/2$.
– Transmission rate: symbol/pulse rate $R_s = 1/T_s$; bit rate $R_b = R_s \times \log_2 M$ bits/sec.
– Synchronization is assumed at the receiver.
– In each $T_s$-second interval, the receiver decides the transmitted message $\hat{d} \in \{d_k\}_{k=1}^{M}$ from the received signal $r(t) = s_k(t) + n(t)$.
• Problem:
How to design an optimal receiver (decision rule) that minimizes the error probability $P_e = P(\hat{d} \neq d_k)$ given the received signal?
• Conceptual system block diagram:
[Block diagram: $d \in \{d_k\}$ → Transmitter → $s(t) \in \{s_k(t)\}$ → adder, where the noise $n(t)$ enters → $r(t) \in \{s_k(t) + n(t)\}$ → Detector → $\hat{d} \in \{d_k\}$]
Signal space representation


• Signal space: $S$ consists of all possible waveforms of the received signal $r(t)$ in each $T_s$ period, i.e., $S = \{s_1(t) + n(t), \cdots, s_M(t) + n(t)\}$
• Orthonormal basis signal set: $\Phi = \{\phi_1(t), \cdots, \phi_n(t)\}$, constructed from $\{s_1(t), \cdots, s_M(t)\}$
• Signal representation by vectors: $r(t) \Leftrightarrow \mathbf{r} = (r_1, \cdots, r_n)$ where $r_j = \int_{-\infty}^{\infty} r(t)\phi_j(t)\,dt$, $j = 1, \cdots, n$; $s_i(t) \Leftrightarrow \mathbf{s}_i = (s_{i1}, \cdots, s_{in})$ where $s_{ij} = \int_{-\infty}^{\infty} s_i(t)\phi_j(t)\,dt$, $i = 1, \cdots, M$, $j = 1, \cdots, n$ (a numerical sketch follows this list)
• Noise vector: $n(t) \Leftrightarrow \mathbf{n} = (n_1, \cdots, n_n)$ where $n_j = \int_{-\infty}^{\infty} n(t)\phi_j(t)\,dt$, $j = 1, \cdots, n$. Each $n_j$ is Gaussian with distribution $\mathcal{N}(0, N_o/2)$, and $(n_1, \cdots, n_n)$ are jointly Gaussian and independent
• Signal detection: choose $\hat{d}$ based on $\mathbf{r}$ in relation to $\{\mathbf{s}_1, \cdots, \mathbf{s}_M\}$.
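Below is a minimal numerical sketch of this projection, assuming a hypothetical one-dimensional basis (a unit-energy rectangular pulse on $[0, T_s]$) and an antipodal signal pair; the sample rate, energy, and noise level are illustrative choices, not values from the notes.

```python
import numpy as np

# Sketch: compute r_1 = ∫ r(t) φ_1(t) dt by a Riemann sum.
# All parameters below are illustrative assumptions.
Ts = 1.0                  # symbol period
fs = 1000                 # samples per symbol period
t = np.linspace(0.0, Ts, fs, endpoint=False)
dt = Ts / fs

phi1 = np.ones(fs) / np.sqrt(Ts)       # unit-energy rectangular basis pulse
E = 2.0
s1 = np.sqrt(E) * phi1                 # s_1(t) = +sqrt(E) φ_1(t); s_2 would be its negative

No = 1.0
rng = np.random.default_rng(0)
# Discrete-time stand-in for white noise with PSD No/2: with this scaling the
# projection n_1 has variance No/2, matching n_j ~ N(0, No/2) above.
n = rng.normal(0.0, np.sqrt(No / (2 * dt)), size=fs)
r = s1 + n                             # received waveform r(t) = s_1(t) + n(t)

r1 = np.sum(r * phi1) * dt             # projection r_1 = ∫ r(t) φ_1(t) dt
print(f"r1 = {r1:.3f} (noise-free value s11 = {np.sqrt(E):.3f})")
```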

Probability of correct decision


Let C represent the correct decision.
• When the receiver decides $\hat{d} = d_k$: $P(C|\mathbf{r}) = P(d_k|\mathbf{r})$
Proof: Given $\mathbf{r}$, there are two possible events when the receiver decides $\hat{d} = d_k$: $\{E_1 : d = d_k\}$ and $\{E_2 : d \neq d_k\}$, with $P(E_1) + P(E_2) = 1$, and "$C$ is true" is equivalent to "$E_1$ occurs". Therefore $P(C|\mathbf{r}) = P(E_1|\mathbf{r}) = P(d = d_k|\mathbf{r})$
• Optimal detection means $\max P(C)$, which is achieved by maximizing $P(C|\mathbf{r})$ for every $\mathbf{r}$
Proof: $P(C) = \int P(C|\mathbf{r})\,p_{\mathbf{r}}(\mathbf{r})\,d\mathbf{r}$, and since $p_{\mathbf{r}}(\mathbf{r}) \geq 0$, maximizing the integrand $P(C|\mathbf{r})$ pointwise maximizes the integral

Optimal decision rule
• $\hat{d}$ is to be chosen from $\{d_1, \cdots, d_M\}$ to maximize $P(C)$, which is equivalent to maximizing $P(C|\mathbf{r})$

• Maximum a posteriori (MAP) rule: $\hat{d} = d_k$ if $P(d_k|\mathbf{r}) \geq P(d_i|\mathbf{r})$ for all $i \neq k$

[Diagram: the received vector $\mathbf{r}$ (from $\{s_k + n\}$) is fed to a bank of $M$ candidate decisions; branch $i$ computes the posterior $P(d = d_i|\mathbf{r})$ for the decision $\hat{d} = d_i$, and the receiver outputs the $\hat{d} \in \{d_k\}$ whose branch has the largest $P(C|\mathbf{r})$.]

Decision functions
• A posteriori probability: $P(d_i|\mathbf{r}) = \dfrac{P(d_i)\,p_{\mathbf{r}}(\mathbf{r}|d_i)}{p_{\mathbf{r}}(\mathbf{r})}$ by Bayes' mixed rule

• $P(d_i)$: the a priori probability;
$\mathbf{r} = \mathbf{s}_i + \mathbf{n}$: each $\mathbf{s}_i$ is deterministic, $\mathbf{n}$ is an $n$-dimensional Gaussian random vector

• MAP decision rule: $\hat{d} = \arg\max_{d_i} J(d_i)$, where $J(d_i) = P(d_i)\,p_{\mathbf{r}}(\mathbf{r}|d_i)$ (the common denominator $p_{\mathbf{r}}(\mathbf{r})$ does not affect the maximization and is dropped)

$$p_{\mathbf{r}}(\mathbf{r}|d_i) = p_{\mathbf{r}}(\mathbf{r} = \mathbf{s}_i + \mathbf{n}) = p_{\mathbf{n}}(\mathbf{n} = \mathbf{r} - \mathbf{s}_i) = \frac{1}{(\pi N_o)^{n/2}}\, e^{-|\mathbf{n}|^2/N_o}\Big|_{\mathbf{n} = \mathbf{r} - \mathbf{s}_i} = \frac{1}{(\pi N_o)^{n/2}}\, e^{-|\mathbf{r} - \mathbf{s}_i|^2/N_o}$$

$$J(d_i) = \frac{P(d_i)}{(\pi N_o)^{n/2}}\, e^{-|\mathbf{r} - \mathbf{s}_i|^2/N_o}$$

$$\begin{aligned}
\frac{N_o}{2} \ln J(d_i) &= \frac{N_o}{2} \ln P(d_i) - \frac{1}{2}|\mathbf{r} - \mathbf{s}_i|^2 + \text{constant} \\
&= \frac{N_o}{2} \ln P(d_i) - \frac{1}{2}\Big(|\mathbf{r}|^2 + \underbrace{|\mathbf{s}_i|^2}_{E_i} - 2\,\mathbf{r}\cdot\mathbf{s}_i\Big) + \text{constant} \\
&= \underbrace{\frac{N_o}{2} \ln P(d_i) - \frac{1}{2}E_i}_{a_i} + \mathbf{r}\cdot\mathbf{s}_i - \frac{1}{2}|\mathbf{r}|^2 + \text{constant}
\end{aligned}$$

(Here $\sigma^2 = N_o/2$ per dimension, so the Gaussian normalization is $(2\pi \cdot N_o/2)^{n/2} = (\pi N_o)^{n/2}$.)

• New decision functions (each a monotone transformation of $J(d_i)$, hence yielding the same decision; see the sketch below):

$$J_1(d_i) = b_i = a_i + \mathbf{r}\cdot\mathbf{s}_i$$
$$J_2(d_i) = N_o \ln P(d_i) - |\mathbf{r} - \mathbf{s}_i|^2$$

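As a sanity check, here is a short sketch showing that $J_1$ and $J_2$ pick the same message (they differ only by terms independent of $i$, since $J_1 = \tfrac{1}{2}J_2 + \tfrac{1}{2}|\mathbf{r}|^2$). The constellation, priors, noise level, and received vector are made-up illustrative values.

```python
import numpy as np

# MAP detection with the two equivalent decision functions derived above:
#   J1(d_i) = a_i + r·s_i,  a_i = (No/2) ln P(d_i) − E_i/2
#   J2(d_i) = No ln P(d_i) − |r − s_i|^2
# Constellation, priors, No, and r are illustrative assumptions.
No = 1.0
S = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])  # signal vectors s_i (M = 4)
P = np.array([0.4, 0.3, 0.2, 0.1])                                # priors P(d_i)
r = np.array([0.6, 0.5])                                          # received vector

E = np.sum(S**2, axis=1)                          # symbol energies E_i = |s_i|^2
a = (No / 2) * np.log(P) - E / 2                  # bias terms a_i
J1 = a + S @ r                                    # correlation form
J2 = No * np.log(P) - np.sum((r - S)**2, axis=1)  # distance form

assert np.argmax(J1) == np.argmax(J2)             # both forms give the same decision
print(f"MAP decision: d_hat = d_{np.argmax(J1) + 1}")
```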
Optimal receiver implementations
MAP: $\hat{d} = \arg\max_{d_i} J_1(d_i)$, where $J_1(d_i) = b_i = a_i + \mathbf{r}\cdot\mathbf{s}_i$
• Implementation by correlators (a numerical sketch follows):

$$\text{I.}\quad \mathbf{r}\cdot\mathbf{s}_i = \int_{-\infty}^{\infty} r(t)\,s_i(t)\,dt = \int_0^{T_s} r(t)\,s_i(t)\,dt$$

$$\text{II.}\quad \mathbf{r}\cdot\mathbf{s}_i = \sum_{j=1}^{n} r_j s_{ij} = \sum_{j=1}^{n} \left(\int_0^{T_s} r(t)\,\phi_j(t)\,dt\right) s_{ij}$$
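A sketch of correlator form I, assuming hypothetical rectangular pulses; only the integral-to-sum approximation matters here.

```python
import numpy as np

# Correlator form I: r·s_i ≈ Σ r[k] s_i[k] Δt, one correlator per candidate pulse.
# The two pulses and the noise level are illustrative assumptions.
Ts, fs = 1.0, 1000
t = np.linspace(0.0, Ts, fs, endpoint=False)
dt = Ts / fs

s1 = np.where(t < Ts / 2, 1.0, -1.0)     # example pulse s_1(t)
s2 = np.ones(fs)                         # example pulse s_2(t), orthogonal to s_1
rng = np.random.default_rng(1)
r = s1 + rng.normal(0.0, 0.5, size=fs)   # received waveform: s_1(t) plus noise

corr = [np.sum(r * s) * dt for s in (s1, s2)]         # bank of correlator outputs
print(f"r·s1 = {corr[0]:.3f}, r·s2 = {corr[1]:.3f}")  # the s_1 branch should dominate
```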

• Implementation by matched filters (a numerical sketch follows):

$$\text{I.}\quad \mathbf{r}\cdot\mathbf{s}_i = \int r(\tau)\,s_i(\tau)\,d\tau = \int r(\tau)\,s_i\big(t - (T_s - \tau)\big)\,d\tau\,\Big|_{t=T_s} = r(t) * h(t)\,\big|_{t=T_s}$$

$$\text{II.}\quad \mathbf{r}\cdot\mathbf{s}_i = \sum_{j=1}^{n} r_j s_{ij} = \sum_{j=1}^{n} \left(\int r(\tau)\,\phi_j\big(t - (T_s - \tau)\big)\,d\tau\,\Big|_{t=T_s}\right) s_{ij} = \sum_{j=1}^{n} \Big(r(t) * h_j(t)\,\big|_{t=T_s}\Big)\, s_{ij}$$

That is, $\mathbf{r}\cdot\mathbf{s}_i$ (resp. $r_j$) is the output at $t = T_s$ of a filter matched to $s_i(t)$ (resp. $\phi_j(t)$), i.e., $h(t) = s_i(T_s - t)$ (resp. $h_j(t) = \phi_j(T_s - t)$), when $r(t)$ is applied to its input.
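A sketch verifying the matched-filter identity numerically: convolving $r(t)$ with $h(t) = s_i(T_s - t)$ and sampling at $t = T_s$ reproduces the correlator output. The pulse and noise level are illustrative assumptions, as above.

```python
import numpy as np

# Matched filter form I: (r * h)(Ts) with h(t) = s_i(Ts − t) equals ∫ r(τ) s_i(τ) dτ.
Ts, fs = 1.0, 1000
t = np.linspace(0.0, Ts, fs, endpoint=False)
dt = Ts / fs

si = np.where(t < Ts / 2, 1.0, -1.0)     # candidate pulse s_i(t) (illustrative)
rng = np.random.default_rng(2)
r = si + rng.normal(0.0, 0.5, size=fs)   # received waveform

h = si[::-1]                             # h(t) = s_i(Ts − t): time-reversed pulse
y = np.convolve(r, h) * dt               # filter output y(t) = (r * h)(t)
mf_out = y[fs - 1]                       # sample the output at t = Ts

corr_out = np.sum(r * si) * dt           # direct correlation, for comparison
print(f"matched filter: {mf_out:.6f}, correlator: {corr_out:.6f}")  # agree up to rounding
```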
• Block diagrams of optimal M-ary receivers

Probability of error
• MAP: $\displaystyle\max_{d_i} J_2(d_i) = N_o \ln P(d_i) - \underbrace{|\mathbf{r} - \mathbf{s}_i|^2}_{l_i^2}$

• Decision regions: $R_i = \{\mathbf{r} : J(d_i, \mathbf{r}) > J(d_k, \mathbf{r}) \text{ for all } k \neq i\}$.

Therefore, the MAP decision rule is equivalent to "choose $\hat{d} = d_i$ if $\mathbf{r} \in R_i$"
• When digital data are transmitted with equal probabilities ($P(d_i) = 1/M$ for $i = 1, \cdots, M$), the MAP rule reduces to the nearest-neighbor rule: choose the $d_i$ that minimizes $l_i = |\mathbf{r} - \mathbf{s}_i|$
• Error probability: $P_e = \sum_{i=1}^{M} P(d_i)\,P(e|d_i) = \sum_{i=1}^{M} P(d_i)\,P(\mathbf{r} \notin R_i \mid d_i)$
• $P_e$ is usually evaluated with respect to $\frac{\bar{E}_b}{N_o}$, where

$$\text{average symbol energy: } \bar{E}_s = \sum_{i=1}^{M} P(d_i)\,E_i = \sum_{i=1}^{M} P(d_i)\,|\mathbf{s}_i|^2$$
$$\text{average bit energy: } \bar{E}_b = \frac{\bar{E}_s}{\log_2 M}$$

The average signal power is $P_s = \bar{E}_b R_b = \bar{E}_s R_s$, and the average noise power is $P_n = N_o B$, where $B$ is the channel bandwidth.
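A Monte Carlo sketch of $P_e$ versus $\bar{E}_b/N_o$ for the simplest special case: equiprobable binary antipodal signaling ($M = 2$, $\mathbf{s}_{1,2} = \pm\sqrt{E_b}$ on a one-dimensional basis), where the MAP rule reduces to the nearest-neighbor rule and $P_e = Q(\sqrt{2E_b/N_o})$. The trial count and $E_b/N_o$ grid are arbitrary choices.

```python
import numpy as np
from math import erfc, sqrt

# Estimate Pe by simulation and compare with Pe = Q(sqrt(2 Eb/No)) = erfc(sqrt(Eb/No))/2.
rng = np.random.default_rng(3)
Eb = 1.0
trials = 200_000

for EbNo_dB in (0, 2, 4, 6, 8):
    No = Eb / 10 ** (EbNo_dB / 10)
    s = np.sqrt(Eb) * rng.choice([-1.0, 1.0], size=trials)  # equiprobable symbols ±sqrt(Eb)
    r = s + rng.normal(0.0, np.sqrt(No / 2), size=trials)   # r = s + n, n ~ N(0, No/2)
    d_hat = np.sqrt(Eb) * np.sign(r)                        # nearest neighbor: min |r − s_i|
    pe_sim = np.mean(d_hat != s)
    pe_theory = 0.5 * erfc(sqrt(Eb / No))
    print(f"Eb/No = {EbNo_dB} dB: simulated Pe = {pe_sim:.5f}, theory = {pe_theory:.5f}")
```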

Examples
• M-ary PSK (section 1-3.3)
• M-ary QASK (section 1-3.5)
