
CHAPTER 6

PULSE-CODE MODULATION

Dr.Eng. Fawzy Abujarad

Al-Azhar University

Faculty of Engineering and Information Technology


Computer & Communication Engineering
Table of Contents

1 Digitization Techniques for Analog Messages
   1.1 PULSE-CODE MODULATION
      1.1.1 Quantization Process
      1.1.2 Quantization Noise
   1.2 PULSE-CODE MODULATION
      1.2.1 Sampling
      1.2.2 Quantization
      1.2.3 Encoding
      1.2.4 Transmission Bandwidth and the Output SNR
Chapter 1

Digitization Techniques for Analog Messages

Contents

• Pulse-Code Modulation (PCM): Generation and Reconstruction

• Quantization Noise

• PCM with Noise

• Decoding Noise

• Error Threshold

• PCM versus Analog Modulation

• Terrestrial, Mobile and Satellite Communication Networks

1.1 PULSE-CODE MODULATION


PCM is a digital transmission system with an analog-to-digital converter (ADC) at the input and a digital-to-
analog converter (DAC) at the output.
Figure 1.1a diagrams the functional blocks of a PCM generation system. The analog input waveform x(t)
is lowpass filtered and sampled to obtain x(kTs). A quantizer rounds off the sample values to the nearest
discrete value in a set of q quantum levels. The resulting quantized samples xq(kTs) are discrete in time (by
virtue of sampling) and discrete in amplitude (by virtue of quantizing). To display the relationship between
x(kTs) and xq(kTs), let the analog message be a voltage waveform normalized such that |x(t)| ≤ 1 V. Uniform
quantization subdivides the 2-V peak-to-peak range into q equal steps of height 2/q V, as shown in Fig. 1.1b.
The quantum levels are then taken to be at ±1/q, ±3/q, ..., ±(q−1)/q in the usual case when q is an even integer.
A quantized value such as xq(kTs) = 5/q corresponds to any sample value in the range 4/q < x(kTs) ≤ 6/q.
Next, an encoder translates the quantized samples into digital code words. The encoder works with M-ary
digits and produces for each sample a codeword consisting of ν digits in parallel. Since there are M^ν possible
M-ary codewords with ν digits per word, unique encoding of the q different quantum levels requires that M^ν ≥ q.
The parameters M, ν, and q should be chosen to satisfy the equality, so that

q = M^ν,  ν = logM q   (equivalently, L = M^R, R = logM L)

Thus, the number of quantum levels for binary PCM equals some power of 2, namely

L = 2^R,  R = log2 L


Finally, successive codewords are read out serially to constitute the PCM waveform, an M-ary digital signal.
The PCM generator thereby acts as an ADC, performing analog-to-digital conversions at the sampling rate
fs = 1/Ts . A timing circuit coordinates the sampling and parallel-to-serial readout.

Figure 1.1: (a) PCM generation system; (b) quantization characteristic.

Each encoded sample is represented by an R-digit output word, so the signaling rate becomes r = Rfs, with
fs ≥ 2W. Therefore, the bandwidth needed for PCM baseband transmission is

BT ≥ r/2 = Rfs/2 ≥ RW

Fine-grain quantization for accurate reconstruction of the message waveform requires q ≫ 1, which increases
the transmission bandwidth by the factor R = logM L times the message bandwidth W.
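As a quick numeric check of these relations, the sketch below evaluates r = Rfs and the minimum bandwidth BT = r/2; the values W = 4 kHz and R = 8 bits are illustrative assumptions, not taken from the text.

```python
# Minimal sketch of the PCM rate/bandwidth relations above.
# W = 4 kHz and R = 8 bits are illustrative values, not from the text.

def pcm_rates(W_hz, R_bits, oversample=1.0):
    """Return (sampling rate fs, signaling rate r, minimum bandwidth BT)."""
    fs = 2 * W_hz * oversample   # at least the Nyquist rate fs >= 2W
    r = R_bits * fs              # R bits per sample -> r = R * fs bit/s
    BT = r / 2                   # minimum baseband bandwidth BT >= r/2
    return fs, r, BT

fs, r, BT = pcm_rates(4000, 8)
print(fs, r, BT)   # fs = 8000 Hz, r = 64000 bit/s, BT = 32000 Hz
```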
Now consider a PCM receiver with the reconstruction system in Fig. 1.2a. The received signal may be
contaminated by noise, but regeneration yields a clean and nearly errorless waveform if (S/N )R is sufficiently
large. The DAC operations of serial-to-parallel conversion, M-ary decoding, and sample-and-hold generate the
analog waveform xq(t) drawn in Fig. 1.2b. This waveform is a "staircase" approximation of x(t), similar
to flat-top sampling except that the sample values have been quantized. Lowpass filtering then produces the
smoothed output signal yD(t), which differs from the message x(t) to the extent that the quantized samples
differ from the exact sample values x(kTs). Perfect message reconstruction is therefore impossible in PCM,
even when random noise has no effect. The ADC operation at the transmitter introduces permanent errors
that appear at the receiver

Figure 1.2: (a) PCM receiver; (b) reconstructed waveform.

as quantization noise in the reconstructed signal. We’ll study this quantization noise after an example of PCM
hardware implementation.

1.1.1 Quantization Process


Amplitude quantization is defined as the process of transforming the sample amplitude x(nTs) of a message
signal x(t) at time t = nTs into a discrete amplitude v(nTs) taken from a finite set of possible amplitudes.
The input-output relationship of the quantizer is represented by a staircase function shown in Fig. 1.3.
A sample x is specified by index k if it falls into the interval

Ik : {xk−1 < x ≤ xk},   k = 2, · · · , N − 1

The code k is transmitted and the receiver transforms it into an amplitude yk. The amplitudes xk are called
the decision levels and the amplitudes yk are called the reconstruction levels. The quantization error introduced
by the process is q = x − y. If x represents samples from a random process, then we can consider the errors
introduced by the quantizer as a form of additive quantization noise. It is important to recognize that the
noise would be no different if the sampling were performed after quantization.
The length of the decision interval Ik (i.e., xk − xk−1) is denoted by ∆k, the step size. Two special decision
levels x0 and xN, known as the overload levels, are defined such that x1 − x0 = x2 − x1 and xN − xN−1 =
xN−1 − xN−2. Note that y1 is the reconstruction level whenever x < x1, and yN is the reconstruction level

Figure 1.3: Quantizer.

for x > xN−1. If the signal is limited to the range x0 to xN, then we have a no-overload situation and the
resulting noise is known as granular noise. The term overload distortion denotes the quantization noise caused
by the input signal falling outside this range.
The first example of PCM encoding
In this example the signal is quantized at 11 time points using 8 quantization segments. All values that fall
into a specific segment are approximated by the corresponding quantization level, which lies in the middle of
the segment. The levels are encoded using the following table:

Table 1.1: Quantization levels with belonging code words

Level Code word


0 000
1 001
2 010
3 011
4 100
5 101
6 110
7 111

Figure 1.4 shows the process of quantizing and digitizing the signal. The samples shown are already quantized:
they are approximated by the nearest quantization level. To the right of each sample is the number of its
quantization level. This number is converted into a 3-bit code word using the table above. Figure 1.5 shows
the process of signal restoration. The restored signal is formed from the taken samples. It can be noticed
that the restored signal diverges from the input signal; this divergence is a consequence of quantization noise.
The noise always has the same intensity, independent of the signal intensity, so if the signal intensity drops,
the quantization noise becomes more noticeable (the signal-to-noise ratio drops). The PCM-encoded signal
in binary form is: 101 111 110 001 010 100 111 100 011 010 101
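Since Table 1.1 is just the unsigned 3-bit binary encoding of the level number, the bitstream above can be reproduced in a couple of lines; the level sequence below is read off the encoded signal given in the text.

```python
# Encoding the 11 quantized samples of the first example into 3-bit words.
# The level sequence is read off the bitstream given in the text.

levels = [5, 7, 6, 1, 2, 4, 7, 4, 3, 2, 5]          # quantization level per sample
words = [format(level, '03b') for level in levels]   # Table 1.1: level -> 3-bit word
print(' '.join(words))   # 101 111 110 001 010 100 111 100 011 010 101
```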
Basic compression methods

1. Reducing the number of quantization levels

The number of quantization segments can be reduced by joining two neighboring segments into one. This

Figure 1.4: Quantization and digitization of a signal.

means we are left with 4 quantization segments instead of the 8 segments of the previous case. Four
quantization segments can be coded using 2-bit code words, as shown in the table below.

Table 1.2: Quantization levels with belonging code words (after compression)

Level Code word


0 00
1 01
2 10
3 11

Figure 1.6 shows the reconstructed signal after compression. It still has the same basic contours, but the
distortions are greater due to the coarser approximation: the quantization noise has increased. This is because
the quantization step is now twice the size of the uncompressed PCM step. The compressed signal is PCM
encoded as follows: 10 11 11 00 01 10 11 10 01 01 10. After compression we have 22 bits, which means a
33% reduction in size, a compression ratio of 1.5:1.
Practical use of this method
In practice, a PCM-encoded audio signal is compressed at higher rates, for example from 16 to 8 bits per
sample (a ratio of 2:1). This compression standard (called the A-law) uses nonuniform quantization. The
quantization levels are not evenly distributed across the quantization range: they are denser near the zero
level and sparser close to the maximum level. This way the quantization noise is reduced for lower-intensity
signals.
2. Reducing the number of samples
Another basic method of compression is to reduce the number of samples. This can be done by replacing
each pair of adjacent samples with one sample equal to their average. This halves the number of samples,
which means the sampling frequency is halved as well. Because the bandwidth of the restored signal is directly
proportional to the sampling frequency (B = 0.5 fs), the net result is that the bandwidth is also halved. In
our example this resulted in the loss of the highest frequency component of the signal. In practice we have
to decide which frequencies are relevant to our application and which can be left out to achieve a reasonable
recording size.
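A minimal sketch of this pairwise-averaging idea (the sample values are illustrative, not read from the figures):

```python
# Pairwise averaging as described above: every two samples are replaced by
# their mean, halving the sampling rate (and so the recoverable bandwidth
# B = 0.5 * fs). The sample values are illustrative.

def halve_samples(samples):
    """Replace each adjacent pair with its average; an odd tail sample is kept."""
    out = [(samples[i] + samples[i + 1]) / 2 for i in range(0, len(samples) - 1, 2)]
    if len(samples) % 2:
        out.append(samples[-1])
    return out

print(halve_samples([5, 7, 6, 1, 2, 4, 7, 4, 3, 2, 5]))   # [6.0, 3.5, 3.0, 5.5, 2.5, 5]
```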

Figure 1.5: Process of restoring a signal.

The second example

The second example takes a more quantitative approach to PCM: in the following text an approximate
calculation will be shown. An example analog signal is shown in Fig. 1.8. Now let's take values of the analog
signal at discrete time intervals.

Table 1.3: Analog signal values taken in discrete time intervals.

Time Analog signal value [V]


0 2.12
1 1.84
2 -0.08
3 -1.07
4 -0.02
5 0.42
6 1.80
7 1.30
8 1
9 -0.5
10 -1.12

The first step was to take values of the signal at discrete time intervals; now it is time for amplitude
quantization. In the first example the code word was the binary form of the quantization level (level 3 →
code word 011; see the table of the first example). Encoding in the second example is different; it is explained
below.

∆ = 2mmax / L

where mmax is the maximum input voltage, L the number of quantization levels, and ∆ the quantization
step, with

L = 2^R

where R is the number of encoding bits. In this example mmax = 2.1 V, R = 4 ⇒ L = 16, ∆ = 0.275.

Figure 1.6: Compressed and restored signal with a restored sample.

Segments are numbered such that segment 0 represents the lowest input voltage range. There are positive
and negative values of the analog input signal, so 1 bit of the code word is used to code the sign of the value
and the other bits (3 in this example) represent the binary form of a discrete amplitude value determined by
the quantization step. The most significant bit (MSB) represents the sign of the input value, and the rest of
the code word is the binary-encoded result of amplitude quantization.
FORMAT OF CODE WORD: Now let's see the encoding of two analog values.
Ex. 1: x = 2.10 V, ∆ = 0.275:

N = ⌊|x|/∆⌋ = ⌊2.1/0.275⌋ = ⌊7.6363⌋ = 7 (decimal) = 111 (binary)

where x is the analog value, N is the result of amplitude quantization, and ⌊·⌋ discards the decimal places.
Code word: x is a positive voltage, so the MSB is 1; the three remaining bits are the binary-coded result of
amplitude quantization (N = 111). The code word in this case is therefore 1111.
Ex. 2: For x = −1.64 V, ∆ = 0.275, N = ⌊1.64/0.275⌋ = 5 (decimal) = 101 (binary).
Code word: x is a negative voltage, so the MSB is now 0; the three remaining bits are the binary-coded result
of amplitude quantization (N = 101). The code word in this case is therefore 0101.
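The sign-magnitude encoding just described can be sketched in a few lines; the clamp to the 3-bit range is my addition for out-of-range inputs, since the text does not treat overload here.

```python
import math

DELTA = 0.275   # quantization step used in the text's second example

def encode_sample(x, delta=DELTA):
    """Sign-magnitude code word: MSB 1 for x >= 0, 0 for x < 0,
    followed by 3 bits of N = floor(|x| / delta)."""
    n = min(int(math.floor(abs(x) / delta)), 7)   # clamp to 3-bit range (assumption)
    sign = '1' if x >= 0 else '0'
    return sign + format(n, '03b')

print(encode_sample(2.10))    # 1111  (N = floor(7.6363) = 7)
print(encode_sample(-1.64))   # 0101  (N = floor(5.96) = 5)
```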
As was presented, the quantization level is the value which lies in the middle of the segment, so every
quantization level in this example has a corresponding voltage representative. The following table shows the
quantization levels with their voltage representatives and code words. After a code word is calculated, it is
possible to find in which quantization level the observed voltage lies. Note that, unlike in the first example,
the code word in general is not a binary-represented quantization level: positive quantization levels are
represented directly in binary, while negative levels are represented differently, in a way that corresponds to
the encoding process. After the quantization levels and their voltage representatives are established, we can
see how the example analog input is quantized and digitized.
Finally, the PCM-encoded input signal in binary form looks like this: 1111 1110 0000 0100 0000 1001 1110
1100 1011 0001 0100. We used 44 bits to encode this signal.
If we want to restore the original signal we have to follow the digital PCM recording and, using the
quantization-level representatives, form an analog output. Restoring a signal is shown in the chart below. As
can be seen, there is a slight divergence of the restored signal from the input signal. This divergence is the
result of quantization, because the process of quantization is accompanied by quantization noise. Now, let's
calculate the

Figure 1.7: Compressed and restored signal with restored samples

quantization noise (as normalized power):

quantization noise = ∆²/12

For our example this gives 0.275²/12 ≈ 6.3 mW. More important than the quantization noise itself is the
signal-to-noise ratio. We would need the power of the signal, which we do not know; just to show approximately
how the calculation goes, suppose the input signal is a full-scale sinusoid. The signal-to-noise ratio is usually
expressed in dB:

SNRdB = 10 log[(mmax²/2) / (mmax²/(3L²))] = 1.76 + 6.02R

1.1.2 Quantization Noise


Let the quantizer input m be the sample value of a zero-mean random variable M. A quantizer g(·) maps the
input random variable M of continuous amplitude into a discrete random variable V. Let the quantization
error be denoted by the random variable Q of sample value q. We may thus write

q = m − v

or

Q = M − V

Consider then an input m of continuous amplitude in the range (−mmax, mmax); we find that the step size of
the quantizer is given by

∆ = 2mmax / L

where L is the total number of representation levels.
Assume that the quantization error Q is a uniformly distributed random variable, and that the interfering
effect of the quantization noise on the quantizer input is similar to that of thermal noise. We may then
express the probability density function of the quantization error Q as

fQ(q) = 1/∆  for −∆/2 < q ≤ ∆/2,  and 0 otherwise    (1.1)

Figure 1.8: The example analog signal.


Figure 1.9: Code word format: a sign bit (1 = +, 0 = −) followed by 3 bits giving the binary-coded result of amplitude quantization.

We must ensure that the incoming signal does not overload the quantizer. Then, with the mean of the
quantization error being zero, its variance σQ² is the same as the mean-square value:

σQ² = E[Q²] = ∫ from −∆/2 to ∆/2 of q² fQ(q) dq

σQ² = (1/∆) ∫ from −∆/2 to ∆/2 of q² dq = ∆²/12
Typically, the L-ary number k, denoting the kth representation level of the quantizer, is transmitted to the
receiver in binary form. Let R denote the number of bits per sample used in the construction of the binary
code. We may then write

L = 2^R,  R = log2 L

∆ = 2mmax / 2^R

Table 1.4: Input voltage intervals with belonging segments

Input voltage range [V]    Segment

−8∆ to −7∆    0
−7∆ to −6∆    1
−6∆ to −5∆    2
−5∆ to −4∆    3
−4∆ to −3∆    4
−3∆ to −2∆    5
−2∆ to −∆     6
−∆ to 0       7
0 to ∆        8
∆ to 2∆       9
2∆ to 3∆     10
3∆ to 4∆     11
4∆ to 5∆     12
5∆ to 6∆     13
6∆ to 7∆     14
7∆ to 8∆     15

σQ² = (1/3) mmax² 2^(−2R)

Let P denote the average power of the message signal x(t). We may then express the output signal-to-noise
ratio of a uniform quantizer as

SNR = P / σQ² = (3P / mmax²) 2^(2R)

or

SNR = P / σQ² = (3P / mmax²) L²

or

SNR = 3L² (mrms² / mmax²)    (1.2)
where mmax is the peak amplitude value. The above equation shows that the output signal-to-noise ratio of
the quantizer increases exponentially with increasing number of bits per sample, R. Recognizing that an
increase in R requires a proportionate increase in the channel (transmission) bandwidth B, we thus see that
the use of a binary code for the representation of a message signal (as in pulse-code modulation) provides a
more efficient method than frequency modulation (FM) for trading increased channel bandwidth for improved
noise performance. In making this statement, we presume that the FM system is limited by receiver noise,
whereas the binary-coded modulation system is limited by quantization noise.
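The ∆²/12 result can also be verified empirically. The sketch below (an illustrative simulation, not part of the text) quantizes uniformly distributed samples with a mid-rise characteristic and compares the measured noise power with ∆²/12.

```python
import math
import random

# Empirical sketch (not from the text): uniform quantization of a uniformly
# distributed input gives a noise variance close to delta^2 / 12.

random.seed(0)
m_max, R = 1.0, 8
L = 2 ** R
delta = 2 * m_max / L

samples = [random.uniform(-m_max, m_max) for _ in range(100_000)]
# mid-rise uniform quantizer: round each sample to the center of its step
quantized = [(math.floor(x / delta) + 0.5) * delta for x in samples]
errors = [x - q for x, q in zip(samples, quantized)]
noise_power = sum(e * e for e in errors) / len(errors)

print(abs(noise_power / (delta ** 2 / 12) - 1) < 0.1)   # True: within 10% of delta^2/12
```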

1.2 PULSE-CODE MODULATION


With the sampling and quantization processes at our disposal, we are now ready to describe pulse-code mod-
ulation, which, as mentioned previously, is the most basic form of digital pulse modulation. In pulse-code
modulation (PCM), a message signal is represented by a sequence of coded pulses, which is accomplished by
representing the signal in discrete form in both time and amplitude. The basic operations performed in the
transmitter of a PCM system are sampling, quantizing, and encoding. The lowpass filter prior to sampling is
included to prevent aliasing of the message signal. The quantizing and encoding operations are usually per-
formed in the same circuit, which is called an analog-to-digital converter. The basic operations in the receiver
are regeneration of impaired signals, decoding, and reconstruction of the train of quantized samples.
In what follows, we describe the various operations that constitute a basic PCM system.

Figure 1.10: Quantization levels with belonging voltage representatives and code words

1.2.1 Sampling
The incoming message signal is sampled with a train of narrow rectangular pulses so as to closely approximate
the instantaneous sampling process. To ensure perfect reconstruction of the message signal at the receiver,
the sampling rate must satisfy fs ≥ 2W, where 2W (twice the highest frequency component W) is the Nyquist
rate, in accordance with the sampling theorem. In practice, a low-pass anti-aliasing filter is used at the front
end of the sampler to exclude frequencies greater than W before sampling.

1.2.2 Quantization
Recall that the SNR is an indication of the quality of the received signal. Ideally we would like to have a
constant SNR (the same quality) for all values of the message signal power m²(t). Unfortunately, the SNR is
directly proportional to the signal power m²(t), which varies from talker to talker by as much as 40 dB (a
power ratio of 10⁴). This means the SNR can vary widely, depending on the talker. Even for the same talker,
the quality of the received signal will deteriorate markedly when the person speaks softly. Statistically, it is
found that smaller amplitudes predominate in speech and larger amplitudes are much less frequent. This
means the SNR will be low most of the time.
The root of this difficulty lies in the fact that the quantizing steps are of uniform value ∆ = 2mmax/L.
The quantization noise σQ² = ∆²/12 is directly proportional to the square of the step size. The problem can be
solved by using smaller steps for smaller amplitudes (nonuniform quantizing), as shown in Fig. 1.14. The same

Figure 1.11: PCM encoded input signal value

result is obtained by first compressing signal samples and then using a uniform quantization. The input-output
characteristics of a compressor are shown in Fig. 1.15. The horizontal axis is the normalized input signal (i.e.,
the input signal amplitude m divided by the signal peak value mmax ). The vertical axis is the output signal
y. The compressor maps input signal increments ∆m into larger increments ∆y for small input signals, and
vice versa for large input signals. Hence, a given interval ∆m contains a larger number of steps (or smaller
step size) when m is small. The quantization noise is smaller for smaller input signal power. An approximately
logarithmic compression characteristic yields a quantization noise nearly proportional to the signal power m²(t),
thus making the SNR practically independent of the input signal power over a large dynamic range (see Fig.
6.13). This approach of equalizing the SNR is similar to the use of progressive income tax to equalize incomes:
the loud talkers and stronger signals are penalized with larger noise steps ∆ in order to compensate the soft
talkers and weaker signals. Among several choices, two compression laws have been accepted as standards by
the CCITT: the µ-law used in North America and Japan, and the A-law used in Europe, the rest of the world,
and international routes. Both the µ-law and the A-law curves have odd symmetry about the vertical axis.
The µ-law (for positive amplitudes) is given by

y = ln(1 + µ|x|) / ln(1 + µ)    (1.3)
where vi represents the input voltage and vi(max) its maximum value, and x is the normalized input voltage

x = vi / vi(max)

Figure 1.12: The input signal with digitized samples

and y is the normalized output voltage

y = vo / vo(max)

|x| is the magnitude of x, and µ is the compression parameter; µ = 255 is widely used.
Another compression law that is used in practice is the so-called A-law, defined by

y = A|x| / (1 + ln A)    for |x| ≤ 1/A

y = (1 + ln(A|x|)) / (1 + ln A)    for 1/A ≤ |x| ≤ 1    (1.4)

A = 87.6 is widely used.
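Both compression laws are easy to state in code. A sketch, for normalized inputs x in [−1, 1], using the standard values µ = 255 and A = 87.6; the negative branch is obtained from the odd symmetry noted above.

```python
import math

def mu_law(x, mu=255.0):
    """mu-law compressor (Eq. 1.3) for normalized x in [-1, 1]."""
    return math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)

def a_law(x, A=87.6):
    """A-law compressor (Eq. 1.4) for normalized x in [-1, 1]."""
    ax = abs(x)
    if ax < 1 / A:
        y = A * ax / (1 + math.log(A))          # linear segment near zero
    else:
        y = (1 + math.log(A * ax)) / (1 + math.log(A))
    return math.copysign(y, x)

# small inputs are boosted, which is what equalizes the SNR across talkers
print(round(mu_law(0.1), 3))   # ~0.591
print(round(a_law(0.1), 3))    # ~0.579
```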

1.2.3 Encoding
In combining the processes of sampling and quantization, the specification of a continuous message (baseband)
signal becomes limited to a discrete set of values, but not in the form best suited to transmission over a
telephone line or radio path. To exploit the advantages of sampling and quantizing for the purpose of making
the transmitted signal more robust to noise, interference and other channel impairments, we require the use of
an encoding process to translate the discrete set of sample values to a more appropriate form of signal. Any

Figure 1.13: Input signal and restored signal

plan for representing each of this discrete set of values as a particular arrangement of discrete events is called
a code. One of the discrete events in a code is called a code element or symbol. For example, the presence
or absence of a pulse is a symbol. A particular arrangement of symbols used in a code to represent a single
value of the discrete set is called a code word or character. In a binary code, each symbol may be either of
two distinct values or kinds, such as the presence or absence of a pulse. The two symbols of a binary code are
customarily denoted as 0 and 1. In a ternary code, each symbol may be one of three distinct values or kinds,
and so on for other codes. However, the maximum advantage over the effects of noise in a transmission medium
is obtained by using a binary code, because a binary symbol withstands a relatively high level of noise and is
easy to regenerate. Suppose that, in a binary code, each code word consists of R bits (bit is an acronym for
binary digit); thus R denotes the number of bits per sample. Then, using such a code, we may represent a total
of 2^R distinct numbers. For example, a sample quantized into one of 256 levels may be represented by an 8-bit
code word. There are several ways of establishing a one-to-one correspondence between representation levels
and code words. A convenient method is to express the ordinal number of the representation level as a binary
number. In the binary number system, each digit has a place-value that is a power of 2.
Line Codes
Any of several line codes can be used for the electrical representation of a binary data stream. Figure 1.16
displays the waveforms of five important line codes for the example data stream 01101001. The five line codes
illustrated in Figure 1.16 are described here:
• Unipolar nonreturn-to-zero (NRZ) signaling

In this line code, symbol 1 is represented by transmitting a pulse of amplitude A for the duration of the

Figure 1.14: Nonuniform quantization.


Figure 1.15: Compressor characteristic for nonuniform quantization.

symbol, and symbol 0 is represented by switching off the pulse, as in Figure 1.16a. This line code is
also referred to as on-off signaling. Disadvantages of on-off signaling are the waste of power due to the
transmitted DC level and the fact that the power spectrum of the transmitted signal does not approach
zero at zero frequency.

• Polar nonreturn-to-zero (NRZ) signaling


In this second line code, symbols 1 and 0 are represented by transmitting pulses of amplitudes +A and -A,
respectively, as illustrated in Figure 1.16b. This line code is relatively easy to generate but its disadvantage
is that the power spectrum of the signal is large near zero frequency.

• Unipolar return-to-zero (RZ) signaling


In this other line code, symbol 1 is represented by a rectangular pulse of amplitude A and half-symbol
width, and symbol 0 is represented by transmitting no pulse, as illustrated in Figure 1.16c. An attractive
feature of this line code is the presence of delta functions at f = 0, ±1/Tb in the power spectrum of the
transmitted signal, which can be used for bit-timing recovery at the receiver. However, its disadvantage is
that it requires 3 dB more power than polar return-to-zero signaling for the same probability of symbol
error.

• Bipolar return-to-zero (BRZ) signaling



Figure 1.16: Line codes for the electrical representations of binary data. (a) Unipolar NRZ signaling. (b) Polar
NRZ signaling. (c) Unipolar RZ signaling. (d) Bipolar RZ signaling. (e) Split-phase or Manchester code.

This line code uses three amplitude levels as indicated in Figure 1.16d. Specifically, positive and negative
pulses of equal amplitude (i.e., +A and -A) are used alternately for symbol 1, with each pulse having a
half-symbol width; no pulse is always used for symbol 0. A useful property of the BRZ signaling is that the
power spectrum of the transmitted signal has no DC component and relatively insignificant low-frequency
components for the case when symbols 1 and 0 occur with equal probability. This line code is also called
alternate mark inversion (AMI) signaling.

• Split-phase (Manchester code)


In this method of signaling, illustrated in Figure 1.16e, symbol 1 is represented by a positive pulse of
amplitude A followed by a negative pulse of amplitude -A, with both pulses being half-symbol wide.
For symbol 0, the polarities of these two pulses are reversed. The Manchester code suppresses the DC
component and has relatively insignificant low-frequency components, regardless of the signal statistics.
This property is essential in some applications.
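The five mappings can be sketched at half-symbol resolution (amplitude A normalized to 1); the helper name and list representation are illustrative, not a standard API.

```python
# Half-symbol-resolution sketch of the line codes in Figure 1.16 for the
# example data stream 01101001. Each bit maps to two half-symbol amplitudes.

def line_code(bits, scheme):
    out, polarity = [], 1                      # polarity alternates for bipolar RZ
    for b in bits:
        if scheme == 'unipolar_nrz':
            out += [b, b]
        elif scheme == 'polar_nrz':
            out += [1, 1] if b else [-1, -1]
        elif scheme == 'unipolar_rz':
            out += [b, 0]
        elif scheme == 'bipolar_rz':           # alternate mark inversion (AMI)
            if b:
                out += [polarity, 0]
                polarity = -polarity
            else:
                out += [0, 0]
        elif scheme == 'manchester':           # 1 -> +A then -A; 0 -> -A then +A
            out += [1, -1] if b else [-1, 1]
    return out

bits = [0, 1, 1, 0, 1, 0, 0, 1]
print(line_code(bits, 'bipolar_rz'))
# [0, 0, 1, 0, -1, 0, 0, 0, 1, 0, 0, 0, 0, 0, -1, 0]
```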

1.2.4 Transmission Bandwidth and the Output SNR


For a binary PCM, we assign a distinct group of R binary digits (bits) to each of the L quantization levels.
Because a sequence of R binary digits can be arranged in 2^R distinct patterns,

L = 2^R  or  R = log2 L

Each quantized sample is thus encoded into R bits. Because a signal x(t) band-limited to B Hz requires a
minimum of 2B samples per second, we require a total of 2RB bits per second (bps), that is, 2RB pieces of
information per second. Because a unit bandwidth (1 Hz) can transmit a maximum of two pieces of information
per second, we require a minimum channel bandwidth BW, given by

BW = RB Hz

This is the theoretical minimum transmission bandwidth required to transmit the PCM signal. We shall see
that for practical reasons we may use a transmission bandwidth higher than this.
Notes
1. For a given number of quantisation levels, L, the number of binary digits required for each PCM code word
is R = log2 L. The PCM peak signal-to-quantisation-noise ratio, (SNR)peak, is therefore:

SNR = 3L² = 3(2^R)²

If the ratio of peak to mean signal power (see Eq. 1.2) is denoted by α, then the average signal-to-quantisation-
noise ratio is:

SNR = 3(2^(2R))(1/α)

or, in dB,

SNR = 4.8 + 6R − αdB

For a sinusoidal signal α = 2 (or 3 dB), and for speech α = 10 dB.
2. If distortion is to be avoided, the maximum peak signal must not be allowed to exceed half the total input
voltage range, i.e., xmax = L∆/2. In this case

SNR = 3k²L²

where k = xrms/xmax.
Example
A digital communications system is to carry a single voice signal using linearly quantised PCM. What PCM
bit rate will be required if an ideal anti-aliasing filter with a cut-off frequency of 3.4 kHz is used at the transmitter
and the signal to quantisation noise ratio is to be kept above 50 dB?

SNR = 4.8 + 6R − αdB

R = (50 + 10 − 4.8) / 6 = 9.2

10 bits per sample are therefore required. The sampling rate is given by Nyquist's rule, fs ≥ 2fH: fs =
2 × 3.4 kHz = 6.8 kHz. The PCM bit rate (or, more strictly, binary baud rate) is therefore

r = fs R = 6.8 × 1000 × 10 bit/s = 68 kbit/s

Example
A signal x(t) band-limited to 3 kHz is sampled at a rate 33⅓% higher than the Nyquist rate. The maximum
acceptable error in the sample amplitude (the maximum quantization error) is 0.5% of the peak amplitude xmax .
The quantized samples are binary coded. Find the minimum bandwidth of a channel required to transmit the
encoded binary signal. If 24 such signals are time-division-multiplexed, determine the minimum transmission
bandwidth required to transmit the multiplexed signal.
The Nyquist sampling rate = 2 × 3000 = 6000 Hz (samples per second), but the actual sampling rate is
6000 × (4/3) = 8000 Hz. Let the quantization step be ∆; the maximum quantization error is then ∆/2.
Therefore

∆/2 = xmax/L = 0.5 xmax/100  ⇒  L = 200

For binary coding, L must be a power of 2. Hence, the next higher value of L that is a power of 2 is L =
256.

We need R = log2 256 = 8 bits per sample, and we require transmission of a total of C = 8 × 8000 = 64,000
bits/s. Because we can transmit up to 2 bits/s per hertz of bandwidth, we require a minimum transmission
bandwidth BW = C/2 = 32 kHz. The multiplexed signal has a total of CM = 24 × 64000 = 1.536 Mbit/s,
which requires a minimum of 1.536/2 = 0.768 MHz of transmission bandwidth.
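The arithmetic of this example can be reproduced step by step:

```python
import math

# Reproducing the arithmetic of the worked example above.

W = 3000                              # signal bandwidth, Hz
fs = 2 * W * 4 / 3                    # 33 1/3 % above the 6 kHz Nyquist rate -> 8000 Hz
L_min = math.ceil(1 / 0.005)          # delta/2 = x_max/L <= 0.5% of x_max -> L >= 200
L = 2 ** math.ceil(math.log2(L_min))  # next power of two for binary coding -> 256
R = int(math.log2(L))                 # bits per sample -> 8
C = R * fs                            # total bit rate -> 64,000 bit/s
BW = C / 2                            # 2 bit/s per Hz -> 32 kHz minimum bandwidth
BW_tdm = 24 * C / 2                   # 24-channel TDM -> 768 kHz
print(fs, L, R, C, BW, BW_tdm)
```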
Problems

• Show that the SQR is worse for low amplitude signals than for high amplitude signals. How does this
justify the use of companding?

• Draw the block diagram of a PCM coder (and decoder), explaining the function of each component.

• Explain, with the aid of diagrams, the process of quantization

• A signal is to quantized with a fixed spacing between L quantization levels. The maximum signal which
can be handled has amplitude Vmax. Determine the mean quantization error in terms of (a) L (b) the
number of bits, R, needed to encode L levels.

• A sine wave with 1.5 volt maximum amplitude is to be digitised with a minimum signal to quantisation
noise ratio (SQR) of 35 dB. How many bits are needed to encode each sample?

• An audio signal with spectral components limited to the frequency band 300 to 3300 Hz, is sampled at
8000 samples per second, uniformly quantized and binary coded. If the required output SQR is 30 dB,
calculate: a) the minimum number of uniform quantisation levels needed, b) the number of bits per sample
needed c) the minimum bit rate sent to line.

Example
A PCM system is to have an SNR of 40 dB. The signals are speech, and an rms-to-peak ratio of −10 dB is allowed for. Find the number of bits per code word required.
Solution

SNR = 3k²2^(2R)
10 log(SNR) = 10 log 3 + 20 log k + 20R log 2
SNRdB = 4.77 − 10 + 6R = 40
R ≈ 7.5, rounded up to 8
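As a sketch, the same formula can be evaluated directly, and the resulting SNR for the rounded-up word length verified:

```python
import math

# Code-word length from SNR_dB = 10 log 3 + 20 log k + 20 R log 2.
k = 10 ** (-10 / 20)                    # rms-to-peak ratio of -10 dB
target_db = 40.0

R = math.ceil((target_db - 10 * math.log10(3) - 20 * math.log10(k))
              / (20 * math.log10(2)))
snr_db = 10 * math.log10(3 * k ** 2 * 2 ** (2 * R))
print(R, round(snr_db, 2))              # R = 8 comfortably exceeds 40 dB
```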
Differential Encoding
In analog messages we can make a good guess about a sample value from a knowledge of the past sample values. In other words, the sample values are not independent, and generally there is a great deal of redundancy in the Nyquist samples. Proper exploitation of this redundancy allows us to encode a signal with fewer bits. Consider a simple scheme where, instead of transmitting the sample values, we transmit the difference between successive sample values. Thus, if x[k] is the kth sample, instead of transmitting x[k] we transmit the difference d[k] = x[k] − x[k − 1]. At the receiver, knowing d[k] and the previous sample value x[k − 1], we can reconstruct x[k]. Thus, from the knowledge of the differences d[k], we can reconstruct x[k] iteratively at the receiver. Now, the difference between successive samples is generally much smaller than the sample values themselves, so the peak amplitude xp (or xmax) of the transmitted signal is reduced considerably. Because the quantization interval is ∆ = xp/L, for a given L (or R) this reduces the quantization interval ∆, and hence the quantization noise, which is given by ∆²/12. This means that for a given R (or transmission bandwidth) we can increase the SNR, or for a given SNR we can reduce R (or the transmission bandwidth).
We can improve upon this scheme by estimating (predicting) the value of the kth sample x[k] from a knowledge of the previous sample values. If this estimate is x̂[k], then we transmit the difference (prediction error) d[k] = x[k] − x̂[k]. At the receiver, too, we determine the estimate x̂[k] from the previous sample values and then generate x[k] by adding the received d[k] to the estimate x̂[k]. Thus, we reconstruct the samples at the receiver iteratively. If our prediction is worth its salt, the predicted (estimated) value x̂[k] will be close to x[k], and their difference (the prediction error) d[k] will be even smaller than the difference between successive samples. Consequently, this scheme, known as differential PCM (DPCM), is superior to that described in the previous paragraph, which is a special case of DPCM in which the estimate of a sample value is taken as the previous sample value, that is, x̂[k] = x[k − 1].
Consider a signal x(t) that has derivatives of all orders at t. Using the Taylor series for this signal, we can express x(t + Ts) as

x(t + Ts) = x(t) + Ts ẋ(t) + (Ts²/2) ẍ(t) + · · ·

x(t + Ts) ≈ x(t) + Ts ẋ(t)

The first equation shows that from a knowledge of the signal and its derivatives at instant t, we can predict a future signal value at t + Ts. In fact, even if we know just the first derivative, we can still predict this value approximately, as shown in the second equation. Let us denote the kth sample of x(t) by x[k], that is, x(kTs) = x[k], x(kTs ± Ts) = x[k ± 1], and so on. Setting t = kTs and approximating the derivative by the backward difference (x[k] − x[k − 1])/Ts,

x[k + 1] ≈ x[k] + Ts · (x[k] − x[k − 1])/Ts = 2x[k] − x[k − 1]
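A quick numerical illustration (a sketch, not from the text) of this two-sample predictor: on an oversampled sinusoid, the prediction error of 2x[k] − x[k − 1] is much smaller than the raw first difference, which is exactly the redundancy DPCM exploits.

```python
import math

# Two-sample linear predictor x[k+1] ~= 2x[k] - x[k-1] on a slowly
# varying (oversampled) tone; illustrative parameters only.
fs, f = 8000, 100                 # sample rate and tone frequency, Hz
x = [math.sin(2 * math.pi * f * k / fs) for k in range(200)]

pred_err = [abs(x[k + 1] - (2 * x[k] - x[k - 1]))
            for k in range(1, len(x) - 1)]
first_diff = [abs(x[k + 1] - x[k]) for k in range(1, len(x) - 1)]

# The second-order prediction error is far below the first difference.
print(max(pred_err) < max(first_diff))   # True
```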
This shows that we can find a crude prediction of the (k + 1)th sample from the two previous samples. To determine the higher-order derivatives in the series, we require more past samples; the larger the number of past samples we use, the better the prediction. Thus, in general, we can express the prediction formula as

x[k] ≈ a1 x[k − 1] + a2 x[k − 2] + · · · + aN x[k − N]

The right-hand side is x̂[k], the predicted value of x[k]:

x̂[k] = a1 x[k − 1] + a2 x[k − 2] + · · · + aN x[k − N]
In DPCM we transmit not the present sample x[k], but d[k], the difference between x[k] and its predicted value x̂[k]. At the receiver, we generate x̂[k] from the past sample values, to which the received d[k] is added to generate x[k]. There is, however, one difficulty in this scheme. At the receiver, instead of the past samples x[k − 1], x[k − 2], ..., as well as d[k], we have their quantized versions xq[k − 1], xq[k − 2], .... Hence, we cannot determine x̂[k]; we can only determine x̂q[k], the estimate of the quantized sample xq[k], in terms of the quantized samples xq[k − 1], xq[k − 2], .... This will increase the error in the reconstruction. In such a case, a better strategy is to determine x̂q[k], the estimate of x[k], at the transmitter also from the quantized samples xq[k − 1], xq[k − 2], .... The difference d[k] = x[k] − x̂q[k] is now transmitted using PCM. At the receiver, we can generate x̂q[k], and from the received d[k] we can reconstruct xq[k]. The difference

d[k] = x[k] − x̂q[k]

is quantized to yield

dq[k] = d[k] + q[k]

where q[k] is the quantization error. The predictor output x̂q[k] is fed back to its input so that the predictor input xq[k] is

xq[k] = x̂q[k] + dq[k]
      = x[k] − d[k] + dq[k]
      = x[k] + q[k]

This shows that xq[k] is a quantized version of x[k]; the predictor input is indeed xq[k], as assumed. The quantized signal dq[k] is now transmitted over the channel. At the receiver, the predictor output must be x̂q[k] (the same as the predictor output at the transmitter). Hence, the receiver output (which is the predictor input) is also the same, xq[k] = x[k] + q[k], as found in the last equation. This shows that we are able to receive the desired signal x[k] plus the quantization noise q[k]. This is the quantization noise associated with the difference signal d[k], which is generally much smaller than x[k]. The received samples are decoded and passed through a low-pass filter for D/A conversion.
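The DPCM loop described above can be sketched in a few lines. This is a minimal illustration, not a production codec: the predictor is the simplest special case x̂q[k] = xq[k − 1], the quantiser step and test signal are arbitrary, and both ends start from zero. Because the quantiser sits inside the prediction loop, the reconstruction error equals the quantization error of d[k], bounded by ∆/2.

```python
import math

# Minimal DPCM sketch: predictor = previous reconstructed sample,
# uniform mid-tread quantiser inside the feedback loop.
def quantize(d, step):
    return step * round(d / step)          # dq[k] = d[k] + q[k], |q| <= step/2

def dpcm_encode(x, step):
    xq_prev, dq = 0.0, []                  # both ends start from xq = 0
    for sample in x:
        d = sample - xq_prev               # d[k] = x[k] - x̂q[k]
        dqk = quantize(d, step)
        xq_prev = xq_prev + dqk            # local reconstruction xq[k]
        dq.append(dqk)
    return dq

def dpcm_decode(dq):
    xq_prev, out = 0.0, []
    for dqk in dq:
        xq_prev = xq_prev + dqk            # same recursion as the encoder
        out.append(xq_prev)
    return out

fs, f, step = 8000, 100, 0.01              # illustrative parameters
x = [math.sin(2 * math.pi * f * k / fs) for k in range(400)]
xq = dpcm_decode(dpcm_encode(x, step))

# xq[k] = x[k] + q[k]: the error never exceeds half a quantisation step.
print(max(abs(a - b) for a, b in zip(x, xq)) <= step / 2)   # True
```

Note the design point the text makes: the decoder never sees x[k], only dq[k], yet it reproduces exactly the encoder's local reconstruction because both run the identical recursion on quantized values.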