PULSE-CODE MODULATION
Al-Azhar University
Chapter 1
Contents
• Quantization Noise
• Decoding Noise
q = M^ν ,  ν = logM q   or   L = M^R ,  R = logM L
Thus, the number of quantum levels for binary PCM equals some power of 2, namely
L = 2^R ,  R = log2 L
1.1 PULSE-CODE MODULATION
Finally, successive codewords are read out serially to constitute the PCM waveform, an M-ary digital signal.
The PCM generator thereby acts as an ADC, performing analog-to-digital conversions at the sampling rate
fs = 1/Ts. A timing circuit coordinates the sampling and parallel-to-serial readout.
Each encoded sample is represented by an R-digit output word, so the signaling rate becomes r = R fs, with
fs ≥ 2W. Therefore, the bandwidth needed for PCM baseband transmission is

BT ≥ (1/2) r = (1/2) R fs ≥ RW

Fine-grain quantization for accurate reconstruction of the message waveform requires q ≫ 1, which increases
the transmission bandwidth by the factor R = logM L times the message bandwidth W.
Now consider a PCM receiver with the reconstruction system in Fig. 1.2a. The received signal may be
contaminated by noise, but regeneration yields a clean and nearly errorless waveform if (S/N )R is sufficiently
large. The DAC operations of serial-to-parallel conversion, M-ary decoding, and sample-and-hold generate the
analog waveform xq(t) drawn in Fig. 1.2b. This waveform is a "staircase" approximation of x(t), similar
to flat-top sampling except that the sample values have been quantized. Lowpass filtering then produces the
smoothed output signal yD(t), which differs from the message x(t) to the extent that the quantized samples differ from the exact
sample values x(kTs ). Perfect message reconstruction is therefore impossible in PCM, even when random noise
has no effect. The ADC operation at the transmitter introduces permanent errors that appear at the receiver
as quantization noise in the reconstructed signal. We’ll study this quantization noise after an example of PCM
hardware implementation.
Ik : {xk−1 < x ≤ xk } , k = 2, · · ·, N − 1
The code k is transmitted and the receiver transforms it into an amplitude yk . The amplitudes xk are called
the decision levels and the amplitudes yk are called the reconstruction levels. The quantization error introduced
by the process is q = x - y. If x represents samples from a random process, then we could consider the errors
introduced by the quantizer as a form of additive quantization noise. It is important to recognize the fact that
the noise would not be any different if the sampling was performed after quantization.
The length of the decision interval Ik (i.e., xk − xk−1) is denoted by ∆k, the step size. Two special decision
levels x0 and xN, known as the overload levels, are defined such that x1 − x0 = x2 − x1 and xN − xN−1 =
xN−1 − xN−2. Note that y1 is the reconstruction level whenever x < x1 and yN is the reconstruction level
for x > xN−1. If the signal is limited to the range x0 to xN, then we have a no-overload situation and the
resulting noise is known as granular noise. The term overload distortion denotes the quantization
noise caused by the input signal falling outside this range.
The first example of PCM encoding. In this example the signal is sampled at 11 time points and quantized
using 8 quantization segments. All values that fall into a given segment are approximated by the corresponding
quantization level, which lies in the middle of the segment. The levels are encoded using this table: The first
figure (Fig. 1.4) shows the process of quantizing and digitizing the signal. The samples shown are already quantized;
each is approximated by the nearest quantization level. To the right of each sample is the number of its
quantization level. This number is converted into a 3-bit code word using the above table. The second figure
(Fig. 1.5) shows the process of signal restoration. The restored signal is formed from the taken samples.
It can be noticed that the restored signal diverges from the input signal. This divergence is a consequence of
quantization noise, which always has the same intensity, independent of the signal intensity. If the signal intensity
drops, the quantization noise becomes more noticeable (the signal-to-noise ratio drops). The PCM encoded signal
in binary form is: 101 111 110 001 010 100 111 100 011 010 101
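The quantize-and-encode procedure just described can be sketched in a few lines. The sample values and the signal range below are hypothetical stand-ins, since the original figure and code table are not reproduced here; only the mechanics (8 mid-segment levels, 3-bit code words) follow the text.

```python
import numpy as np

def pcm_encode(x, x_min, x_max, n_bits):
    """Uniform PCM: map each sample to its segment's mid-level and a binary code word."""
    L = 2 ** n_bits                      # number of quantization segments
    delta = (x_max - x_min) / L          # width of one segment
    # Segment index of each sample, clipped so peaks stay in the top segment
    k = np.clip(((x - x_min) / delta).astype(int), 0, L - 1)
    levels = x_min + (k + 0.5) * delta   # quantization level = middle of the segment
    codewords = [format(i, f"0{n_bits}b") for i in k]
    return levels, codewords

# Hypothetical samples in [-1, 1), 8 segments -> 3-bit code words
levels, codes = pcm_encode(np.array([0.9, -0.3, 0.1]), -1.0, 1.0, 3)
```

Concatenating the code words gives the serial PCM bit stream, exactly as in the worked figure.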
Basic compression methods
1. Reducing the number of quantization levels
The number of quantization segments can be reduced by joining two neighboring segments into one. This
means that we will finally have 4 quantization segments instead of the 8 segments of the previous case.
Four quantization segments can be coded using 2-bit code words, as shown in the table below.
Table 1.2: Quantization levels with belonging code words (after compression)
The figure (Fig. 1.6) shows the reconstructed signal after compression. It still has the same basic contours,
but the distortions are greater due to the coarser approximation: the quantization noise has increased. This is
because the quantization step is now twice the size of that of the uncompressed PCM. The compressed
signal is PCM encoded as follows: 10 11 11 00 01 10 11 10 01 01 10. After compression we have 22 bits, which
means a 33% reduction in size, a compression ratio of 1.5:1.
Practical use of this method. In practice, a PCM encoded audio signal is compressed at higher rates;
for example, from 16 to 8 bits per sample (a ratio of 2:1). This compression standard (called the A-law) uses
nonlinear quantization. The quantization levels are not evenly distributed across the quantization range; they
are denser near the zero level and sparser close to the maximum level. This way the quantization noise is
reduced for lower-intensity signals.
2. Reducing the number of samples
Another basic method of compression is to reduce the number of samples. This can be done by replacing
each pair of adjacent samples with a single sample equal to their average. The number of samples, and hence
the sampling frequency, is thereby halved. Because the bandwidth of the restored signal is directly proportional
to the sampling frequency (B = 0.5 fs), the net result is that the bandwidth is halved as well. In our example this resulted
in the loss of the highest frequency component of the signal. In practice we have to decide which frequencies
are relevant to our application and which can be left out to achieve a reasonable size of the recording.
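A minimal sketch of this sample-halving step, with illustrative values (pure Python, no claim about the example signal in the figures):

```python
def halve_samples(samples):
    """Replace each pair of adjacent samples with one sample equal to their
    average, halving the sample count and hence the sampling frequency."""
    n = len(samples) - (len(samples) % 2)   # drop an unpaired trailing sample
    return [(samples[i] + samples[i + 1]) / 2 for i in range(0, n, 2)]

reduced = halve_samples([2, 4, 6, 2, 1, 3])   # -> [3.0, 4.0, 2.0]
```

Halving the sample list is exactly halving fs, so the recoverable bandwidth B = 0.5 fs drops by the same factor.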
The first step was to take values of the signal at discrete time intervals; now it is time for amplitude quantization.
In the first example the code word was the binary form of the quantization level (level 3, code word 011; see the
table for the first example). Encoding in the second example is different, as explained below.
∆ = 2mmax / L

where mmax is the maximum input voltage, L the number of quantization levels, and ∆ the quantization step, with

L = 2^R

where R is the number of encoding bits. For mmax = 2.2 V and R = 4 we get L = 16 and ∆ = 2(2.2)/16 = 0.275 V.
Segments are numbered so that segment 0 represents the lowest input voltage range. Since the analog input
signal takes both positive and negative values, 1 bit of the code word is used to code the sign of the value and
the other bits (3 in this example) represent the binary form of a discrete amplitude value which is determined
by the quantization step. The most significant bit (MSB) represents the sign of the input value, and the rest
of the code word is the binary encoded number resulting from amplitude quantization.
FORMAT OF CODE WORD: Now, let's see an example of coding two analog values. Ex. 1: x = 2.10 V, ∆ = 0.275 V

N = ⌊|x|/∆⌋ = ⌊2.1/0.275⌋ = ⌊7.6363⌋ = 7 (decimal) = 111 (binary)
x is the analog value, N the result of amplitude quantization, and ⌊·⌋ denotes truncation of the decimal places (the floor function).
Code word: x is a positive voltage, so the MSB is 1; the three remaining bits are the binary coded result of
amplitude quantization (N = 111). So the code word in this case is 1111.
For x = −1.64 V, ∆ = 0.275 and N = 5 (decimal) = 101 (binary).
Code word: x is a negative voltage, so the MSB is now 0; the three remaining bits are the binary coded result
of amplitude quantization (N = 101). So the code word in this case is 0101.
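The sign-magnitude encoding of these two worked examples can be sketched as follows; the clamp at the top level is an added safeguard for overload inputs, not part of the worked example.

```python
import math

def codeword(x, delta, n_mag_bits=3):
    """Sign-magnitude PCM code word: MSB = 1 for a positive input, 0 for a
    negative one; the remaining bits hold N = floor(|x| / delta) in binary."""
    n = min(math.floor(abs(x) / delta), 2 ** n_mag_bits - 1)  # clamp at top level
    sign = "1" if x >= 0 else "0"
    return sign + format(n, f"0{n_mag_bits}b")

w1 = codeword(2.10, 0.275)    # Ex. 1 -> "1111"
w2 = codeword(-1.64, 0.275)   # Ex. 2 -> "0101"
```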
As presented above, the quantization level is the value which lies in the middle of the segment, so every
quantization level in this example has a corresponding voltage representative. The following table shows the
quantization levels with their voltage representatives and code words. After a code word is calculated it is
thus possible to find in which quantization level the observed voltage lies. Notice that, unlike in the first example,
the code word in general is not a binary representation of the quantization level: positive quantization levels
are represented directly in binary, while negative levels are represented differently, in the way that corresponds
to the encoding process. With the quantization levels and their voltage representatives established, we can see
how the example analog input is quantized and digitized.
Finally, PCM encoded input signal in binary form looks like this: 1111 1110 0000 0100 0000 1001 1110 1100
1011 0001 0100 We used 44 bits to encode this signal.
If we want to restore the original signal we have to follow the digital PCM recording and use the quantization-level
representatives to form an analog output. Restoring the signal is shown in the chart below. As can be seen,
there is a slight divergence of the restored signal from the input signal. This divergence is the result of
quantization, because the process of quantization is accompanied by quantization noise. Now, let's calculate
SNRdB = 10 log10 [ (mmax²/2) / (mmax²/(3L²)) ] = 10 log10 (3L²/2) = 1.76 + 6.02R
q = m − v

or, in terms of random variables,

Q = M − V
Consider then an input m of continuous amplitude in the range (−mmax, mmax). We find that the step size of
the quantizer is given by

∆ = 2mmax / L

where L is the total number of representation levels.
We assume that the quantization error Q is a uniformly distributed random variable, and that the interfering
effect of the quantization noise on the quantizer input is similar to that of thermal noise. We may thus express
the probability density function of the quantization error Q as follows:

fQ(q) = 1/∆ for −∆/2 < q ≤ ∆/2, and 0 otherwise.   (1.1)
Figure 1.9: Code-word format: the sign bit (1 = +, 0 = −) followed by 3 bits of the binary coded result of amplitude quantization.
We must ensure that the incoming signal does not overload the quantizer. Then, with the mean of the
quantization error being zero, its variance σQ² is the same as the mean-square value:

σQ² = E[Q²] = ∫_{−∆/2}^{∆/2} q² fQ(q) dq

σQ² = (1/∆) ∫_{−∆/2}^{∆/2} q² dq = ∆²/12
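The ∆²/12 result is easy to confirm numerically. The sketch below estimates Var(Q) for a uniformly distributed error by simulation; the trial count and seed are arbitrary choices.

```python
import random

def quantization_noise_variance(delta, trials=200_000, seed=1):
    """Monte Carlo estimate of Var(Q) for Q uniform on (-delta/2, delta/2]."""
    rng = random.Random(seed)
    errors = [rng.uniform(-delta / 2, delta / 2) for _ in range(trials)]
    mean = sum(errors) / trials
    return sum((e - mean) ** 2 for e in errors) / trials

delta = 0.5
estimate = quantization_noise_variance(delta)   # close to delta**2 / 12
```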
Typically, the L-ary number k, denoting the kth representation level of the quantizer, is transmitted to the
receiver in binary form. Let R denote the number of bits per sample used in the construction of the binary
code. We may then write
L = 2^R

R = log2 L
∆ = 2mmax / 2^R
σQ² = (1/3) mmax² 2^(−2R)
Let P denote the average power of the message signal x(t). We may then express the output signal-to-noise
ratio of a uniform quantizer as

SNR = P/σQ² = (3P/mmax²) 2^(2R)

or

SNR = P/σQ² = (3P/mmax²) L²

or

SNR = 3L² (mrms²/mmax²)   (1.2)
where mmax is the peak amplitude value. The above equation shows that the output signal-to-noise ratio of the
quantizer increases exponentially with increasing number of bits per sample, R. Recognizing that an increase
in R requires a proportionate increase in the channel (transmission) bandwidth B, we thus see that the use of a
binary code for the representation of a message signal (as in pulse code modulation) provides a more efficient
method than frequency modulation (FM) for the trade-off of increased channel bandwidth for improved
noise performance. In making this statement, we presume that the FM system is limited by receiver noise,
whereas the binary-coded modulation system is limited by quantization noise.
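These SNR relations are easy to evaluate. The sketch below checks that for a full-scale sinusoid (P = mmax²/2) the formula reproduces the familiar 1.76 + 6.02R dB rule of thumb:

```python
import math

def pcm_snr_db(P, m_max, R):
    """Output SNR of a uniform quantizer, SNR = 3 * P * 2**(2R) / m_max**2, in dB."""
    return 10 * math.log10(3 * P * 2 ** (2 * R) / m_max ** 2)

# Full-scale sinusoid: P = m_max**2 / 2 gives SNR_dB = 1.76 + 6.02 R
snr_8bit = pcm_snr_db(P=0.5, m_max=1.0, R=8)    # about 49.9 dB
```

Each extra bit per sample adds about 6 dB of SNR, at the cost of a proportionate increase in bandwidth.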
Figure 1.10: Quantization levels with belonging voltage representatives and code words
1.2.1 Sampling
The incoming message signal is sampled with a train of narrow rectangular pulses so as to closely approximate
the instantaneous sampling process. To ensure perfect reconstruction of the message signal at the receiver, the
sampling rate must be greater than twice the highest frequency component W of the message signal, fs ≥ 2W
(the Nyquist rate), in accordance with the sampling theorem. In practice, a low-pass anti-aliasing filter is used
at the front end of the sampler to exclude frequencies greater than W before sampling.
1.2.2 Quantization
Recall that the SNR is an indication of the quality of the received signal. Ideally we would like to have
a constant SNR (the same quality) for all values of the message signal power m²(t). Unfortunately, the SNR is
directly proportional to the signal power m²(t), which varies from talker to talker by as much as 40 dB (a power
ratio of 10⁴). This means the SNR in Eq. (1.2) can vary widely, depending on the talker. Even for the same
talker, the quality of the received signal will deteriorate markedly when the person speaks softly. Statistically,
it is found that smaller amplitudes predominate in speech and larger amplitudes are much less frequent. This
means the SNR will be low most of the time.
The root of this difficulty lies in the fact that the quantizing steps are of uniform value ∆ = 2mmax/L.
The quantization noise σQ² = ∆²/12 is directly proportional to the square of the step size. The problem can be
solved by using smaller steps for smaller amplitudes (nonuniform quantizing), as shown in Fig. 1.14. The same
result is obtained by first compressing signal samples and then using uniform quantization. The input-output
characteristics of a compressor are shown in Fig. 1.15. The horizontal axis is the normalized input signal (i.e.,
the input signal amplitude m divided by the signal peak value mmax). The vertical axis is the output signal
y. The compressor maps input signal increments ∆m into larger increments ∆y for small input signals, and
vice versa for large input signals. Hence, a given interval ∆m contains a larger number of steps (or smaller
step size) when m is small, and the quantization noise is smaller for smaller input signal power. An approximately
logarithmic compression characteristic yields a quantization noise nearly proportional to the signal power m²(t),
thus making the SNR practically independent of the input signal power over a large dynamic range (see Fig.
6.13). This approach of equalizing the SNR is similar to the use of progressive income tax to equalize
incomes: the loud talkers and stronger signals are penalized with larger steps ∆ in order to compensate
the soft talkers and weaker signals. Among several choices, two compression laws have been accepted as
standards by the CCITT: the µ-law used in North America and Japan, and the A-law used in Europe,
the rest of the world, and international routes. Both the µ-law and the A-law curves have odd symmetry
about the vertical axis. The µ-law (for positive amplitudes) is given by
y = ln(1 + µ|x|) / ln(1 + µ)   (1.3)
where vi represents the input voltage, vi(max) its maximum value, and x = vi / vi(max) is the normalized input voltage.
y = A|x| / (1 + ln A)   for |x| ≤ 1/A

y = (1 + ln(A|x|)) / (1 + ln A)   for 1/A < |x| ≤ 1   (1.4)

The value A = 87.6 is widely used.
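Both compression laws are straightforward to implement for normalized inputs in [−1, 1]. The sketch below uses A = 87.6 from the text; µ = 255 is the usual North American value but is an assumption here, since the text does not state it.

```python
import math

def mu_law(x, mu=255.0):
    """mu-law compressor, Eq. (1.3), extended to negative x by odd symmetry."""
    return math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)

def a_law(x, A=87.6):
    """A-law compressor, Eq. (1.4), extended to negative x by odd symmetry."""
    ax = abs(x)
    if ax < 1.0 / A:
        y = A * ax / (1.0 + math.log(A))          # linear segment near zero
    else:
        y = (1.0 + math.log(A * ax)) / (1.0 + math.log(A))
    return math.copysign(y, x)

y = mu_law(0.01)   # a 1% full-scale input maps to roughly 23% output
```

The large output produced by a small input is exactly the expansion of small amplitudes that equalizes the SNR across soft and loud talkers.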
1.2.3 Encoding
In combining the processes of sampling and quantization, the specification of a continuous message (baseband)
signal becomes limited to a discrete set of values, but not in the form best suited to transmission over a
telephone line or radio path. To exploit the advantages of sampling and quantizing for the purpose of making
the transmitted signal more robust to noise, interference and other channel impairments, we require the use of
an encoding process to translate the discrete set of sample values to a more appropriate form of signal. Any
plan for representing each value of this discrete set as a particular arrangement of discrete events is called
a code. One of the discrete events in a code is called a code element or symbol. For example, the presence
or absence of a pulse is a symbol. A particular arrangement of symbols used in a code to represent a single
value of the discrete set is called a code word or character. In a binary code, each symbol may be either of
two distinct values or kinds, such as the presence or absence of a pulse. The two symbols of a binary code are
customarily denoted as 0 and 1. In a ternary code, each symbol may be one of three distinct values or kinds,
and so on for other codes. However, the maximum advantage over the effects of noise in a transmission medium
is obtained by using a binary code, because a binary symbol withstands a relatively high level of noise and is
easy to regenerate. Suppose that, in a binary code, each code word consists of R bits: bit is an acronym for
binary digit; thus R denotes the number of bits per sample. Then, using such a code, we may represent a total
of 2R distinct numbers. For example, a sample quantized into one of 256 levels may be represented by an 8-bit
code word. There are several ways of establishing a one-to-one correspondence between representation levels
and code words. A convenient method is to express the ordinal number of the representation level as a binary
number. In the binary number system, each digit has a place-value that is a power of 2.
Line Codes
Any of several line codes can be used for the electrical representation of a binary data stream. Figure 1.16
displays the waveforms of five important line codes for the example data stream 01101001. The five line codes
illustrated in Figure 1.16 are described here:
• Unipolar NRZ signaling: symbol 1 is represented by transmitting a pulse of amplitude A for the duration of the
symbol, and symbol 0 is represented by switching off the pulse, as in Figure 1.16a. This line code is
also referred to as on-off signaling. Disadvantages of on-off signaling are the waste of power due to the
transmitted DC level and the fact that the power spectrum of the transmitted signal does not approach
zero at zero frequency.
Figure 1.16: Line codes for the electrical representations of binary data. (a) Unipolar NRZ signaling. (b) Polar
NRZ signaling. (c) Unipolar RZ signaling. (d) Bipolar RZ signaling. (e) Split-phase or Manchester code.
• Bipolar return-to-zero (BRZ) signaling: this line code uses three amplitude levels as indicated in Figure 1.16d.
Specifically, positive and negative pulses of equal amplitude (i.e., +A and −A) are used alternately for
symbol 1, with each pulse having a half-symbol width; no pulse is used for symbol 0. A useful property of
BRZ signaling is that the power spectrum of the transmitted signal has no DC component and relatively
insignificant low-frequency components when symbols 1 and 0 occur with equal probability. This line code
is also called alternate mark inversion (AMI) signaling.
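Two of the line codes above can be sketched as simple amplitude sequences. Two samples per bit are used so the half-width RZ pulse can be represented; the bit pattern is an arbitrary illustration.

```python
def unipolar_nrz(bits, A=1.0):
    """On-off signaling: symbol 1 -> amplitude A for the whole bit, 0 -> no pulse."""
    return [A if b else 0.0 for b in bits]

def bipolar_rz(bits, A=1.0):
    """AMI: 1s alternate between +A and -A half-width pulses; 0 -> no pulse."""
    out, polarity = [], A
    for b in bits:
        if b:
            out += [polarity, 0.0]    # pulse for the first half-bit only
            polarity = -polarity      # alternate mark inversion
        else:
            out += [0.0, 0.0]
    return out

wave = bipolar_rz([0, 1, 1, 0, 1])   # -> [0, 0, 1, 0, -1, 0, 0, 0, 1, 0] (A = 1)
```

The alternating pulse polarity is what removes the DC component from the AMI spectrum.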
L = 2^R or R = log2 L
Each quantized sample is thus encoded into R bits. Because a signal x(t) band-limited to B Hz requires
a minimum of 2B samples per second, we require a total of 2RB bits per second (bps), that is, 2RB pieces of
information per second. Because a unit bandwidth (1 Hz) can transmit a maximum of two pieces of information
per second, we require a minimum channel bandwidth BW Hz, given by

BW = RB Hz

This is the theoretical minimum transmission bandwidth required to transmit the PCM signal. We shall see
that for practical reasons we may use a transmission bandwidth higher than this.
Notes
1. For a given number of quantisation levels, L, the number of binary digits required for each PCM code word
is R = log2 L. The PCM peak signal to quantisation noise ratio, (SNR)peak, is therefore:

SNR = 3L² = 3 (2^R)²

If the ratio of peak to mean signal power (see Eq. 1.2) is denoted by α, then the average signal to quantisation
noise ratio is:

SNR = 3 (2^(2R)) (1/α)

or, in dB,

SNRdB = 4.8 + 6R − αdB

For a sinusoidal signal α = 2 (3 dB), and for speech α = 10 dB.
2. If distortion is to be avoided, the maximum peak signal must not be allowed to exceed half the total input
voltage range, i.e. xmax = L∆/2. In this case

SNR = 3k²L²

where k = xrms / xmax.
Example
A digital communications system is to carry a single voice signal using linearly quantised PCM. What PCM
bit rate will be required if an ideal anti-aliasing filter with a cut-off frequency of 3.4 kHz is used at the transmitter
and the signal to quantisation noise ratio is to be kept above 50 dB?

SNRdB = 4.8 + 6R − αdB

R = (50 + 10 − 4.8) / 6 = 9.2

10 bits/sample are therefore required. The sampling rate required is given by Nyquist's rule, fs ≥ 2fH:
fs = 2 × 3.4 kHz = 6.8 kHz. The PCM bit rate (or, more strictly, binary baud rate) is therefore
10 × 6.8 kHz = 68 kbit/s.
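The steps of this design calculation can be wrapped in a small helper; the function name and argument names are illustrative only.

```python
import math

def pcm_design(snr_db_required, alpha_db, f_highest_hz):
    """Bits per sample, sampling rate, and bit rate for linearly quantised PCM,
    from SNR_dB = 4.8 + 6R - alpha_dB and Nyquist-rate sampling fs = 2 fH."""
    R = math.ceil((snr_db_required + alpha_db - 4.8) / 6.0)  # whole bits per sample
    fs = 2 * f_highest_hz                                    # Nyquist rule
    return R, fs, R * fs

R, fs, bit_rate = pcm_design(50, 10, 3400)   # -> (10, 6800, 68000)
```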
Example
A signal x(t) band-limited to 3 kHz is sampled at a rate 33⅓% higher than the Nyquist rate. The maximum
acceptable error in the sample amplitude (the maximum quantization error) is 0.5% of the peak amplitude xmax.
The quantized samples are binary coded. Find the minimum bandwidth of a channel required to transmit the
encoded binary signal. If 24 such signals are time-division-multiplexed, determine the minimum transmission
bandwidth required to transmit the multiplexed signal.
The Nyquist sampling rate is 2 × 3000 = 6000 Hz (samples per second), but the actual sampling rate is
6000 × (4/3) = 8000 Hz. Let the quantization step be ∆, so the maximum quantization error is ∆/2.
Therefore

∆/2 = xmax/L ≤ 0.5 xmax/100  ⇒  L ≥ 200

For binary coding, L must be a power of 2. Hence, the next higher value of L that is a power of 2 is L =
256.
We need R = log2 256 = 8 bits per sample, and we require the transmission of a total of C = 8 × 8000 = 64,000 bits/s.
Because we can transmit up to 2 bits/s per hertz of bandwidth, we require a minimum transmission bandwidth
BW = C/2 = 32 kHz. The multiplexed signal has a total rate of CM = 24 × 64000 = 1.536 Mbit/s, which requires a
minimum of 1.536/2 = 0.768 MHz of transmission bandwidth.
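The arithmetic of this example is easy to verify in a few lines:

```python
import math

# Example above: x(t) band-limited to 3 kHz, sampled 33 1/3 % above Nyquist,
# maximum quantization error 0.5% of the peak amplitude x_max.
fs = 2 * 3000 * 4 // 3                 # actual sampling rate: 8000 Hz
L_min = math.ceil(100 / 0.5)           # delta/2 = x_max/L <= 0.5% x_max -> L >= 200
R = math.ceil(math.log2(L_min))        # next power of 2: L = 256, R = 8 bits
C = R * fs                             # 64,000 bit/s per signal
BW = C / 2                             # minimum channel bandwidth: 32 kHz
BW_mux = 24 * C / 2                    # 24 multiplexed signals: 768 kHz
```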
Problems
• Show that the SQR is worse for low-amplitude signals than for high-amplitude signals. How does this justify
the use of companding?
• Draw the block diagram of a PCM coder (and decoder), explaining the function of each component.
• A signal is to be quantized with a fixed spacing between L quantization levels. The maximum signal which
can be handled has amplitude Vmax. Determine the mean quantization error in terms of (a) L, (b) the
number of bits, R, needed to encode L levels.
• A sine wave with 1.5 volt maximum amplitude is to be digitised with a minimum signal to quantisation
noise ratio (SQR) of 35 dB. How many bits are needed to encode each sample?
• An audio signal with spectral components limited to the frequency band 300 to 3300 Hz is sampled at
8000 samples per second, uniformly quantized and binary coded. If the required output SQR is 30 dB,
calculate: (a) the minimum number of uniform quantisation levels needed, (b) the number of bits per sample
needed, (c) the minimum bit rate sent to line.
Example. A PCM system is to have an SNR of 40 dB. The signals are speech, and an rms-to-peak ratio of −10 dB
is allowed for. Find the number of bits per code word required.
Solution

SNR = 3k² 2^(2R)

10 log(SNR) = 10 log 3 + 20 log k + 20R log 2

SNRdB = 4.77 − 10 + 6R

R = 7.5 ≈ 8 (rounded up)
Differential Encoding. In analog messages we can make a good guess about a sample value from knowledge
of the past sample values. In other words, the sample values are not independent, and generally there is
a great deal of redundancy in the Nyquist samples. Proper exploitation of this redundancy leads to encoding a
signal with fewer bits. Consider a simple scheme where, instead of transmitting the sample values,
we transmit the difference between successive sample values. Thus, if x[k] is the kth sample, instead of transmitting
x[k], we transmit the difference d[k] = x[k] − x[k − 1]. At the receiver, knowing d[k] and the previous sample
value x[k − 1], we can reconstruct x[k]. Thus, from knowledge of the difference d[k], we can reconstruct x[k]
iteratively at the receiver. Now, the difference between successive samples is generally much smaller than the
sample values themselves. Thus, the peak amplitude xp (or xmax) of the transmitted signal is reduced considerably.
Because the quantization interval ∆ = xp/L, for a given L (or R) this reduces the quantization interval ∆, thus
reducing the quantization noise, which is given by ∆²/12. This means that for a given R (or transmission bandwidth)
we can increase the SNR, or for a given SNR, we can reduce R (or transmission bandwidth).
We can improve upon this scheme by estimating (predicting) the value of the kth sample x[k] from a
knowledge of the previous sample values. If this estimate is x̂[k], then we transmit the difference (prediction
error) d[k] = x[k] − x̂[k]. At the receiver also, we determine estimate x̂[k] from the previous sample values, and
then generate x[k] by adding the receiver d[k] to the estimate x̂[k].
Thus, we reconstruct the samples at the receiver iteratively. If our prediction is worth its salt, the predicted
(estimated) value x̂[k] will be close to x[k] and their difference (prediction error) d[k] will be even smaller than
the difference between the successive samples. Consequently, this scheme, known as the differential PCM
(DPCM), is superior to the scheme described in the previous paragraph, which is a special case of DPCM in
which the estimate of a sample value is taken as the previous sample value, that is, x̂[k] = x[k − 1].
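A minimal sketch of DPCM with the simplest predictor, x̂[k] = x[k − 1], follows. Quantization of d[k] is deliberately omitted here, so the reconstruction is exact; a real DPCM system quantizes the differences before transmission.

```python
def dpcm_encode(samples):
    """Transmit prediction errors d[k] = x[k] - x[k-1] (previous-sample predictor)."""
    prev, diffs = 0, []
    for x in samples:
        diffs.append(x - prev)
        prev = x
    return diffs

def dpcm_decode(diffs):
    """Rebuild the samples iteratively: x[k] = d[k] + x[k-1]."""
    prev, out = 0, []
    for d in diffs:
        prev = d + prev
        out.append(prev)
    return out

d = dpcm_encode([5, 7, 8, 6])   # differences: [5, 2, 1, -2]
x = dpcm_decode(d)              # recovers [5, 7, 8, 6]
```

Note that the differences are smaller in magnitude than the samples themselves, which is exactly the property the text exploits to reduce ∆ and hence the quantization noise.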
Consider a signal x(t) that has derivatives of all orders at t. Using the Taylor series for this signal, we can
express x(t + Ts) as

x(t + Ts) = x(t) + Ts ẋ(t) + (Ts²/2) ẍ(t) + · · ·