
Lecture 10.

Lossless Image Compression


ELEN E4830 Digital Image Processing, Spring 2010
Zhu Liu (zliu@research.att.com), AT&T Labs - Research
David Gibbon (dcg@research.att.com), AT&T Labs - Research

Note: Parts of the material in these slides are from Gonzalez's Digital Image Processing and Prof. Yao Wang's lecture slides.

Lecture Outline
- Introduction
- Binary encoding
  - Fixed length coding
  - Variable length coding (Huffman coding, arithmetic coding)
- Runlength coding of bi-level images
- Predictive coding



Necessity for Signal Compression


Storage requirements for various uncompressed data types:

Data                                                     Size
One page of text                                         2 KB
One 640x480, 24-bit color still image                    900 KB
Voice (8 kHz, 8-bit, mono)                               8 KB/second
Audio CD-DA (44.1 kHz, 16-bit, stereo)                   176 KB/second
Animation (320x640 pixels, 16-bit color, 16 frames/s)    6.25 MB/second
Video (720x480 pixels, 24-bit color, 30 frames/s)        29.7 MB/second

(1 KB = 1,024 bytes; 1 MB = 1,048,576 bytes = 1,024^2 bytes)

Goal of compression:
- Given a bit rate, achieve the best quality
- Given an allowed distortion, minimize the data amount

Image Coding Standards


Fax Group: encoding format for fax transmission
- Fax group 3 (G3): commonly used, ~1 page/min, up to 200 dpi
- Fax group 4 (G4): less frequently used, faster, up to 400 dpi

JBIG: Joint Bi-Level Image Experts Group (ISO & ITU-T)
- JBIG1: lossless binary image compression
- JBIG2: lossless and lossy, more efficient

JPEG: Joint Photographic Experts Group (ISO & ITU-T)
- For coding still images or video frames
- Lossless JPEG for medical and archiving applications

JPEG 2000: many advantages, but not widely used yet



A Typical Compression System


Input samples -> [Transformation] -> Transformed parameters -> [Quantization] -> Quantized parameters -> [Binary Encoding] -> Binary bitstream

- Transformation: prediction, transforms, model fitting, ...
- Quantization: scalar quantization, vector quantization
- Binary encoding: fixed length / variable length (Huffman, arithmetic, LZW)

Motivation for transformation: to yield a more efficient representation of the original samples.

Binary Encoding
Binary encoding
To represent a finite set of symbols using binary codewords.

Fixed length coding


N levels (symbols) are represented by ⌈log2 N⌉ bits.

Variable length coding


more frequently appearing symbols represented by shorter codewords (Huffman, arithmetic, LZW=zip).

The minimum number of bits required to represent a source is bounded by its entropy.

Entropy of a Source
Consider a source of N symbols, r_n, n = 1, 2, ..., N. Suppose the probability of symbol r_n is p_n. The self-information of symbol r_n is defined as

H_n = -log2 p_n (bits),   H_n = -ln p_n (nats),   H_n = -log10 p_n (dits).

The entropy of this source, which represents the average information, is defined as

H = - Σ_{n=1}^{N} p_n log2 p_n (bits),

with the convention 0 log2 0 = 0, since lim_{x -> 0+} x log x = 0.

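As a quick illustration of these definitions, here is a minimal Python sketch of the entropy computation (the function name and example probabilities are mine; the 36/49, 8/49, 4/49, 1/49 source reappears in the Huffman example later in this lecture):

```python
import math

def entropy(probs, base=2.0):
    """Average information of a source with the given probabilities.
    Base 2 gives bits; uses the convention 0*log(0) = 0."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

print(entropy([36/49, 8/49, 4/49, 1/49]))  # ~1.16 bits/symbol
print(entropy([0.5, 0.5]))                 # 1.0 bit: a fair coin toss
```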

Entropy of Tossing a Coin


Source X has two symbols:
- Head: Pr(X = head) = p, 0 ≤ p ≤ 1
- Tail: Pr(X = tail) = 1 - p

Entropy of the result of a toss:

H(X) = -p log2 p - (1 - p) log2(1 - p) (bits)

- H(X) = 0 when p = 0 or 1 (no uncertainty)
- H(X) = 1 when p = 1/2 (maximum uncertainty)
- Entropy is a measure of the uncertainty of the source

Shannon Source Coding Theory


For a source with p_n = 2^(-l_n), we can easily design a code such that the length of the codeword for r_n is l_n = -log2 p_n, and the average bit rate is

l̄ = Σ p_n l_n = - Σ p_n log2 p_n = H.

For an arbitrary source, a code can be designed so that l_n = ⌈-log2 p_n⌉, and the average length is

l̄ = Σ p_n l_n ≥ - Σ p_n log2 p_n = H.


Shannon Source Coding Theory


Since -log2 p_n ≤ l_n ≤ -log2 p_n + 1, we have

H = - Σ p_n log2 p_n  ≤  l̄ = Σ p_n l_n  ≤  Σ p_n (-log2 p_n + 1) = H + 1,

i.e.,

H ≤ l̄ ≤ H + 1.

This is Shannon's Source Coding Theorem, which states the lower and upper bounds for variable length coding. The theorem only gives the bounds, not an actual way of constructing a code that achieves them.

Huffman Coding
Procedure of Huffman coding (a small code sketch follows this slide):
- Step 1: Arrange the symbol probabilities p_n in decreasing order and consider them as leaf nodes of a tree.
- Step 2: While there is more than one node:
  - Find the two nodes with the smallest probabilities and arbitrarily assign 1 and 0 to these two nodes.
  - Merge the two nodes to form a new node whose probability is the sum of the two merged nodes.
- The codeword of each symbol is obtained by reading the assigned bits from the root down to the corresponding leaf.

Huffman coding is a prefix code:
- No valid codeword is a prefix (leading part) of any other valid codeword in the set.
- The recipient can decode the message unambiguously.
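A minimal sketch of this two-step procedure in Python, using a heap of partial codebooks (the function name and the tie-breaking counter are illustrative, not part of the lecture):

```python
import heapq
from itertools import count

def huffman_code(probs):
    """Build a Huffman codebook {symbol: codeword} from {symbol: probability}."""
    order = count()  # tie-breaker so the heap never compares the dict payloads
    # Each heap entry: (probability, tie-breaker, {symbol: partial codeword})
    heap = [(p, next(order), {s: ""}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, code0 = heapq.heappop(heap)  # smallest probability, gets bit '0'
        p1, _, code1 = heapq.heappop(heap)  # next smallest, gets bit '1'
        merged = {s: "0" + c for s, c in code0.items()}
        merged.update({s: "1" + c for s, c in code1.items()})
        heapq.heappush(heap, (p0 + p1, next(order), merged))
    return heap[0][2]

# The four-symbol source of the next slide; one valid answer (up to 0/1 swaps)
# is {'2': '1', '3': '01', '1': '001', '0': '000'}.
print(huffman_code({"2": 36/49, "3": 8/49, "1": 4/49, "0": 1/49}))
```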

Example of Huffman Coding (1)


The source has 4 symbols: 0, 1, 2, 3, with Pr(X=0) = 1/49, Pr(X=1) = 4/49, Pr(X=2) = 36/49, Pr(X=3) = 8/49.

Symbol   Probability   Codeword   Length
2        36/49         1          1
3        8/49          01         2
1        4/49          001        3
0        1/49          000        3

(Merging order: 1/49 + 4/49 = 5/49; 5/49 + 8/49 = 13/49; 13/49 + 36/49 = 49/49.)

l̄ = (36/49)·1 + (8/49)·2 + (4/49)·3 + (1/49)·3 = 67/49 ≈ 1.4 bits/symbol;   H = - Σ p_n log2 p_n ≈ 1.16 bits/symbol.



Example of Huffman Coding (2)


(Figure: Huffman tree construction for an eight-symbol source b(0)-b(7); at each stage the two least probable nodes are merged and labeled 0 and 1, until a single root of probability 1.00 remains.)

Resulting codewords: b(2): 00, b(1): 10, b(3): 010, b(4): 111, b(0): 1100, b(5): 1101, b(6): 0110, b(7): 0111.

Average bit rate = Σ_n p_n l_n = 2.71 bits;   Entropy = - Σ_n p_n log2 p_n = 2.68 bits.



Disadvantage of Huffman Coding


At least one bit has to be used for each symbol. Solution:
- Vector Huffman coding: to obtain higher compression, we can treat each group of m symbols as one entity and give each group a codeword.
- Joint entropy of a pair of variables (X, Y) with a joint distribution p(x, y):

  H(X, Y) = - Σ_x Σ_y p(x, y) log2 p(x, y)

Example of Vector Huffman Coding (1)


Consider an alphabetic source of three symbols: a, b, c.
p(a) = 2/3,   p(b) = 1/6,   p(c) = 1/6

Assume the symbols are independent, so p(xy) = p(x) p(y):

p(aa) = 4/9    p(ab) = 1/9     p(ac) = 1/9
p(ba) = 1/9    p(bb) = 1/36    p(bc) = 1/36
p(ca) = 1/9    p(cb) = 1/36    p(cc) = 1/36



Example of Vector Huffman Coding (2)


Design a Huffman code which uses a separate codeword for each symbol:

Symbol   Probability   Codeword
a        2/3           1
b        1/6           01
c        1/6           00

l̄ = (2/3)·1 + (1/6)·2 + (1/6)·2 = 4/3 ≈ 1.33 bits/symbol

Example of Vector Huffman Coding (3)


Design a Huffman code which uses a separate codeword for each group of two symbols:

Pair   Probability   Codeword
aa     4/9           1
ab     1/9           011
ac     1/9           0101
ba     1/9           0100
bb     1/36          00111
bc     1/36          00110
ca     1/9           000
cb     1/36          00001
cc     1/36          00000

l̄ = [ (4/9)·1 + (1/9)·3 + (1/9)·4 + (1/9)·4 + (1/9)·3 + 4·(1/36)·5 ] / 2 = (46/18)/2 ≈ 1.27 bits/symbol
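Assuming the hypothetical huffman_code sketch from earlier, this vector (pair) coding gain can be reproduced by building the code over two-symbol super-symbols:

```python
from itertools import product

p = {"a": 2/3, "b": 1/6, "c": 1/6}

# Treat every pair of independent symbols as a single super-symbol.
pair_probs = {x + y: p[x] * p[y] for x, y in product(p, repeat=2)}
pair_code = huffman_code(pair_probs)

bits_per_symbol = sum(pair_probs[s] * len(c) for s, c in pair_code.items()) / 2
print(bits_per_symbol)  # ~1.28 bits/symbol (the slide rounds to 1.27), vs 4/3 ~ 1.33 per-symbol
```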

Conditional Entropy
If (X,Y) ~ p(x,y), then the conditional entropy H(Y|X) is defined as

H(Y|X) = Σ_x p(x) H(Y | X = x)
       = - Σ_x p(x) Σ_y p(y|x) log2 p(y|x)
       = - Σ_x Σ_y p(x, y) log2 p(y|x)

1st order conditional Huffman coding:
- Assume there are N symbols {a_i}, i = 1, ..., N; we have to build N different Huffman coders C_i based on p(a_j | a_i).
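A small sketch of H(Y|X) computed directly from a joint distribution (function and variable names are mine), applied to the three-symbol source used in the example that follows:

```python
import math

def conditional_entropy(joint):
    """H(Y|X) = -sum_{x,y} p(x,y) log2 p(y|x); joint given as {(x, y): p(x, y)}."""
    px = {}
    for (x, _), p in joint.items():
        px[x] = px.get(x, 0.0) + p          # marginal p(x)
    return -sum(p * math.log2(p / px[x]) for (x, _), p in joint.items() if p > 0)

# p(a_j | a_i) = 1/2 if i == j else 1/4, with uniform marginals p(a_i) = 1/3,
# so p(a_i, a_j) = p(a_i) * p(a_j | a_i).
syms = ["a1", "a2", "a3"]
joint = {(i, j): (1 / 3) * (1 / 2 if i == j else 1 / 4) for i in syms for j in syms}
print(conditional_entropy(joint))  # 1.5 bits, as derived on the next slide
```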

Example of Conditional Huffman Coding


A source with three symbols A = {a1, a2, a3} has the following conditional probability distribution:

p(a_i | a_j) = 1/2 if i = j, 1/4 otherwise,

with equal marginal probabilities p(a_i) = 1/3.

For each possible previous symbol a_i, design a Huffman code for p(· | a_i). For example, given that the previous symbol is a1:

Symbol      Probability   Codeword
a1 | a1     1/2           1
a2 | a1     1/4           01
a3 | a1     1/4           00

and similarly for previous symbols a2 and a3 (the most likely next symbol gets codeword 1, the other two get 01 and 00). The encoder selects the coder according to the previous symbol.

H(X | a_i) = - Σ_j p(a_j | a_i) log2 p(a_j | a_i) = (1/2) log2 2 + 2·(1/4) log2 4 = 1.5 bits

H = Σ_i p(a_i) H(X | a_i) = 3 · (1/3) · 1.5 = 1.5 bits

l̄ = 1·(1/2) + 2·2·(1/4) = 1.5 bits/symbol

Background of Arithmetic Coding


Representation of a real number:
- In decimal notation (base 10), any real number x in the interval [0, 1) can be represented as .b1b2b3..., where 0 ≤ b_i ≤ 9.
- In binary notation (base 2), any real number x in the interval [0, 1) can be represented as .b1b2b3..., where b_i ∈ {0, 1}, e.g. .0110....


Arithmetic Coding
- Represent each string x of length n by a unique interval [L, R) in [0, 1).
- The width R - L of the interval [L, R) represents the probability of x occurring.
- The interval [L, R) can itself be represented by any number, called a tag, within the half-open interval.
- Find some k such that the k most significant bits of the tag are in the interval [L, R); that is, .t1t2t3...tk000... is in the interval [L, R). Then t1t2t3...tk is the code for x.

Example of Arithmetic Coding (1)


p(a) = 1/3, p(b) = 2/3, and the string x = bba.

Interval refinement: [0, 1) is split at 1/3 (a below, b above); b -> [1/3, 1), bb -> [5/9, 1), bba -> [15/27, 19/27), where 15/27 = .100011100... and 19/27 = .101101000...

tag = 17/27 = .101000010...;  code = 101

1. The tag must be in the half-open interval.
2. The tag can be chosen to be (L + R)/2.
3. The code is the significant bits of the tag.


Some Tags are Better than Others


String x = bab, with interval [11/27, 15/27), where 11/27 = .011010000... and 15/27 = .100011100...

Using tag = (L + R)/2: tag = 13/27 = .011110110..., code = 0111.
Alternative tag = 14/27 = .100001001..., code = 1.

Example of Codes
P(a) = 1/3, P(b) = 2/3.
Using tag = (L + R)/2 for every three-symbol string:

String   Interval [L, R)     L (binary)        Tag (binary)      Code
aaa      [0/27, 1/27)        .000000000...     .000001001...     0
aab      [1/27, 3/27)        .000010010...     .000100110...     0001
aba      [3/27, 5/27)        .000111000...     .001001100...     001
abb      [5/27, 9/27)        .001011110...     .010000101...     01
baa      [9/27, 11/27)       .010101010...     .010111110...     01011
bab      [11/27, 15/27)      .011010000...     .011110111...     0111
bba      [15/27, 19/27)      .100011100...     .101000010...     101
bbb      [19/27, 27/27)      .101101000...     .110110100...     11

Average: 0.95 bits/symbol; entropy lower bound: 0.92 bits/symbol.



Arithmetic Coding Algorithm


Given P(a1), P(a2), ..., P(am), define C(a_i) = P(a1) + P(a2) + ... + P(a_{i-1}).

Encode string x1 x2 ... xn:

  Initialize L := 0 and R := 1;
  for i = 1 to n do
    W := R - L;
    L := L + W * C(x_i);
    R := L + W * P(x_i);
  t := (L + R)/2; choose the code for the tag.

(At each iteration, the current interval [L, R) of width W is narrowed to the sub-interval of width W * P(x_i) assigned to symbol x_i.)
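A floating-point sketch of this loop in Python (a toy version: real coders use the scaling / integer arithmetic mentioned a few slides later, and the tag-to-bits loop here simply emits bits of the mid-interval tag until the truncated value falls in [L, R)):

```python
def arithmetic_encode(string, prob):
    """Return (L, R, code) for `string`, with prob = {symbol: probability}."""
    cum, acc = {}, 0.0
    for s in prob:                      # C(s) = sum of probabilities before s
        cum[s], acc = acc, acc + prob[s]
    L, R = 0.0, 1.0
    for x in string:                    # narrow the interval symbol by symbol
        W = R - L
        L = L + W * cum[x]
        R = L + W * prob[x]
    tag = (L + R) / 2                   # pick the mid-interval tag
    code, value, half = "", 0.0, 0.5
    while not (L <= value < R):         # emit tag bits until .t1..tk000... is in [L, R)
        if tag >= value + half:
            value += half
            code += "1"
        else:
            code += "0"
        half /= 2
    return L, R, code

# The worked example on the next slide: string "abca", P(a)=1/4, P(b)=1/2, P(c)=1/4
print(arithmetic_encode("abca", {"a": 1/4, "b": 1/2, "c": 1/4}))
# -> (0.15625, 0.1640625, '00101'), i.e. [5/32, 21/128) and code 00101
```

Note that when L = 0 this toy bit-emission loop may stop with an empty code; a production coder handles termination and string length signalling more carefully, as discussed in the decoding issues slide below.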

Arithmetic Coding Example


P(a) = 1/4, P(b) = 1/2, P(c) = 1/4;  C(a) = 0, C(b) = 1/4, C(c) = 3/4.  String: abca.

Symbol   W = R - L   C(x)   P(x)   L       R
(init)   -           -      -      0       1
a        1           0      1/4    0       1/4
b        1/4         1/4    1/2    1/16    3/16
c        1/8         3/4    1/4    5/32    6/32
a        1/32        0      1/4    5/32    21/128

tag = (5/32 + 21/128)/2 = 41/256 = .001010010...
L = 5/32 = .001010000...;  R = 21/128 = .001010100...;  code = 00101

Decoding
Two symbols a, b with p(a) = 1/3, p(b) = 2/3. Assume the string length is known to be 3. The received code 0001 converts to the tag .0001000...

The tag .0001000... lies in the interval of aab, [1/27, 3/27) = [.000010010..., .000111000...), so the decoding result is aab.

Arithmetic Decoding Algorithm


Given P(a1), P(a2), ..., P(am), define C(a_i) = P(a1) + P(a2) + ... + P(a_{i-1}).

Decode b1 b2 ... bc; the number of symbols is n:

  Initialize L := 0 and R := 1; t := .b1b2...bc000...
  for i = 1 to n do
    W := R - L;
    find j such that L + W * C(a_j) <= t < L + W * (C(a_j) + P(a_j));
    output a_j;
    L := L + W * C(a_j);
    R := L + W * P(a_j);

(At each iteration, the decoder picks the symbol a_j whose sub-interval of the current [L, R) contains the tag t, then narrows the interval exactly as the encoder did.)


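A matching decoder sketch under the same toy floating-point assumptions (names are mine); it replays the encoder's interval narrowing, picking at each step the symbol whose sub-interval contains the tag:

```python
def arithmetic_decode(code, prob, n):
    """Decode n symbols from the bit string `code`, mirroring the encoder."""
    cum, acc = {}, 0.0
    for s in prob:                      # C(s) = sum of probabilities before s
        cum[s], acc = acc, acc + prob[s]
    t = sum(int(b) / 2 ** (i + 1) for i, b in enumerate(code))  # tag .b1b2...bc000...
    L, R, out = 0.0, 1.0, []
    for _ in range(n):
        W = R - L
        for s in prob:                  # find a_j with L + W*C(a_j) <= t < next boundary
            lo = L + W * cum[s]
            hi = lo + W * prob[s]
            if lo <= t < hi:
                out.append(s)
                L, R = lo, hi
                break
    return "".join(out)

# The decoding example on the next slide: 00101 with 4 symbols gives back "abca"
print(arithmetic_decode("00101", {"a": 1/4, "b": 1/2, "c": 1/4}, 4))
```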

Decoding Example
P(a) = 1/4, P(b) = 1/2, P(c) = 1/4;  C(a) = 0, C(b) = 1/4, C(c) = 3/4.  Code: 00101; the number of symbols is 4.

tag = .00101000... = 5/32

W      L       R        Output
-      0       1        -
1      0       1/4      a
1/4    1/16    3/16     b
1/8    5/32    6/32     c
1/32   5/32    21/128   a

At each step: W := R - L; find j such that L + W * C(a_j) <= t < L + W * (C(a_j) + P(a_j)).


Issues of Arithmetic Coding


Decoding issues: there are two ways for the decoder to know when to stop decoding:
1. Transmit the length of the string
2. Transmit a unique end-of-string symbol

W becomes too small: by scaling we can keep L and R in a reasonable range of values so that W = R - L does not underflow. Integer arithmetic coding avoids floating point altogether.



Other Variable Length Coding Methods


LZW coding (Lempel, Ziv, and Welch)
- Assigns fixed-length code words to variable-length sequences of source symbols, but requires no a priori knowledge of the symbol probabilities.
- At the onset of the coding process, a codebook or dictionary containing the source symbols is constructed.
- As the encoder sequentially examines the image's pixels, gray-level sequences that are not in the dictionary are placed in algorithmically determined (e.g., the next unused) locations. The next time that the same sequence of pixels is encountered, its dictionary code word is used to represent it.
- Clearly, the size of the dictionary is an important system parameter. If it is too small, the detection of matching gray-level sequences will be less likely; if it is too large, the size of the code words will adversely affect compression performance.

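A compact sketch of the LZW encoding idea on a string of symbols (a 1-D stand-in for a row of pixels; the function name and the test string are mine):

```python
def lzw_encode(data):
    """LZW: emit dictionary indices for progressively longer matching sequences."""
    dictionary = {s: i for i, s in enumerate(sorted(set(data)))}  # single symbols first
    codes, current = [], data[0]
    for symbol in data[1:]:
        if current + symbol in dictionary:
            current += symbol                       # keep extending the match
        else:
            codes.append(dictionary[current])       # emit code for the longest match
            dictionary[current + symbol] = len(dictionary)  # next unused dictionary slot
            current = symbol
    codes.append(dictionary[current])
    return codes

# 'a' -> 0, 'b' -> 1; new entries 'aa', 'aab', 'bb', 'bba', 'aaa', 'ab', 'bbb' get 2..8
print(lzw_encode("aaabbbaaabbb"))  # [0, 2, 1, 4, 2, 0, 4, 1]: 8 codes for 12 symbols
```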

Significance of Compression for Facsimile

For a size A4 document (digitized to 1728x2376 pixels):
Without compression
Group 1 Fax: 6 min/page Group 2 Fax: 3 min/page

With compression
Group 3 Fax: 1 min/page

Fax becomes popular only after the transmission time is cut down to below 1 min/page.

Runlength Coding of Bi-Level Images


1D Runlength Coding (a small code sketch follows this slide):
- Count the lengths of the white and black runs alternatingly
- Represent the last runlength using EOL
- Code the white and black runlengths using different codebooks (Huffman coding)

2D Runlength Coding:
- Use relative addresses from the last transition in the line above
- Used in facsimile coding (G3, G4)

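A minimal sketch of 1-D run-length extraction for one row of a bi-level image (names and the example row are mine; Huffman coding of the run lengths would follow as a separate step):

```python
def runlengths(row, eol="EOL"):
    """Alternating white/black run lengths of one bi-level row (0 = white, 1 = black).
    Runs start with white; a leading black pixel yields a zero-length white run."""
    runs, current, length = [], 0, 0
    for pixel in row:
        if pixel == current:
            length += 1
        else:
            runs.append(length)
            current, length = pixel, 1
    runs.append(length)
    runs.append(eol)            # mark the end of the line
    return runs

print(runlengths([0, 0, 1, 1, 1, 0, 1, 0, 0, 0]))  # [2, 3, 1, 1, 3, 'EOL']
```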

Example of 1-D Runlength Coding

(Figure: a small bi-level image annotated with the alternating white and black run lengths of each line.)


Example of 2-D Runlength Coding


Relative address coding (RAC) is based on the principle of tracking the binary transitions that begin and end each black and white run.

c is the current transition, e is the last transition in the same line, and c' is the first similar transition past e in the previous line. If ec <= cc', d = ec; if cc' < ec, d = cc'.

CCITT Group 3 and Group 4 Facsimile Coding Standard: the READ Code (Relative Element Address Designate)

The first line in every K lines is coded using 1D runlength coding, and the following K-1 lines are coded using 2D runlength coding.
The reason 1D RLC is used for every Kth line is to suppress propagation of transmission errors.

The Group 4 method is designed for more secure transmission, such as leased data lines where the bit error rate is very low.

Predictive Coding
Motivation:
- The value of the current pixel usually does not change rapidly from those of adjacent pixels, so it can be predicted quite accurately from the previous samples.
- The prediction error has a non-uniform distribution, centered mainly near zero, and can be specified with fewer bits than required for the original sample values, which usually have a uniform distribution.

Lossless Predictive Coding System


(Block diagram.) Encoder: the predictor forms f_p from previously coded samples held in a delay (frame store); the prediction error e = f - f_p is passed to the binary encoder, producing the bitstream B_e. Decoder: the binary decoder recovers e; the same predictor forms f_p from previously reconstructed samples, and the output is f = f_p + e. Because the prediction uses only samples the decoder also has, the reconstruction is exact (lossless).
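A small sketch of this encoder/decoder loop with a trivial previous-sample predictor (the entropy coding of the errors is omitted; all names and the sample data are illustrative):

```python
def predictive_encode(samples, predict):
    """Transmit prediction errors e = f - f_p instead of the samples themselves."""
    errors, history = [], []
    for f in samples:
        errors.append(f - predict(history))  # error goes on to the binary encoder
        history.append(f)
    return errors

def predictive_decode(errors, predict):
    """Mirror of the encoder: each sample is prediction + received error."""
    recon = []
    for e in errors:
        recon.append(predict(recon) + e)
    return recon

previous_sample = lambda h: h[-1] if h else 0    # simplest predictor: f_p = previous sample
data = [100, 102, 103, 103, 101, 99]
errs = predictive_encode(data, previous_sample)
print(errs)                                          # [100, 2, 1, 0, -2, -2], clustered near 0
print(predictive_decode(errs, previous_sample) == data)  # True: reconstruction is exact
```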

Linear Predictor
Let f_0 represent the current pixel, and f_k, k = 1, 2, ..., K, the previous pixels that are used to predict f_0. For example, if f_0 = f(m, n), then f_k = f(m-i, n-j) for certain i, j >= 0. A linear predictor is

f̂_0 = Σ_{k=1}^{K} a_k f_k

The a_k are called linear prediction coefficients, or simply prediction coefficients. The key problem is how to determine the a_k so that a certain criterion is satisfied.

LMMSE Predictor (1)


The design criterion is to minimize the mean square error (MSE) of the predictor:

σ_p² = E{ |f_0 - f̂_0|² } = E{ (f_0 - Σ_{k=1}^{K} a_k f_k)² }

The optimal a_k minimize this error:

∂σ_p²/∂a_l = -2 E{ (f_0 - Σ_{k=1}^{K} a_k f_k) f_l } = 0,   l = 1, 2, ..., K

Σ_{k=1}^{K} a_k E{ f_k f_l } = E{ f_0 f_l },   l = 1, 2, ..., K

Let R(k, l) = E{ f_k f_l }; then

Σ_{k=1}^{K} a_k R(k, l) = R(0, l),   l = 1, 2, ..., K.


LMMSE Predictor (2)


Σ_{k=1}^{K} a_k R(k, l) = R(0, l),   l = 1, 2, ..., K.

In matrix format:

[ R(1,1)  R(2,1)  ...  R(K,1) ] [ a_1 ]   [ R(0,1) ]
[ R(1,2)  R(2,2)  ...  R(K,2) ] [ a_2 ] = [ R(0,2) ]
[   ...     ...   ...    ...  ] [ ... ]   [  ...   ]
[ R(1,K)  R(2,K)  ...  R(K,K) ] [ a_K ]   [ R(0,K) ]

i.e.,  R a = r,  so  a = R⁻¹ r.

The MSE of this predictor is

σ_p² = E{ (f_0 - f̂_0) f_0 } = R(0,0) - Σ_{k=1}^{K} a_k R(k,0) = R(0,0) - rᵀ a = R(0,0) - rᵀ R⁻¹ r.


An Example of Predictive Coder


Assume the current pixel f_0 = f(m, n) is predicted from two pixels, one on the left, f_1 = f(m, n-1), and one on top, f_2 = f(m-1, n):

f̂(m, n) = a_1 f(m, n-1) + a_2 f(m-1, n)

The correlations are (with σ_f² the pixel variance):

R(0,0) = R(1,1) = R(2,2) = σ_f²,   R(0,1) = ρ_h σ_f²,   R(0,2) = ρ_v σ_f²,   R(1,2) = R(2,1) = ρ_d σ_f²

where R(0,0) = E{f(m,n) f(m,n)}, R(1,1) = E{f(m,n-1) f(m,n-1)}, R(2,2) = E{f(m-1,n) f(m-1,n)}, R(0,1) = E{f(m,n) f(m,n-1)}, R(0,2) = E{f(m,n) f(m-1,n)}, R(1,2) = E{f(m,n-1) f(m-1,n)}.

The normal equations become

[ 1    ρ_d ] [ a_1 ]   [ ρ_h ]                a_1 = (ρ_h - ρ_d ρ_v) / (1 - ρ_d²)
[ ρ_d  1   ] [ a_2 ] = [ ρ_v ]     so that    a_2 = (ρ_v - ρ_d ρ_h) / (1 - ρ_d²)

The prediction MSE is

σ_p² = R(0,0) - [R(0,1)  R(0,2)] [a_1; a_2] = σ_f² ( 1 - (ρ_h² + ρ_v² - 2 ρ_h ρ_v ρ_d) / (1 - ρ_d²) )

If the correlation is isotropic, ρ_h = ρ_v = ρ:

a_1 = a_2 = ρ / (1 + ρ_d),   σ_p² = σ_f² ( 1 - 2ρ² / (1 + ρ_d) ).
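A numerical check of these formulas, assuming NumPy (the correlation values 0.95 and 0.9 are made up for illustration):

```python
import numpy as np

def lmmse_predictor(R, r, R00):
    """Coefficients a = R^{-1} r and MSE = R(0,0) - r^T a for a linear predictor."""
    a = np.linalg.solve(R, r)
    return a, R00 - r @ a

# Two-pixel predictor above: f1 = left neighbor, f2 = top neighbor, unit variance,
# isotropic correlation rho_h = rho_v = rho and diagonal correlation rho_d.
rho, rho_d = 0.95, 0.9
R = np.array([[1.0, rho_d],
              [rho_d, 1.0]])
r = np.array([rho, rho])
a, mse = lmmse_predictor(R, r, 1.0)
print(a)                                   # both coefficients = rho / (1 + rho_d) = 0.5
print(mse, 1 - 2 * rho**2 / (1 + rho_d))   # both 0.05, matching sigma_p^2 above
```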

Homework (1)
1. Consider a discrete source with an alphabet A = {a1, a2, ..., aL}. Compute the entropy of the source for the following two cases: (a) the source is uniformly distributed, with p(a_l) = 1/L, l = 1, 2, ..., L; (b) for a particular k, p(a_k) = 1 and p(a_l) = 0 for l != k.

2. A source with three symbols A = {a1, a2, a3} has the following conditional probability distribution:

p(a_i | a_j) = 2/3 if i = j, 1/6 otherwise.

(a) Calculate the 1st order entropy, 2nd order entropy, and 1st order conditional entropy. Hint: the probability of a pair of symbols a_i a_j is P(a_i) P(a_j | a_i).
(b) Design the 1st order, 2nd order, and 1st order conditional Huffman codes for this source. Calculate the resulting bit rate for each case. Compare it to the corresponding lower and upper bounds defined by the entropy. Which method has the lowest bit rate per symbol? How do they compare in complexity?


Homework (2)
3. For the following image, which consists of 8 symbols: (a) determine the probability of each symbol based on its occurrence frequency; (b) find its entropy; (c) design a codebook for these symbols using the Huffman coding method, calculate the average bit rate, and compare it to the entropy.

0 2 2 4
1 3 5 4
2 5 5 6
3 5 6 7

4. For the following bi-level image: (a) give its run-length representation in one-dimensional RLC; assume that each line starts with W and mark the end with EOL. (b) Suppose we want to use the same codewords for the black and white run-lengths; determine the probability of each possible run-length (including the symbol EOL) and calculate the entropy of this source. (c) Determine the Huffman code for the source containing all possible run-lengths, calculate the average bit rate of this code, and compare it to the entropy.

(Figure: a small bi-level image; legend: white pixel, black pixel.)


Reading
R. Gonzalez, Digital Image Processing, Sections 8.1, 8.2.1, 8.2.3, 8.2.4, 8.2.5, and 8.2.9.
A.K. Jain, Fundamentals of Digital Image Processing, Sections 11.1-11.3.

