
INTRODUCTION

Wavelets are mathematical functions that divide data into different frequency components and
then study each component with a resolution matched to its scale. The wavelet analysis
procedure is to adopt a wavelet prototype function, called an analyzing wavelet or mother
wavelet. Wavelets have advantages over traditional Fourier methods in analyzing physical
situations where the signals contain discontinuities and sharp spikes. Where Fourier analysis
consists of breaking up a signal into sine waves of various frequencies, wavelet analysis
breaks up a signal into shifted and scaled versions of the original wavelet. An important
property of a wavelet basis is that it provides multi-resolution analysis, which makes it possible to
analyse any signal without disturbing the time and frequency resolution. Wavelet transforms
have received significant attention recently due to their suitability for a number of important
signal and image processing tasks, such as image compression and denoising.
Wavelet based methods have become the state-of-the-art technique for image
compression since the Embedded Zerotree Wavelet (EZW) image coder developed by
J. M. Shapiro in 1993. Image compression techniques, especially non-reversible or lossy ones,
have been known to grow computationally more complex as they grow more efficient. EZW,
however, is not only competitive in performance with the most complex techniques, but also
extremely fast in execution. The embedded zerotree algorithm has proven to be a simple
algorithm that requires no training, no pre-stored tables or codebooks, and no prior knowledge
of the image source. It generates the bits of the bit stream in order of
importance, thus yielding a fully embedded code. A significant property of the EZW algorithm is
that the encoding can be terminated at any point, making any target bit rate achievable.
Likewise, the decoder can truncate the bit stream at any point and still
produce the same image that would have been encoded at the bit rate corresponding to the
truncated bit stream.
Until now, EZW has been used mainly for image compression. In this thesis, the application of
the EZW algorithm to both image compression and image denoising is explained and
programmed with an example using MATLAB. A noisy image is decomposed into a set of
wavelet coefficients. First, soft thresholding is performed on the wavelet coefficients using
a threshold that varies with the signal-to-noise ratio of the image. The
threshold is chosen so that the reconstructed (denoised) image achieves an acceptable
PSNR. The EZW algorithm is then applied to these coefficients, which are coded in
several passes by measuring them against a particular threshold in each pass. The image is
finally coded into a set of code lists, a dominant and a subordinate code list for every pass,
representing the corresponding significant coefficient values. The reduction in bits due to the
EZW algorithm is also explained.

CHAPTER 1
WAVELETS
Introduction
A wavelet literally means a small wave or a ripple. In mathematical terms it is defined
as a function which breaks the given data into small frequency components, where each
frequency component can be analyzed separately. Wavelets find their importance in the fields
of mathematics, medicine, quantum physics, biology (for cell membrane recognition),
metallurgy (to characterize rough surfaces), internet traffic description and, in
particular, electrical engineering. They are also used in many other applications such as digital
signal processing, image compression, image denoising, fingerprint recognition, image
encoding, etc.
Wavelets are a relatively new signal processing method, introduced to
overcome the shortcomings of the Fourier transform, which cannot be applied to non-stationary
signals. A wavelet is a waveform of effectively limited duration that has an
average value of zero.
Wavelets are able to separate data into different frequency components, and study each
component with a resolution matched to its scale. In wavelet transforms a signal is
represented by a sum of scaled and shifted wavelets. Narrow wavelets are comparable to high
frequency sinusoids and wide wavelets are comparable to low frequency sinusoids. All
wavelets are derived from a single mother wavelet.

Different types of Wavelets

The first step in using wavelet analysis for any kind of application is to decide on a
type of mother wavelet. The original signal can then be represented as translations and dilations
of the mother wavelet. Many mother wavelet families exist and each has its own
strengths and drawbacks. The different families make different trade-offs between how
compactly they are localized in space and how smooth they are. Hence, the choice of
wavelet usually depends on the particular type of application.

Haar Wavelet:
This is the simplest of all wavelets. The Haar wavelet is discontinuous and resembles a step
function, as shown in the figure.

Fig1.1: Haar Wavelet

The main advantages of the Haar wavelet are that it is orthogonal and that it has the symmetry
property.
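As a quick illustration (in Python rather than the MATLAB used later in this thesis; the function name is ours), the Haar mother wavelet can be written down directly, and a numerical check confirms its zero average value:

```python
def haar(t):
    """Haar mother wavelet: +1 on [0, 0.5), -1 on [0.5, 1), 0 elsewhere."""
    if 0.0 <= t < 0.5:
        return 1.0
    if 0.5 <= t < 1.0:
        return -1.0
    return 0.0

# The positive and negative halves cancel, so the average value is zero.
average = sum(haar(k / 1000.0) for k in range(1000)) / 1000.0
```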

Daubechies Wavelet

Daubechies wavelets are compactly supported orthogonal wavelets and support both
CWT and DWT implementations. The Daubechies family of wavelets is
represented as ‘dbN’, where ‘db’ stands for Daubechies and ‘N’ is the number of vanishing
moments. The db1 wavelet is the same as the Haar wavelet.

db2 db3 db4


Fig.1.2: Daubechies Wavelets

Biorthogonal Wavelets

These wavelets exhibit biorthogonality and symmetry. Most importantly, they exhibit the
linear-phase property, which is needed for signal and image reconstruction. They are
widely used in various signal and image processing applications that require perfect
reconstruction properties.

Fig1.3: Biorthogonal Wavelets (a) bior2.2 (b) bior3.9

CHAPTER 2
WAVELET TRANSFORM

The application of wavelets to a particular signal involves multiplying different
segments of the signal with a wavelet function, which generates the Continuous Wavelet Transform
(CWT). As the CWT cannot be practically computed on a digital computer, it is necessary to
discretize the transform. This yields the Discrete Wavelet Transform (DWT), obtained by
sampling the continuous wavelet transform at a frequency of at least twice its highest frequency
(the Nyquist criterion).
Discrete Wavelet Transform
In the DWT, filters of different cutoff frequencies are used to analyze a signal at
different scales. To analyze the high and low frequencies, the signal is passed through a series of high
pass and low pass filters. The resolution of the signal is changed by the filtering operations and
the scale is changed by upsampling and downsampling.
Downsampling a signal corresponds to reducing the sampling rate or removing some
of the samples of the signal. Downsampling by a factor of ‘n’ reduces the number of samples
in the signal ‘n’ times.
Upsampling a signal corresponds to increasing the sampling rate of a signal by adding
new samples to the signal. Upsampling a signal by a factor of ‘n’ increases the number of
samples in the signal by a factor of ‘n’.
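A hedged sketch of these two operations in Python (our own helper names, not part of any library):

```python
def downsample(x, n):
    """Keep every n-th sample: the output has n times fewer samples."""
    return x[::n]

def upsample(x, n):
    """Insert n-1 zeros after each sample: the output has n times more samples."""
    y = []
    for sample in x:
        y.append(sample)
        y.extend([0] * (n - 1))
    return y
```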

Fig2.1: (a) Original sequence (b) Upsampled sequence

The decomposition of the signal into different frequency bands is obtained by
successive high pass and low pass filtering of the time domain signal. These filters are
called analysis filters. At every level, filtering and downsampling result in half the
number of samples: the high pass and low pass outputs after the first stage each contain half
the number of samples. These constitute the DWT coefficients of level one.
Successive decompositions are performed until only two DWT coefficients are left. The DWT of
the original signal is obtained by concatenating all the coefficients, starting from the last
level of the decomposition to the first.

The reverse of the above procedure reconstructs the signal. In this stage, upsampling
followed by low pass and high pass filtering (with the so-called synthesis filters) is referred to as
interpolation of the signal. The overall procedure is known as subband coding.
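A minimal sketch of one analysis/synthesis stage, assuming the orthonormal Haar filter pair (the thesis does not fix a filter here; the names are ours). Filtering and downsampling are folded into one step, and the synthesis side reconstructs the input exactly:

```python
import math

def analyze(x):
    """One analysis stage with the orthonormal Haar pair: a low-pass
    (approximation) and a high-pass (detail) output, each half as long
    as the input."""
    s = 1.0 / math.sqrt(2.0)
    approx = [s * (x[2*i] + x[2*i + 1]) for i in range(len(x) // 2)]
    detail = [s * (x[2*i] - x[2*i + 1]) for i in range(len(x) // 2)]
    return approx, detail

def synthesize(approx, detail):
    """Upsample and filter with the synthesis pair: perfect reconstruction."""
    s = 1.0 / math.sqrt(2.0)
    x = []
    for a, d in zip(approx, detail):
        x.append(s * (a + d))
        x.append(s * (a - d))
    return x
```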

Fig2.2:Four band subband encoding

In the case of an image, the low frequency part carries most of the information and
the high frequency part adds intricate details. The filters are applied first along the rows
and then along the columns; this operation results in four bands: low-low, low-high,
high-low and high-high. The low-low band can be processed further, subdividing it again
into four bands.
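The row-then-column filtering can be sketched with unnormalized Haar averages and differences (a simplification of ours; band naming conventions vary between texts):

```python
def dwt2_haar(img):
    """One 2-D decomposition level of a square image: pairwise averages
    (low pass) and differences (high pass) along rows, then along columns,
    yielding the four bands LL, HL, LH, HH."""
    n = len(img)
    h = n // 2
    # rows: first half = averages of sample pairs, second half = differences
    rows = [[(r[2*j] + r[2*j+1]) / 2 for j in range(h)] +
            [(r[2*j] - r[2*j+1]) / 2 for j in range(h)] for r in img]
    # columns: the same averaging/differencing down each column
    ll = [[(rows[2*i][j] + rows[2*i+1][j]) / 2 for j in range(h)] for i in range(h)]
    hl = [[(rows[2*i][j] + rows[2*i+1][j]) / 2 for j in range(h, n)] for i in range(h)]
    lh = [[(rows[2*i][j] - rows[2*i+1][j]) / 2 for j in range(h)] for i in range(h)]
    hh = [[(rows[2*i][j] - rows[2*i+1][j]) / 2 for j in range(h, n)] for i in range(h)]
    return ll, hl, lh, hh
```

On a constant image all detail bands are zero and LL keeps the constant, as expected for averages and differences.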

Fig2.3: 2-DWavelet decomposition for the Image

CHAPTER 3
IMAGE COMPRESSION AND DENOISING

Introduction
Before going into the details of image compression and denoising, one should have a clear idea
about images. All digital images are two dimensional signals that specify a color value for
every pixel within their space. Uncompressed images require a considerable amount of
storage capacity and transmission bandwidth. An image's storage requirement is a function of
its size and color depth. Color depth describes the number of different colors
available to an image and is measured in bits: an m-bit color depth means 2^m colors are
available. The human eye is said to be able to differentiate between a maximum of 2^24 colors.
True color images have colors specified with red, green and blue values; there are
256 shades each of red, green and blue, which combine to give 2^24 different colors
in the true color palette. Most commonly used is 8-bit color, which offers 256 different colors.
An 8-bit image is stored with color values ranging from 0 to 255, and the mapping of these values
to RGB values is stored in a color map.
Consider a 512×512 image with 8-bit color depth. The size of the image in bits is
equal to the number of pixels multiplied by the color depth in bits.
Size = 512×512×8 bits = 256 kilobytes
This is the storage requirement for an image of average size and few colors. A true color
version of the same image takes three times as much space, and images used in photogrammetry and
aerial mapping often occupy more than 100 megabytes. There is therefore a need to reduce the
size of the image. So image compression is necessary.
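The arithmetic above can be checked with a few lines (the helper name is ours):

```python
def image_bits(width, height, bit_depth):
    """Uncompressed image size in bits: number of pixels times colour depth."""
    return width * height * bit_depth

kilobytes = image_bits(512, 512, 8) // 8 // 1024        # 8-bit image: 256 KB
true_color_kb = image_bits(512, 512, 24) // 8 // 1024   # 24-bit image: 3x as much
```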

Image compression

Image compression is the minimization of the size of a graphics file by reducing the
number of bits needed to represent the image without degrading its quality to an
unacceptable level. The reduction in file size allows more images to be stored in a given amount
of memory and also makes it faster for images to be sent over the internet or
downloaded from web pages. For example, if the original picture is 100 kilobytes, a high density
diskette with a capacity of 1.4 MB can store about 14 pictures. If the picture can be
compressed by 20 to 1 without any perceptual distortion, the capacity increases to about
280 pictures. So with a 20:1 compression ratio, the space, bandwidth, and transmission time
requirements can be reduced by a factor of 20 with acceptable quality. A common
characteristic of images is that neighboring pixels are correlated and therefore contain
redundant information, so the first task is to find a less correlated representation of the
image. The two fundamental components of compression are redundancy reduction, which
aims at removing duplication from the image, and irrelevancy reduction, which
omits parts of the signal that will not be noticed by the human visual system.
Graphics files can be compressed in two ways: lossless and lossy. In a lossless
compression scheme, the reconstructed image after compression is numerically identical to
the original one. TIFF is an image format that can be compressed in a lossless way, but it
achieves only a modest amount of compression. In a lossy compression scheme, the
reconstructed image is degraded compared to the original, because irrelevant
information is removed from the image entirely. With this scheme we can achieve much higher
compression ratios and, if the scheme is well designed, no visible loss is perceived, so it
can be called visually lossless. JPEG is an image format based on this scheme.

Wavelets in Image Compression


Before the evolution of wavelets, the Discrete Cosine Transform (DCT) was used
extensively for image compression. The DCT can be regarded as a discrete time version of the
Fourier cosine series. It is close to the discrete Fourier transform, which can be used to convert a
signal into elementary frequency components, but unlike the DFT it is real valued and gives a
better approximation of a signal with very few coefficients. Despite the advantages of
DCT-based JPEG compression, such as simplicity, satisfactory performance and the availability of
special purpose hardware for implementation, it has shortcomings. Since the image
needs to be “blocked” in the DCT, correlation across the block boundaries cannot be eliminated. This
results in noticeable blocking artifacts, particularly at lower bit rates, which can be annoying to the
human eye. Lapped Orthogonal Transforms (LOT), which employ smoothly overlapping blocks,
were proposed to solve these problems. Although blocking effects are reduced in LOT
compressed images, the increased computational complexity of such algorithms does not justify
replacing the DCT with the LOT.
Over the past several years, the wavelet transform has gained widespread
acceptance in signal processing, especially image processing. The DWT can be used efficiently in
image coding applications because of its data reduction capability. The basis of the DWT
can be composed of any functions (wavelets) that satisfy the requirements of
multiresolution analysis; the choice of wavelet depends on the content and resolution of the
image. The DWT has properties which make it a better choice than the DCT for image
compression, especially at higher resolutions. In a DWT based system, the entire image is
transformed and compressed as a single data object, rather than block by block as in a DCT
based system, allowing a uniform distribution of compression error across the entire
image. The DWT has higher decorrelation and energy compaction efficiency, so it can
provide better image quality at higher compression ratios. The localization of wavelet functions,
in both time and frequency, gives the DWT the potential for a good representation of images with
fewer coefficients, and it represents the image at different resolution levels. The discrete wavelet
transform is identical to a hierarchical subband system. When the discrete wavelet
transform is applied to the image, to begin the decomposition the image is divided into four
subbands, each subband corresponding to the output of two consecutive filters in the
subband system. Hence, they are named LL1, HL1, LH1 and HH1 as shown in the figure.

Fig.3.1: First stage of DWT of image

Each coefficient represents a spatial area corresponding to approximately a 2×2 area of
the original picture. Subband LL1 contains the coarser wavelet coefficients and hence most of the
information of the image, while the subbands HL1, LH1 and HH1 contain
the finest scale wavelet coefficients. In other words, LL1 corresponds to the approximation
of the image and HL1, LH1 and HH1 correspond to the details of the image. The
lowest frequency subband (LL1, the approximation) contains a larger part of the information than the
higher frequency subbands (HL1, LH1 and HH1, the details).
To obtain the next coarser scale of wavelet coefficients, the subband LL1 is further
decomposed and critically sampled. Once again three detail subbands are generated, and
the remaining lowest frequency subband (the approximation) contains the coarser
information at this scale.

Fig.3.2: Two scale wavelet decomposition of Image

In the two-scale wavelet decomposition, each coefficient in the subbands LL2, HL2, LH2 and HH2
represents a spatial area corresponding to approximately a 4×4 area of the original image, as
shown in the figure. The process can be continued until the final or desired scale is
reached. For good reconstruction results, the decomposition of an image is typically carried
to 3 to 5 scales.

Image Denoising

In the real world, one needs image analysis to access useful data. Unfortunately, in
some fields like astronomy or medicine, the images taken of bodies or of space are often
disturbed. Phenomena like reflection create ‘noise’ in images, and noisy images do not
provide good data to work with, so one often cannot give a correct solution to a problem.
Under ideal conditions this noise might decrease to negligible levels and require no
denoising, but in practice, before further data analysis, we must remove the noise corrupting
a signal in order to recover that signal. In medical images, noise suppression is a particularly
delicate and difficult task: a tradeoff between noise reduction and the preservation of the
actual image features has to be made in such a way that the diagnostically relevant image
content is retained.
If one wants to delete the noise, there is every chance of erasing the
information behind it. The challenge is to remove as much noise as possible without
losing important data. In the early days, methods using first or
second order statistics in the original signal domain were used. Later, with the evolution of
transforms, the question arose whether denoising should take place in the original signal domain
or in a transform domain, and, if in a transform domain, whether one should use the Fourier
transform for the time-frequency domain or the wavelet transform for the time-scale domain.
Many people have described the development of wavelet transforms as revolutionizing modern
signal and image processing over the past two decades.
After the evolution of wavelets, the method of wavelet thresholding has been
used extensively for denoising images. Wavelet shrinkage is a method of noise
reduction that tries to remove the wavelet coefficients that correspond to noise. Since
noise is uncorrelated and usually small with respect to the signal, it is assumed that the
corresponding wavelet coefficients that result from it will be uncorrelated too and probably
small as well. The idea is therefore to remove these small wavelet coefficients before
reconstruction and thereby remove the noise. The method is not perfect, because small
wavelet coefficients that result from parts of the signal of interest cannot be
distinguished from noisy coefficients and will be thrown away as well, which is unwarranted.
But if the threshold is well chosen, based on the statistical properties of the input signal and the
required reconstruction quality, the result is quite remarkable, although the denoised image
still looks somewhat blurred.

There are two types of wavelet shrinkage. One is hard thresholding, in which one
simply discards all wavelet coefficients whose magnitude is smaller than a certain threshold.
The other is soft thresholding, where one subtracts a constant value from the magnitude of all
wavelet coefficients and discards all the coefficients that thereby fall below zero.
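The two shrinkage rules can be sketched as follows (our own function names; this is the standard per-coefficient formulation):

```python
def hard_threshold(w, t):
    """Keep a coefficient unchanged if its magnitude exceeds t, else zero it."""
    return w if abs(w) > t else 0.0

def soft_threshold(w, t):
    """Shrink the magnitude of every coefficient by t; anything that
    falls below zero is discarded."""
    m = abs(w) - t
    if m < 0:
        return 0.0
    return m if w >= 0 else -m
```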
Wavelet shrinkage is very similar to lossy EZW compression and decompression.
Lossy EZW simply throws out all the wavelet coefficients that do not meet the quality
criterion one sets for reconstruction. These are the small coefficients, which add the extra
details. In the way EZW works, these are the coefficients that are coded at the end of the EZW
stream and used to lift already decoded coefficients to their original level.

CHAPTER 4
EZW Encoding

Introduction

The EZW algorithm is based on the wavelet coefficients of an image and uses wavelet
transforms, which explains the ‘W’ in EZW. A biorthogonal filter is used to determine
the wavelet coefficients. Biorthogonal wavelets exhibit biorthogonality and symmetry. Most
importantly, they exhibit the linear-phase property needed for signal and image
reconstruction, and they are widely used in various signal and image processing applications
that require perfect reconstruction properties.
The EZW encoder is based on progressive encoding, compressing an image into a bit
stream of increasing accuracy. This means that as more bits are added to the stream, the
decoded image contains more detail. This kind of encoding is known as embedded
encoding, which explains the ‘E’ in EZW. The remarkable property of embedded encoding is that
terminating the encoding at any arbitrary point in the encoding process does not produce any
artifacts.
The ‘Z’ in EZW stands for the concept of the zerotree, which is the basis for the entire
encoding process. Zerotree coding can efficiently encode a significance map
of the wavelet coefficients by predicting the absence of significant information across scales.

The Zerotree

A wavelet transform transforms a signal from the time domain to the joint time-scale domain,
which means that the wavelet coefficients are two dimensional. In order to compress the
transformed signal, not only the coefficient values but also their positions in
time have to be coded. In the wavelet transform, sub-sampling of the coefficients is performed at
every level, which creates a relationship between the wavelet coefficients across the
different subbands.
A coefficient in a low subband can be thought of as having four descendants in the next
higher subband. Each of the four descendants also has four descendants in the next higher
subband, and hence a quad-tree develops as shown in the figure.

Fig4.1: Relation between wavelet coefficients in different subbands

A zerotree can be defined as a quad-tree whose root is insignificant with respect to the current
threshold and whose nodes are all equal to or smaller than the root. The tree is coded with a
single symbol during encoding and can be reconstructed by the decoder as a quad-tree filled
with zeros.
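Using the parent-child indexing described above (the children of coefficient (r, c) sit at (2r, 2c), (2r, 2c+1), (2r+1, 2c) and (2r+1, 2c+1)), a zerotree test can be sketched as below. This treats the matrix as a single quad-tree rooted at (0, 0); Shapiro's handling of the coarsest subband differs slightly, so take it as an illustration only, with names of ours:

```python
def children(r, c, size):
    """Children of coefficient (r, c) in the next finer subband."""
    kids = [(2*r, 2*c), (2*r, 2*c + 1), (2*r + 1, 2*c), (2*r + 1, 2*c + 1)]
    # the last guard also stops (0, 0) from listing itself as a child
    return [(i, j) for (i, j) in kids if i < size and j < size and (i, j) != (r, c)]

def is_zerotree(coeffs, r, c, threshold):
    """True if (r, c) and every one of its descendants are insignificant."""
    if abs(coeffs[r][c]) >= threshold:
        return False
    return all(is_zerotree(coeffs, i, j, threshold)
               for (i, j) in children(r, c, len(coeffs)))
```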

Fig4.2: Quad-tree : Every root with four leaves

Scanning of coefficients
We will follow the Morton Scan order for scanning of coefficients as depicted in the
following picture.

Fig4.3: Morton Scan Order
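A Morton (Z-order) scan can be generated by interleaving the bits of the row and column indices (a common construction; the names are ours):

```python
def morton_key(r, c):
    """Interleave row and column bits, with each row bit above its column bit."""
    key = 0
    for b in range(16):
        key |= ((c >> b) & 1) << (2 * b)
        key |= ((r >> b) & 1) << (2 * b + 1)
    return key

def morton_order(size):
    """All (row, col) positions of a size x size matrix in Morton scan order."""
    cells = [(r, c) for r in range(size) for c in range(size)]
    return sorted(cells, key=lambda rc: morton_key(rc[0], rc[1]))
```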

Encoding Process

In order to encode a matrix of values, the simplest way would be to represent
them directly in bits, which would consume a large number of bits depending
on the magnitude of the values. Instead, if a threshold is set, the same matrix can be encoded
simply by specifying whether each value is greater or smaller than the threshold, using the
symbols ‘H’ or ‘L’. This considerably reduces the number of bits required to transmit
the matrix, and better results can be obtained by repeating the procedure with different
threshold values. The same logic is followed in the EZW encoding process, where the
zerotree plays an additional significant role in further reducing the number of coefficients to be
transmitted. This can be explained with a simple example.

Fig4.4: Zerotree Encoding

In the above figure, the Morton scanning of an unknown matrix and the corresponding
codes obtained by measuring the values against a particular threshold are shown. In the
normal process of encoding, the matrix would be encoded as ‘HHLH HLLH LLLL
HLHL’. With the application of the zerotree concept, the code can be replaced with ‘HHTH
HLLH HLHL’: as the L in the upper left part is followed by four Ls in the lower left part (its
descendants), these five codes are replaced by a single code ‘T’. A coefficient at the coarse
scale is called a parent and all the coefficients corresponding to the same spatial location at
the next finer scale of similar orientation are called its children. For a given parent, the set of all
coefficients at all finer scales of similar orientation is called its descendants.

Fig4.5: Parent child dependencies of subbands


As the first step in the encoding process a threshold value ‘T0’ is chosen. Following
bit plane coding, the threshold can be chosen as

T0 = 2^⌊log2(MAX |γ(x, y)|)⌋

where MAX(.) represents the maximum magnitude over the image and γ(x, y) represents the
coefficient at position (x, y). The coefficients are then compared against this threshold value in a
predefined scan order. The scanning of coefficients is performed in such a way that no child node
is scanned before its parent. Then codes are generated for every coefficient as stated,

p- Positive, if it is larger than the threshold.


n- Negative, if it is smaller than minus the threshold.
t- Zerotree, if it is the root of the zerotree.
z- Isolated zero, if it is smaller than the threshold but it is not the root of a zerotree.
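The initial-threshold rule above can be sketched as (the helper name is ours):

```python
import math

def initial_threshold(coeffs):
    """T0 = 2 ** floor(log2(max magnitude)): the largest coefficient then
    lies in the first significance interval [T0, 2*T0)."""
    peak = max(abs(v) for row in coeffs for v in row)
    return 2 ** int(math.floor(math.log2(peak)))
```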

Every pass or scan of all wavelet coefficients has two sub-passes,


Dominant pass
Subordinate pass
Dominant pass creates a significance map of all coefficients with respect to the current
threshold. Hence, for the first pass, the significant coefficients will lie in the range (T0, 2T0).
In order to keep track of whether coefficients have been found significant (already mapped or not),
EZW maintains two separate lists, the dominant list and the subordinate list. The dominant list
contains the location information of all the coefficients that have not yet been found significant,
ordered so that coefficients of the current subband appear in the list before
coefficients of the next subband. The subordinate list contains the location
information of all the coefficients that have been found significant in previous passes.
Hence, the coefficients on the subordinate list lie in the
interval (T0, 2T0) after the first pass. The subordinate pass outputs a ‘1’ if a coefficient lies in the
upper half of its interval, (3T0/2, 2T0), and a ‘0’ if it lies in the lower half,
(T0, 3T0/2), as shown in the figure.

Fig4.6: First dominant and subordinate pass and coefficient quantization values

The process is continued for the following passes with the threshold halved each
time,

Ti = Ti-1/2
The dominant passes are made only on those coefficients on the dominant list which
were not found significant in a previous pass. If a coefficient is found to be
significant in one dominant pass, it is moved to the subordinate list and not scanned in
later passes. The encoding can be stopped at any time, for example when a target bit rate is
reached. If terminated at an arbitrary point, the last symbol of the bit stream may not decode to a
valid code word, since a code word has been truncated. Still, the decoder can reproduce the
image at the reduced rate at which the encoding was stopped. This makes the embedded
code very useful in rate-constrained or distortion-constrained applications like image
transfer over the internet.
Every pass has two sub-passes, the dominant pass and the subordinate pass. The dominant pass
contains the codes for all positions, whereas the subordinate pass contains the refinement bit
corresponding to each significant value of every pass. Values coded as ‘p’ and ‘n’ are said to be
significant, as their absolute value is greater than the threshold. After being coded, their
positions are replaced with zeros in the original matrix, in order to prevent re-coding of the
same values. The process can be depicted in the form of the flow diagram shown in the figure.

Fig4.7: Flow chart for encoding a coefficient for a significant map

EZW Algorithm

The EZW algorithm can be summarized as follows:

1. The wavelet coefficients are placed on the dominant list. The initial threshold is set as

T0 = 2^⌊log2(MAX |γ(x, y)|)⌋

where MAX(γ(x, y)) is the maximum magnitude in the coefficient matrix.

2. Dominant pass: scan the coefficients on the dominant list with respect to the current
threshold Ti and the subband ordering. Depending on its value, each coefficient is
coded as
p- Positive
n- Negative
z- Isolated zero
t- Root of a zerotree
3. The significant values (coded as ‘p’ or ‘n’) are left for the subordinate pass, which
outputs either a ‘1’ or a ‘0’, depending on whether the value is in the upper or lower half of its
quantization interval.
4. The positions of the significant values of the current pass are replaced by zeros and the
dominant list is updated. The subordinate list is also updated with the positions of the
significant values of the current pass.
5. The threshold is halved, i.e. Ti+1 = Ti/2, and steps (2), (3) and (4) are repeated
until the desired image quality or bit rate is reached.
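The steps above can be sketched compactly for one dominant pass, under simplifying assumptions of ours: the whole matrix is treated as a single quad-tree rooted at (0, 0) (Shapiro's parent-child relation in the coarsest subband differs slightly), the scan is a plain Morton order, and a finest-scale insignificant coefficient is coded with the combined symbol 'z', since it has no descendants. All names are ours, not from the thesis code:

```python
def morton_order(size):
    """Positions of a size x size matrix in Morton (Z) scan order."""
    def key(rc):
        r, c = rc
        k = 0
        for b in range(size.bit_length()):
            k |= ((c >> b) & 1) << (2 * b)
            k |= ((r >> b) & 1) << (2 * b + 1)
        return k
    return sorted([(r, c) for r in range(size) for c in range(size)], key=key)

def descendants(r, c, size):
    """All finer-scale coefficients in the quad-tree below (r, c)."""
    out = []
    for (i, j) in [(2*r, 2*c), (2*r, 2*c+1), (2*r+1, 2*c), (2*r+1, 2*c+1)]:
        if i < size and j < size and (i, j) != (r, c):
            out.append((i, j))
            out.extend(descendants(i, j, size))
    return out

def dominant_pass(coeffs, threshold):
    """Emit p/n for significant coefficients, t for a zerotree root
    (insignificant, with all descendants insignificant), z otherwise;
    coefficients inside an emitted zerotree are skipped."""
    size = len(coeffs)
    skipped, symbols = set(), []
    for (r, c) in morton_order(size):
        if (r, c) in skipped:
            continue
        v = coeffs[r][c]
        if abs(v) >= threshold:
            symbols.append('p' if v > 0 else 'n')
            continue
        desc = descendants(r, c, size)
        if desc and all(abs(coeffs[i][j]) < threshold for (i, j) in desc):
            symbols.append('t')
            skipped.update(desc)
        else:
            symbols.append('z')
    return ''.join(symbols)
```

On a small hypothetical 4×4 matrix, a significant root is coded 'p', an insignificant coefficient with a significant descendant is coded 'z', and whole insignificant subtrees collapse to single 't' symbols.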

Example of EZW Encoding and Decoding

Let us assume an 8×8 image with 3-scale wavelet transform coefficients to explain the order
of operations. Out of the six dominant passes and five subordinate passes that can be
generated, the sequence of operations that generates the first and second dominant and subordinate
passes is explained. The Morton scan order is followed for scanning the coefficients.

Figure 4.8: An example of an 8×8 image that is transformed by a certain Wavelet transform.

First level

Encoder Section:

 A maximum coefficient value has been identified as 63. Hence the initial threshold is,

T0 = 2^⌊log2(63)⌋ = 32


 The scanning starts with the first coefficient 63. As this is greater than 32, a positive code ‘p’
is generated.
 The next coefficient (as per Morton scan) is -34 which is less than minus threshold and hence
an ‘n’ is generated.
 Even though the following coefficient 31 is insignificant with respect to the threshold 32, it
has a significant descendant two generations down in the subband LH1 with magnitude 47.
Thus, an isolated zero symbol, ‘z’ is generated.
 The coefficient 23 is less than 32 and all descendants (3, -12, -14, 8) in subband HH2 and all
coefficients in subband HH1 are insignificant. Hence, it is coded with a zerotree ‘t’, and no
code will be generated for any coefficients in subbands HH2 and HH1 during the current
dominant pass.
 Next, 49 is greater than 32, hence a ‘p’ is generated.

 The magnitude 10 is less than 32 and all descendants (-12, 7, 6, -1) in HL1 also have
magnitudes less than 32. Thus a zerotree symbol ‘t’ is generated.
 Though 14 is insignificant with respect to 32, its descendant 47 in LH1 is significant. Thus,
an isolated zero symbol ‘z’ is generated.
 15, with its descendants (-5, 9, 3, 0) in LH1, is entirely insignificant. Hence it is a zerotree root.
 -9 and -7 are zerotree roots, as their descendants (2, -3, 5, 11) and (6, -6, 5, 6) are all insignificant.
 It is to be noted that no symbols were generated in subband HH2, which would normally
precede subband HL1 in the scan. As HL1, LH1 and HH1 have no descendants, in the coding
process both isolated zero and zerotree are combined into the single code ‘z’. Hence, 7,
13, 3, 4 and -1 all receive the symbol ‘z’.
 As 47 is significant with respect to 32, it carries a ‘p’.
 -3 and 2 of LH1 also carry ‘z’.
The series of codes generated at the end of the first dominant pass is,
D1: p n z t p t t t t z t t t t t t t p t t
The values 63, 34, 49 and 47 (in magnitude) have been identified as significant in the first
dominant pass, so their positions are replaced with zeros and the four values are moved to the
subordinate list.
Sublist = [63 34 49 47]
In the first subordinate pass, which covers the interval (32, 64), the significant coefficients of the
above dominant pass are refined. The magnitudes are partitioned into the uncertainty intervals
(32, 48) and (48, 64), with symbols ‘0’ and ‘1’ respectively.
The first entry, 63, lies in the upper uncertainty interval (48, 64) and is hence given a ‘1’. The
next value, 34, lies in (32, 48) and hence gets a ‘0’. Similarly 49 and 47 get ‘1’ and ‘0’, as they
belong to the upper and lower uncertainty intervals respectively. The output of the first
subordinate pass is,
S1: 1010
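The refinement rule of the subordinate pass reduces to reading one bit of each magnitude: a magnitude lies in the upper half of its current uncertainty interval exactly when the bit just below the current threshold bit is set. A sketch (the name is ours):

```python
def subordinate_pass(magnitudes, threshold):
    """One refinement bit per significant magnitude: '1' if it lies in the
    upper half of its current uncertainty interval (of width `threshold`)."""
    half = threshold // 2
    return ''.join('1' if (m // half) % 2 else '0' for m in magnitudes)
```

With the example values this reproduces S1 = 1010 at threshold 32 and S2 = 100110 at threshold 16.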

Decoder Section:

The encoded bits undergo the same two passes, dominant and subordinate. In the
dominant pass, the decoder checks whether each encoded symbol is a 'p', 'n', 't' or 'z' and
reconstructs the corresponding coefficient: 'p' and 'n' are decoded to +48 and -48, the centre
of the uncertainty interval [32, 64) with the appropriate sign, while 't' and 'z' decode to zero.
In the subordinate pass, it checks whether each refinement bit is '1' or '0' and accordingly
moves the already reconstructed value by a quarter of the threshold towards the original value.
After the first level the reconstructed coefficients are 56, -40, 56, 40.
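These values follow from the rule that a coefficient found significant at threshold T is first set to ±(T + T/2) and then nudged by T/4 according to its refinement bit. A small Python sketch of this arithmetic (illustrative only):

```python
def reconstruct(symbol, bit, threshold):
    # Dominant pass: 'p' -> +(T + T/2), 'n' -> -(T + T/2), the centre of
    # the uncertainty interval. Subordinate pass: move the magnitude up or
    # down by T/4 depending on the refinement bit.
    value = (threshold + threshold // 2) * (1 if symbol == 'p' else -1)
    step = threshold // 4
    sign = 1 if value >= 0 else -1
    return value + sign * step if bit == '1' else value - sign * step

coeffs = [reconstruct(s, b, 32) for s, b in zip('pnpp', '1010')]
print(coeffs)  # -> [56, -40, 56, 40]
```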

Second Level

Encoder Section:

The above completes the first level, at the threshold of 32. The process continues with
a second dominant pass at the new threshold of 16, followed by a second subordinate pass.
In this dominant pass, -31 and 23 are found to be significant; they are coded as 'n' and 'p'
respectively and their magnitudes are moved to the sublist.
The series of codes generated at the end of second dominant pass would be,
D2: z t n p t t t t t t t t
The sublist now reads
Sublist= [63 34 49 47 31 23]
In the second subordinate pass, which covers the interval [16, 64), the significant coefficients
of the dominant passes so far are refined. The magnitudes are partitioned into the uncertainty
intervals [16, 24), [24, 32), [32, 40), [40, 48), [48, 56) and [56, 64), with symbols '0', '1', '0',
'1', '0', '1' respectively.
The first entry, 63, lies in the upper uncertainty interval [56, 64) and is hence given a '1'.
The next value, 34, lies in [32, 40) and hence gets a '0'. The next value, 49, lies in [48, 56)
and is hence given a '0'. Similarly, 47, 31 and 23 get '1', '1' and '0', as they lie in the upper,
upper and lower halves of their respective uncertainty intervals. The output of the second
subordinate pass would be,

S2: 100110
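At every pass, the refinement bit is simply the next binary digit of the magnitude: '1' when the magnitude lies in the upper half of its current width-T uncertainty interval. A one-line Python check (illustrative) reproduces both refinement strings:

```python
def refinement_bit(magnitude, threshold):
    # '1' if the magnitude lies in the upper half of its uncertainty
    # interval at this pass, i.e. the next binary digit after those
    # already coded.
    return (magnitude // (threshold // 2)) % 2

s2 = ''.join(str(refinement_bit(m, 16)) for m in [63, 34, 49, 47, 31, 23])
print(s2)  # -> '100110'
```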

Decoder Section

As in the first level, the encoded bits undergo a dominant and a subordinate pass. The
dominant pass reconstructs the newly significant coefficients from the 'p' and 'n' symbols,
and the subordinate pass refines every significant coefficient by a quarter of the current
threshold according to its '1' or '0' bit.

After the second level the reconstructed coefficients are 60, -36, 52, 44, -28, 20.

This shows clearly that the reconstructed coefficients become more refined at every
level. Encoding and decoding continue in the same way for the remaining levels and can be
stopped at any time; carrying the process through all six levels gives perfect reconstruction.
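This successive refinement can be seen on a single coefficient: the reconstruction starts at the centre of [T, 2T) and the correction step halves with every pass. A small Python sketch (illustrative):

```python
def successive_approx(coeff, init_threshold, passes):
    # Reconstruction of one coefficient found significant at the initial
    # threshold T: start at T + T/2, then each subordinate pass moves the
    # estimate by a step that halves every level.
    value = init_threshold + init_threshold / 2
    step = init_threshold / 4
    history = [value]
    for _ in range(passes):
        value += step if coeff >= value else -step
        history.append(value)
        step /= 2
    return history

print(successive_approx(63, 32, 4))  # -> [48.0, 56.0, 60.0, 62.0, 63.0]
```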

The continual alternation between dominant and subordinate passes adds more and
more refinement bits, raising the fidelity of the image, and can be stopped at any desired
bit rate or image quality. After the inverse wavelet transform of the decoded coefficients,
the original image can be reproduced.

Calculation of MSE and PSNR

Two error metrics are used to compare the image denoising techniques: the MSE (Mean
Square Error) and the PSNR (Peak Signal-to-Noise Ratio).

The MSE is defined by

    MSE = (1 / (N1·N2)) · Σ_{n1=0}^{N1−1} Σ_{n2=0}^{N2−1} ( x[n1, n2] − x'[n1, n2] )²

and the PSNR by

    PSNR = 10 · log10( 255² / MSE )

In the above equations, x[n1, n2] is the original image, x'[n1, n2] is the reconstructed image,
N1 and N2 are the dimensions of the image, and 255 is the peak value of an 8-bit pixel.
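The two metrics translate directly into code. A possible Python sketch (the report computes the same quantities in MATLAB in the appendix):

```python
import math

def mse_psnr(x, x_rec, peak=255.0):
    # x, x_rec: 2-D lists (rows of pixels) of equal size N1 x N2.
    # MSE: mean squared pixel difference; PSNR: 10*log10(peak^2 / MSE).
    n1, n2 = len(x), len(x[0])
    mse = sum((x[i][j] - x_rec[i][j]) ** 2
              for i in range(n1) for j in range(n2)) / (n1 * n2)
    return mse, 10 * math.log10(peak ** 2 / mse)

mse, psnr = mse_psnr([[0, 0], [0, 0]], [[16, 16], [16, 16]])
print(mse)  # -> 256.0
```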

CHAPTER 5
EXPERIMENTAL RESULTS

This project develops MATLAB code for denoising an image using the EZW
algorithm, which was primarily designed for image compression. An image is taken and
different noise levels are applied to it. Depending on the noise level, the threshold is chosen
and the algorithm is applied to the image. The noise level is quantified by the signal-to-noise
ratio (SNR).
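The SNR used here is the ratio of signal power to noise power in decibels. A short Python sketch of the computation (illustrative; it mirrors the ratio sum(X.^2)/sum(y.^2) evaluated in the MATLAB main program in the appendix):

```python
import math

def snr_db(signal, noise):
    # SNR = 10*log10(signal power / noise power), with power taken as the
    # sum of squared samples over the whole image.
    p_signal = sum(s * s for s in signal)
    p_noise = sum(n * n for n in noise)
    return 10 * math.log10(p_signal / p_noise)

print(snr_db([10.0] * 100, [1.0] * 100))  # -> 20.0
```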
First, a sample image named 'catherine' is taken and noise is added to it (SNR = 8.2803 dB).
After applying the algorithm, the results are shown below: the original image, the noised
image (after adding noise), the denoised image (after applying the algorithm), and the PSNR
of the noised and of the denoised image relative to the original.

(a)Original Image (b)Noised Image (c)Denoised Image


Fig.5.1: Catherine1

SNR = 8.2803 dB
Original to noised PSNR = 14.13 dB
Original to denoised PSNR = 20.45 dB

The image 'catherine' is taken again with the noise level changed to SNR = 10.2465 dB.
After applying the algorithm, the results are shown below in the same format.

Original Image Noised Image Denoised Image


Fig.5.2: Catherine2

SNR = 10.2465 dB
Original to noised PSNR = 16.10 dB
Original to denoised PSNR = 23.24 dB

Another sample image named 'woman' is taken and noise is added to it (SNR = 8.2135 dB).
After applying the algorithm, the results are shown below in the same format.

Original Image Noised Image Denoised Image


Fig.5.3: Woman1

SNR = 8.2135 dB
Original to noised PSNR = 14.18 dB
Original to denoised PSNR = 17.53 dB

The image 'woman' is taken again with the noise level changed to SNR = 6.6298 dB. After
applying the algorithm, the results are shown below in the same format.

Original Image Noised Image Denoised Image


Fig.5.4: Woman2

SNR = 6.6298 dB
Original to noised PSNR = 12.55 dB
Original to denoised PSNR = 15.57 dB

Results:

IMAGE        SNR           Original to Noised PSNR   Original to Denoised PSNR
Catherine1   8.2803 dB     14.13 dB                  20.45 dB
Catherine2   10.2465 dB    16.10 dB                  23.24 dB
Woman1       8.2135 dB     14.18 dB                  17.53 dB
Woman2       6.6298 dB     12.55 dB                  15.57 dB

Fig.5.5: Results

CHAPTER 6

CONCLUSION

Recent research in image processing has given wavelet image coding considerable attention,
owing to its high performance at a reasonable image quality. A major breakthrough was
achieved by the wavelet transform combined with subband classification and zerotree
quantization. EZW is one such compression technique. Denoising is a further use of the
EZW algorithm, in addition to its merits as an excellent compression technique.
The results show clearly that the denoised image has a higher PSNR than the noised image;
the PSNR is improved by almost 5 dB on average. The algorithm can therefore be used in
applications that need variable levels of smoothing depending on requirements. The effect is
similar to low-pass filtering, in which the high-frequency content, mostly noise along with
some negligible image detail, is eliminated.
When EZW was compared with median filtering, the image denoised with EZW showed
noticeably better reconstruction quality than the one denoised with median filtering. It can
be concluded that this algorithm can be used efficiently for compression and denoising at
the same time.

MATLAB CODE FOR IMPLEMENTATION
Main Program
function func_ezw_demo_main
%
% main program
%
clear all; close all; clc;

fprintf('----------- Load Image ----------------\n');

load catherine;
% Load original image.
figure(1);
image(X);
title('Original Image')
colormap(map);
y=40*randn(256,256);
s=X+y;
figure(2);
image(s);
title('Noised Image')
colormap(map);
u=reshape(X,1,256*256);
u=u.^2;
v=reshape(y,1,256*256);
v=v.^2;
SNR=10*log10(sum(u)/sum(v))
Q = 255;
MSE = sum(sum((s-X).^2))/size(X,1)/size(X,2);
fprintf('The psnr performance is %.2f dB\n', 10*log10(Q*Q/MSE));
img_orig=s;
fprintf('done!\n');
fprintf('----------- Wavelet Decomposition ----------------\n');
n = size(img_orig, 1);
n_log = log2(n);
level =n_log ;
type = 'bior4.4'; %wavelet basis
[Lo_D,Hi_D,Lo_R,Hi_R] = wfilters(type);

%img_wavedata: wavelet coefficients of the input image


[img_wavedata, S] = func_DWT(img_orig, level, Lo_D, Hi_D);
figure(3);
image(img_wavedata);
title('dwt')
colormap(map);
fprintf('done!\n');
fprintf('----------- EZW Encoding ----------------\n');
ezw_encoding_threshold = 100;

[img_enc_significance_map, img_enc_refinement] = ...
    func_ezw_enc(img_wavedata, ezw_encoding_threshold);

fprintf('done!\n');
fprintf('----------- EZW Decoding ----------------\n');
threshold = pow2(floor(log2(max(max(abs(img_wavedata))))));

img_wavedata_dec = func_ezw_dec(n, threshold, img_enc_significance_map, ...
    img_enc_refinement);
figure(4);
image(img_wavedata_dec);
title('idwt')
colormap(map);

fprintf('done!\n');
fprintf('----------- Inverse Wavelet Decomposition ----------------\n');

img_reconstruct = func_InvDWT(img_wavedata_dec, S, Lo_R, Hi_R, level);


figure(5);
image(img_reconstruct);
title('final')
colormap(map);

fprintf('done!\n');
fprintf('----------- Performance ----------------\n');

Q = 255;
MSE = sum(sum((img_reconstruct-X).^2))/size(X,1)/size(X,2);
fprintf('The psnr performance is %.2f dB\n', 10*log10(Q*Q/MSE));

Wavelet Decomposition:

function [I_W , S] = func_DWT(I, level, Lo_D, Hi_D);

% input: I : input image


% level : wavelet decomposition level
% Lo_D : low-pass decomposition filter
% Hi_D : high-pass decomposition filter
%
% output: I_W : decomposed image vector
% S : corresponding bookkeeping matrix

[C,S] = func_Mywavedec2(I,level,Lo_D,Hi_D);

S(:,3) = S(:,1).*S(:,2); % dimensions of the detail coefficient matrices


L = length(S);

I_W = zeros(S(L,1),S(L,2));

% approx part
I_W( 1:S(1,1) , 1:S(1,2) ) = reshape(C(1:S(1,3)),S(1,1:2));

for k = 2 : L-1
rows = [sum(S(1:k-1,1))+1:sum(S(1:k,1))];
columns = [sum(S(1:k-1,2))+1:sum(S(1:k,2))];
% horizontal part
c_start = S(1,3) + 3*sum(S(2:k-1,3)) + 1;
c_stop = S(1,3) + 3*sum(S(2:k-1,3)) + S(k,3);
I_W( 1:S(k,1) , columns ) = reshape( C(c_start:c_stop) , S(k,1:2) );

% vertical part
c_start = S(1,3) + 3*sum(S(2:k-1,3)) + S(k,3) + 1;
c_stop = S(1,3) + 3*sum(S(2:k-1,3)) + 2*S(k,3);
I_W( rows , 1:S(k,2) ) = reshape( C(c_start:c_stop) , S(k,1:2) );

% diagonal part
c_start = S(1,3) + 3*sum(S(2:k-1,3)) + 2*S(k,3) + 1;
c_stop = S(1,3) + 3*sum(S(2:k,3));
I_W( rows , columns ) = reshape( C(c_start:c_stop) , S(k,1:2) );

end

function [c,s] = func_Mywavedec2(x,n,varargin)


% For [C,S] = WAVEDEC2(X,N,Lo_D,Hi_D),
% Lo_D is the decomposition low-pass filter and
% Hi_D is the decomposition high-pass filter.
%
% The output wavelet 2-D decomposition structure [C,S]
% contains the wavelet decomposition vector C and the
% corresponding bookkeeping matrix S.

if errargn(mfilename,nargin,[3:4],nargout,[0:2]), error('*'), end


if errargt(mfilename,n,'int'), error('*'), end
if nargin==3
[Lo_D,Hi_D] = wfilters(varargin{1},'d');
else
Lo_D = varargin{1}; Hi_D = varargin{2};
end

% Initialization.
s = [size(x)];
c = [];
for i=1:n
[x,h,v,d] = dwt2(x,Lo_D,Hi_D,'mode','per'); % decomposition
c = [h(:)' v(:)' d(:)' c]; % store details
s = [size(x);s]; % store size

end

% Last approximation.
c = [x(:)' c];
s = [size(x);s];

EZW Encoding

function [significance_map, refinement] = ...
    func_ezw_enc(img_wavedata, ezw_encoding_threshold);
% img_wavedata: wavelet coefficients to encode
% ezw_encoding_threshold: determines where to stop encoding
% significance_map: a string matrix containing significance data for the
%   different passes ('p','n','z','t'); each row holds one scanning pass
% refinement: a string matrix containing refinement data for the different
%   passes ('0' or '1'); each row holds one scanning pass
%

subordinate_list = [];
refinement = [];
significance_map = [];
img_wavedata_save = img_wavedata;
img_wavedata_mat = img_wavedata;

% Morton scan order


n = size(img_wavedata,1);
scan = func_morton([0:(n*n)-1],n);

% Initial threshold
init_threshold = pow2(floor(log2(max(max(abs(img_wavedata))))))
threshold = init_threshold;

while (threshold >= ezw_encoding_threshold)

[str, list, img_wavedata] = func_dominant_pass(img_wavedata, threshold, scan);


significance_map = strvcat(significance_map, char(str));

if(threshold == init_threshold),
subordinate_list = list;
else
subordinate_list = func_rearrange_list(subordinate_list, list, scan, img_wavedata_save);
end
[encoded, subordinate_list] = func_subordinate_pass(subordinate_list, threshold);
refinement = strvcat(refinement, strrep(num2str(encoded), ' ', ''));

threshold = threshold / 2;
end
significance_map
refinement

Morton Scan
function scan = func_morton(pos,n);

bits = log2(n*n); % number of bits needed to represent position


bin = dec2bin(pos(:),bits); % convert position to binary

scan = [bin2dec(bin(:,1:2:bits-1)), bin2dec(bin(:,2:2:bits))];

Dominant Pass
function [signif_map, subordinate_list, data] = ...
    func_dominant_pass(img_wavedata, threshold, scan);
data = img_wavedata;
dim = size(img_wavedata,1);

signif_map = [];
signif_index = 1;

subordinate_list = [];
subordinate_index = 1;

for element = 1:dim*dim;


row = scan(element,1)+1;
column = scan(element,2)+1;
% to check whether element should be processed
if(~isnan(data(row, column)) & data(row, column) < realmax),

if(data(row,column) >= threshold),


signif_map(signif_index) = 'p';
signif_index = signif_index + 1;

subordinate_list(1, subordinate_index) = data(row, column);

subordinate_list(2, subordinate_index) = threshold+threshold/2;


subordinate_index= subordinate_index + 1;

data(row, column) = 0;

elseif(data(row,column) <= -threshold), signif_map(signif_index) = 'n';


signif_index = signif_index+ 1;

subordinate_list(1, subordinate_index) = data(row, column);


subordinate_list(2, subordinate_index) = -threshold - threshold/2;
subordinate_index= subordinate_index + 1;

data(row, column) = 0;

else % determine whether the element is a zerotree root
    if(row < dim/2 | column < dim/2),
        % element has descendants: compare its whole tree to the threshold
        mask = func_treemask(row,column,dim);
        masked = data .* mask;
        if(isempty(find(abs(masked) >= threshold))),
            % element is a zerotree root
            signif_map(signif_index) = 't';
            signif_index = signif_index + 1;
            % mark the whole tree so it is skipped during this pass
            data = data + (mask*realmax);
        else % element is an isolated zero
            signif_map(signif_index) = 'z';
            signif_index = signif_index + 1;
        end
    else % finest level: no descendants to check
        if(abs(data(row, column)) < threshold),
            signif_map(signif_index) = 't';
            signif_index = signif_index + 1;
            data(row, column) = realmax;
        else
            signif_map(signif_index) = 'z';
            signif_index = signif_index + 1;
        end
    end
end
end
end

index = find(data == realmax);


data(index) = img_wavedata(index);

func_treemask:

function mask = func_treemask(x,y,dim);

% x, y is the position in the matrix of the node where the EZW tree
% should start; top left is x=1 y=1
% dim is the dimension of the mask (should be the same dimension as
% the wavelet data)

mask = zeros(dim);

x_min = x;
x_max = x;
y_min = y;
y_max = y;

while(x_max <= dim & y_max <= dim),


mask(x_min:x_max, y_min:y_max) = 1;

% calculate new subset


x_min = 2*x_min - 1;
x_max = 2*x_max;
y_min = 2*y_min - 1;
y_max = 2*y_max;
end

Rearrange List
function subordinate_list = func_rearrange_list(orig_list, add_list, scan, wavedata);

subordinate_list = [];
o_index = 1; % index original_list
a_index = 1; % index add_list

for element = 1:size(scan,1),


row = scan(element,1)+1;
column = scan(element,2)+1;

if(size(orig_list,2) >= o_index & wavedata(row, column) == orig_list(1,o_index)),


subordinate_list = [subordinate_list orig_list(:,o_index)];
o_index = o_index + 1;

elseif(size(add_list,2) >= a_index & wavedata(row, column) == add_list(1,a_index)),
subordinate_list = [subordinate_list add_list(:,a_index)];
a_index = a_index + 1;
end
end

Subordinate Pass
function [encoded, subordinate_list] = func_subordinate_pass(subordinate_list, threshold);
% subordinate_list: current subordinate list containing coefficients
% threshold: current threshold to use when comparing
% encoded: matrix containing 0's and 1's for refinement of the subordinate list
% subordinate_list: new subordinate list (second row contains the reconstruction values)

encoded = zeros(1,size(subordinate_list,2));
encoded(find(abs(subordinate_list(1,:)) > abs(subordinate_list(2,:)))) = 1;

% update subordinate_list(2,:)
for i = 1:length(encoded),
if(encoded(i) == 1),
if(subordinate_list(1,i) > 0),
subordinate_list(2,i) = subordinate_list(2,i) + threshold/4;
else
subordinate_list(2,i) = subordinate_list(2,i) - threshold/4;
end
else
if(subordinate_list(1,i) > 0),
subordinate_list(2,i) = subordinate_list(2,i) - threshold/4;
else
subordinate_list(2,i) = subordinate_list(2,i) + threshold/4;
end
end
end

EZW Decoder:

function img_wavedata_dec = func_ezw_dec(dim, threshold, significance_map, ...
    refinement);
% dim: dimension of the wavelet matrix to reconstruct
% threshold: initial threshold used while encoding
% significance_map: a string matrix containing significance data
% refinement: a string matrix containing refinement data
% img_wavedata_dec: reconstructed wavelet coefficients

img_wavedata_dec = zeros(dim,dim);

scan = func_morton([0:(dim*dim)-1],dim);

% number of passes in the significance map (and refinement data)
steps = size(significance_map,1);

for step = 1:steps,


%to decode significance map
img_wavedata_dec = func_decode_significancemap(img_wavedata_dec, ...
    significance_map(step,:), threshold, scan);

img_wavedata_dec = func_decode_refine(img_wavedata_dec, refinement(step,:), ...
    threshold, scan);

threshold = threshold/2;

end

func_decode_significancemap:

function img_wavedata_dec = func_decode_significancemap(img_wavedata_dec, ...
    significance_map, threshold, scan);

% img_wavedata_dec: input wavelet coefficients


% significance_map: string containing the significance map ('p','n','z' and 't')
% threshold: threshold to use during this decoding pass (dominant pass)
% scan: scan order to use (Morton)
% img_wavedata_dec: the decoded wavelet coefficients

backup = img_wavedata_dec;

n = size(img_wavedata_dec,1);
index = 1;

for element = 1:n*n;


%to get matrix index for element
row = scan(element,1)+1;
column = scan(element,2)+1;

%to check whether element should be processed


if(isfinite(img_wavedata_dec(row, column))),

%to determine type of element


if(significance_map(index) == 'p'),
img_wavedata_dec(row, column) = threshold + threshold/2;
elseif(significance_map(index) == 'n'),
img_wavedata_dec(row, column) = -threshold - threshold/2;
elseif(significance_map(index) == 'z'),
img_wavedata_dec(row, column) = 0;
else
img_wavedata_dec(row, column) = 0;

mask = func_treemask_inf(row,column,n);
img_wavedata_dec = img_wavedata_dec + mask;
end
index = index + 1;
end
end

img_wavedata_dec(find(img_wavedata_dec > realmax)) = 0;

img_wavedata_dec = img_wavedata_dec + backup;

func_decode_refine:

function img_wavedata_dec = func_decode_refine(img_wavedata_dec, refinement, ...
    threshold, scan);
% img_wavedata_dec: input wavelet coefficients
% refinement: string containing refinement data ('0' and '1')
% threshold: threshold to use during this refinement pass
% scan: scan order to use (Morton)
% img_wavedata_dec: the refined wavelet coefficients

n = size(img_wavedata_dec,1);
index = 1;

for element = 1:n*n;


% get matrix index for element
row = scan(element,1)+1;
column = scan(element,2)+1;

if(img_wavedata_dec(row, column) ~= 0),


ref = refinement(index);

if(ref == '1'),
if(img_wavedata_dec(row, column) > 0),
img_wavedata_dec(row, column) = img_wavedata_dec(row, column) + threshold/4;
else
img_wavedata_dec(row, column) = img_wavedata_dec(row, column) - threshold/4;
end
else
if(img_wavedata_dec(row, column) > 0),
img_wavedata_dec(row, column) = img_wavedata_dec(row, column) - threshold/4;
else
img_wavedata_dec(row, column) = img_wavedata_dec(row, column) + threshold/4;
end
end
index = index + 1;
end
end

func_treemask_inf:

function mask = func_treemask_inf(x, y, dim);

mask = zeros(dim);

x_min = x;
x_max = x;
y_min = y;
y_max = y;

while(x_max <= dim & y_max <= dim),


mask(x_min:x_max, y_min:y_max) = inf;

% calculate new subset


x_min = 2*x_min - 1;
x_max = 2*x_max;
y_min = 2*y_min - 1;
y_max = 2*y_max;
end

IDWT
function im_rec = func_InvDWT(I_W, S, Lo_R, Hi_R, level);
% input: I_W : decomposed image vector
% S : corresponding bookkeeping matrix
% Lo_R : low-pass reconstruction filter
% Hi_R : high-pass reconstruction filter
% level : wavelet decomposition level
% output: im_rec : reconstructed image

L = length(S);

m = I_W;

C1 = zeros(1,S(1,3)+3*sum(S(2:L-1,3)));

% approx part
C1(1:S(1,3)) = reshape( m( 1:S(1,1) , 1:S(1,2) ), 1 , S(1,3) );

for k = 2:L-1
rows = [sum(S(1:k-1,1))+1:sum(S(1:k,1))];
columns = [sum(S(1:k-1,2))+1:sum(S(1:k,2))];
% horizontal part
c_start = S(1,3) + 3*sum(S(2:k-1,3)) + 1;
c_stop = S(1,3) + 3*sum(S(2:k-1,3)) + S(k,3);
C1(c_start:c_stop) = reshape( m( 1:S(k,1) , columns ) , 1, c_stop-c_start+1);
% vertical part
c_start = S(1,3) + 3*sum(S(2:k-1,3)) + S(k,3) + 1;
c_stop = S(1,3) + 3*sum(S(2:k-1,3)) + 2*S(k,3);
C1(c_start:c_stop) = reshape( m( rows , 1:S(k,2) ) , 1 , c_stop-c_start+1 );
% diagonal part
c_start = S(1,3) + 3*sum(S(2:k-1,3)) + 2*S(k,3) + 1;
c_stop = S(1,3) + 3*sum(S(2:k,3));
C1(c_start:c_stop) = reshape( m( rows , columns ) , 1 , c_stop-c_start+1);
end

if (( L - 2) > level)
temp = zeros(1, length(C1) - (S(1,3)+3*sum(S(2:(level+1),3))));
C1(S((level+2),3)+1 : length(C1)) = temp;
end

S(:,3) = [];

im_rec = func_Mywaverec2(C1,S, Lo_R, Hi_R);

func_Mywaverec2:

function x = func_Mywaverec2(c,s,varargin)
if errargn(mfilename,nargin,[3:4],nargout,[0:1]), error('*'), end

x = func_Myappcoef2(c,s,varargin{:},0);

func_Myappcoef2:

function a = func_Myappcoef2(c,s,varargin)
if errargn(mfilename,nargin,[3:5],nargout,[0:1]), error('*'), end
rmax = size(s,1);
nmax = rmax-2;
if ischar(varargin{1})
[Lo_R,Hi_R] = wfilters(varargin{1},'r'); next = 2;
else
Lo_R = varargin{1}; Hi_R = varargin{2}; next = 3;

end
if nargin>=(2+next) , n = varargin{next}; else, n = nmax; end

if (n<0) | (n>nmax) | (n~=fix(n))


errargt(mfilename,'invalid level value','msg'); error('*');
end

nl = s(1,1);
nc = s(1,2);
a = zeros(nl,nc);
a(:) = c(1:nl*nc);

rm = rmax+1;
for p=nmax:-1:n+1
[h,v,d] = detcoef2('all',c,s,p);
a = idwt2(a,h,v,d,Lo_R,Hi_R,s(rm-p,:),'mode','per');
end

REFERENCES

• Shapiro, J. M.: Embedded Image Coding Using Zerotrees of Wavelet Coefficients.
IEEE Transactions on Signal Processing, Vol. 41, pp. 3445–3462, 1993.

• Gonzalez, R. C., Woods, R. E.: Digital Image Processing. Pearson Education.

• Donoho, D. L.: De-noising by Soft-thresholding. IEEE Transactions on Information
Theory, Vol. 41, No. 3, pp. 613–627, 1995.

• Antonini, M., Barlaud, M., Mathieu, P., Daubechies, I.: Image Coding Using the
Wavelet Transform. IEEE Transactions on Image Processing, pp. 205–220, 1992.

• Vetterli, M., Herley, C.: Wavelets and Filter Banks: Theory and Design. IEEE
Transactions on Signal Processing, Vol. 40, pp. 2207–2232, September 1992.

