1. Introduction:
The term digital image processing refers to the processing of a two-dimensional picture by a digital computer. In a broader context, it implies digital processing of any two-dimensional data. A digital image is an array of real or complex numbers represented by a finite number of bits. An image given in the form of a transparency, slide, photograph or X-ray is first digitized and stored as a matrix of binary digits in computer memory. This digitized image can then be processed and/or displayed on a high-resolution television monitor. For display, the image is stored in a rapid-access buffer memory, which refreshes the monitor at a rate of 25 frames per second to produce a visually continuous display.

1.1 The Image Processing System:
A typical digital image processing system is given in Fig. 1.1. As detailed in the diagram, the first step in the process is image acquisition by an imaging sensor in conjunction with a digitizer that digitizes the image. The next step is preprocessing, in which the image is improved before being fed as input to the other processes; preprocessing typically deals with enhancement, noise removal, isolation of regions, etc. Segmentation partitions an image into its constituent parts or objects. The output of segmentation is usually raw pixel data, consisting of either the boundary of a region or the pixels within the region itself. Representation is the process of transforming the raw pixel data into a form useful for subsequent processing by the computer. Description deals with extracting features that are basic to differentiating one class of objects from another. Recognition assigns a label to an object based on the information provided by its descriptors. Interpretation involves assigning meaning to an ensemble of recognized objects. Knowledge about the problem domain is incorporated into the knowledge base, which guides the operation of each processing module and also controls the interaction between the modules. Not all modules need necessarily be present for a specific function; the composition of the image processing system depends on its application.

The frame rate of the image processor is normally around 25 frames per second.
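The stages listed above can be pictured as a chain of operations applied in sequence. The minimal Python sketch below is only an illustration of that chain under simplifying assumptions: the input is a synthetic image standing in for acquisition and digitization, segmentation is a plain global threshold, and the descriptors and recognition rule are invented for the example.

```python
import numpy as np
from scipy import ndimage

def acquire():
    # Stand-in for image acquisition and digitization: a synthetic 8-bit image
    # with two bright rectangles on a noisy dark background.
    rng = np.random.default_rng(0)
    img = rng.normal(40, 8, size=(128, 128))
    img[20:50, 30:60] += 120      # large object
    img[80:95, 90:105] += 120     # small object
    return np.clip(img, 0, 255).astype(np.uint8)

def preprocess(img):
    # Noise reduction with a small median filter.
    return ndimage.median_filter(img, size=3)

def segment(img):
    # Global threshold: pixels well above the background become foreground.
    return img > 128

def describe(mask):
    # Label connected regions and compute a basic descriptor (area) for each.
    labels, n = ndimage.label(mask)
    return ndimage.sum(mask, labels, index=range(1, n + 1))

def recognize(areas):
    # Assign a label to each region based on its descriptor.
    return ["large object" if a > 500 else "small object" for a in areas]

image = acquire()
mask = segment(preprocess(image))
print(recognize(describe(mask)))   # expected: ['large object', 'small object']
```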
1.1.3 Digital Computer:
Mathematical processing of the digitized image, such as convolution, averaging, addition, subtraction, etc., is done by the computer.

1.1.4 Mass Storage
The secondary storage devices normally used are floppy disks, CD-ROMs, etc.

1.1.5 Hard Copy Device
The hard copy device is used to produce a permanent copy of the image and to store the software involved.

1.1.6 Operator Console
The operator console consists of equipment and arrangements for the verification of intermediate results and for alterations in the software as and when required. The operator is also able to check for any resulting errors and to enter the requisite data.

2. Image Processing Fundamentals:
2.1 Introduction
Digital image processing refers to the processing of an image in digital form. Modern cameras may capture the image directly in digital form, but images generally originate in optical form; they are captured by video cameras and digitized. The digitization process consists of sampling and quantization. The digitized images are then processed by at least one, though not necessarily all, of the five fundamental classes of operations described below.
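As a rough illustration of the two parts of digitization, the NumPy sketch below fakes a continuous scene with a densely evaluated intensity pattern (an assumption made purely for demonstration), samples it on a coarser grid, and quantizes the samples to a small number of grey levels.

```python
import numpy as np

# Stand-in for a continuous scene: a smooth intensity pattern on a fine grid.
x = np.linspace(0, 1, 1024)
scene = np.outer(np.sin(2 * np.pi * 3 * x) * 0.5 + 0.5,
                 np.cos(2 * np.pi * 2 * x) * 0.5 + 0.5)

# Sampling: keep every 8th point in each direction (spatial discretization).
sampled = scene[::8, ::8]

# Quantization: map each sample to one of 2**bits grey levels (amplitude discretization).
bits = 4
levels = 2 ** bits
digital = np.round(sampled * (levels - 1)).astype(np.uint8)

print(scene.shape, "->", digital.shape, "with", levels, "grey levels")
```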
2.2 Image Processing Techniques:
This section gives an overview of the various image processing techniques, summarized in Fig. 2.2.1.

Fig 2.2.1: Image processing techniques

2.2.1 Image Enhancement
Image enhancement operations improve the quality of an image, for example by improving its contrast and brightness characteristics, reducing its noise content, or sharpening its details. Enhancement only makes the information already in the image more apparent; it does not add any information to it.
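A typical enhancement of the contrast-and-brightness kind is a linear contrast stretch. The sketch below is a generic illustration (not a method prescribed by the text) that remaps the occupied intensity range of an 8-bit image onto the full 0–255 range.

```python
import numpy as np

def contrast_stretch(img):
    """Linearly remap the image's intensity range [lo, hi] onto [0, 255]."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:                     # flat image: nothing to stretch
        return img.astype(np.uint8)
    return ((img - lo) / (hi - lo) * 255.0).astype(np.uint8)

# A dull, low-contrast test image occupying only the range 100..160.
dull = np.random.default_rng(1).integers(100, 161, size=(64, 64), dtype=np.uint8)
print(dull.min(), dull.max())                                         # e.g. 100 160
print(contrast_stretch(dull).min(), contrast_stretch(dull).max())     # 0 255
```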
2.2.2 Image Restoration
Image restoration, like enhancement, improves the quality of an image, but its operations are based on known or measured degradations of the original image. Restoration is used to repair images suffering from problems such as geometric distortion, improper focus, repetitive noise, and camera motion; in other words, it corrects images for known degradations.
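Because the degradation is assumed known, it can be inverted explicitly. The sketch below is a generic frequency-domain (Wiener-style) deconvolution, given as an assumed example rather than the text's own method: it blurs an image with a known kernel and then restores it using that same kernel.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((128, 128))

# Known degradation: a 5x5 uniform (box) blur, applied as a circular convolution.
kernel = np.ones((5, 5)) / 25.0
psf = np.zeros_like(image)
psf[:5, :5] = kernel
psf = np.roll(psf, (-2, -2), axis=(0, 1))   # centre the point-spread function for the FFT
H = np.fft.fft2(psf)                        # transfer function of the known degradation
blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * H))

# Wiener-style restoration: invert the known transfer function with a small regulariser.
K = 1e-3                                    # assumed noise-to-signal ratio
restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) * np.conj(H) / (np.abs(H) ** 2 + K)))

print("mean error after blur:       ", np.abs(image - blurred).mean())
print("mean error after restoration:", np.abs(image - restored).mean())
```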
2.2.3 Image Analysis
Image analysis operations produce numerical or graphical information based on the characteristics of the original image. They break the image into objects and then classify them, relying on image statistics. Common operations are the extraction and description of scene and image features, automated measurements, and object classification. Image analysis is mainly used in machine vision applications.

2.2.4 Image Compression
Image compression and decompression reduce the data content necessary to describe the image. Most images contain a great deal of redundant information, and compression removes these redundancies. Because the compressed image is smaller, it can be stored or transmitted efficiently, and it is decompressed when it is to be displayed. Lossless compression preserves the exact data of the original image, whereas lossy compression does not represent the original image exactly but achieves much higher compression.
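The difference between the two families can be demonstrated with ordinary general-purpose tools. In the sketch below, NumPy and zlib stand in for a real image codec (an assumption for illustration): the raw bytes are compressed losslessly, and a crude lossy step, coarse requantization, shrinks the coded size further at the cost of exact reconstruction.

```python
import zlib
import numpy as np

# A smooth synthetic 8-bit image; smooth data is highly redundant and compresses well.
x = np.linspace(0, 1, 256)
image = (np.outer(x, x) * 255).astype(np.uint8)

# Lossless: compress the raw bytes; decompression restores them exactly.
lossless = zlib.compress(image.tobytes())
decoded = np.frombuffer(zlib.decompress(lossless), dtype=np.uint8).reshape(image.shape)
assert np.array_equal(decoded, image)

# Crude lossy step: requantize to 16 grey levels before the lossless coder.
coarse = (image // 16) * 16
lossy = zlib.compress(coarse.tobytes())

print(image.nbytes, len(lossless), len(lossy))   # raw > lossless > lossy (in bytes)
```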
2.2.5 Image Synthesis
Image synthesis operations create images from other images or from non-image data. They generally create images that are either physically impossible or impractical to acquire.

2.3 Applications
Digital image processing has a broad spectrum of applications, such as remote sensing via satellites and other spacecraft, image transmission and storage for business applications, medical image processing, radar, sonar and acoustic image processing, robotics, and automated inspection of industrial parts.
Introduction:
The prevailing engineering practice of image/video compression usually starts with a dense 2-D sample grid of pixels. Compression is done by transforming the spatial image signal into a space (e.g., a space of Fourier or wavelet bases) in which the image has a sparse representation, and by entropy coding of the transform coefficients. Recently, researchers in the emerging field of compressive sensing introduced a new method, called the "oversampling followed by massive dumping" approach. They showed, quite surprisingly, that it is possible, at least theoretically, to obtain a compact signal representation from a greatly reduced number of random samples.
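For contrast with the sparse-sampling idea, the sketch below recalls what the conventional transform-coding step looks like. It is a generic illustration using SciPy's DCT routines, not the codec used in this project: a smooth block is transformed into a domain where it is sparse, and only the few significant coefficients that an entropy coder would spend bits on are kept.

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
# A smooth 8x8 block: most of its energy ends up in a few low-frequency DCT coefficients.
x = np.linspace(0, 1, 8)
block = np.outer(x, x) + 0.01 * rng.standard_normal((8, 8))

coeffs = dctn(block, norm="ortho")                 # transform to a sparse representation
kept = np.where(np.abs(coeffs) > 0.05, coeffs, 0)  # dump the insignificant coefficients
reconstruction = idctn(kept, norm="ortho")         # decoder side: inverse transform

print(np.count_nonzero(kept), "of 64 coefficients kept")
print("max reconstruction error:", np.abs(block - reconstruction).max())
```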
This project investigates the problem of compact image representation using an approach of sparse sampling in the spatial domain. The fact that most natural images have an exponentially decaying power spectrum suggests the possibility of an interpolation-based compact representation of images. A typical scene contains predominantly smooth regions that can be satisfactorily interpolated from a sparsely sampled low-resolution image. The difficulty is with the reconstruction of high-frequency contents. Of particular importance is the faithful reconstruction of edges without large phase errors, which are detrimental to the perceptual quality of a decoded image. To address these difficulties, we adopt a new image compression methodology of collaborative adaptive down-sampling and up-conversion (CADU).

Scope Of The Project:
The main objective is to propose a new, standard-compliant approach to coding uniformly down-sampled images which outperforms JPEG 2000 in both PSNR and visual quality at low to modest bit rates, by using a novel up-conversion process of least-squares noncausal predictive decoding, constrained by adaptive directional low-pass prefiltering. The premise is that a lower sampling rate can actually produce higher quality images at certain bit rates.

Module:
Module 1: Decompression of the low-resolution image.
Module 2: Up-conversion of the image to its original resolution by PAR.
Module 3: Reversal of the directional low-pass prefiltering operation of the encoder.

Module Description:
The CADU decoder first decompresses the low-resolution image and then upconverts it to the original resolution in a constrained least squares restoration process, using a 2-D piecewise autoregressive (PAR) model and reversing the directional low-pass prefiltering operation of the encoder. Two-dimensional autoregressive modeling is a known, effective technique of predictive image coding; for the CADU decoder, the PAR model plays the role of an adaptive noncausal predictor. The CADU approach is novel in that the predictor is used only at the decoder side, and the noncausal predictive decoding is performed in collaboration with the prefiltering of the encoder.
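The core numerical tool in this restoration is a least-squares fit of a 2-D autoregressive model to local image data. The sketch below is a simplified illustration, not the full constrained CADU restoration: it fits a hypothetical 4-coefficient PAR model (each pixel predicted from its four diagonal neighbours) over a single window and then uses it as a noncausal predictor.

```python
import numpy as np

def fit_par(window):
    """Least-squares fit of a 4-coefficient PAR model: each interior pixel is
    predicted from its four diagonal neighbours."""
    h, w = window.shape
    rows, targets = [], []
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            rows.append([window[i - 1, j - 1], window[i - 1, j + 1],
                         window[i + 1, j - 1], window[i + 1, j + 1]])
            targets.append(window[i, j])
    A, y = np.array(rows), np.array(targets)
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)   # least-squares PAR coefficients
    return coeffs

def par_predict(window, coeffs):
    """Noncausal prediction of the interior pixels from the fitted coefficients."""
    pred = window.astype(float).copy()
    for i in range(1, window.shape[0] - 1):
        for j in range(1, window.shape[1] - 1):
            neigh = [window[i - 1, j - 1], window[i - 1, j + 1],
                     window[i + 1, j - 1], window[i + 1, j + 1]]
            pred[i, j] = float(np.dot(coeffs, neigh))
    return pred

# A smooth 8x8 test window (an oriented ramp), on which the PAR fit predicts well.
i, j = np.mgrid[0:8, 0:8]
window = 2.0 * i + 3.0 * j
coeffs = fit_par(window)
pred = par_predict(window, coeffs)
print("PAR coefficients:", np.round(coeffs, 3))
print("max interior prediction error:", np.abs(pred - window)[1:-1, 1:-1].max())
```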
Module 1:
The CADU image compression technique, although operating on down-sampled images, obtains some of the best PSNR results and visual quality at low to medium bit rates. CADU outperforms the JPEG 2000 standard, even though the latter is fed images of higher resolution and is widely regarded as an excellent low bit-rate image codec. Since the down-sampled image has the conventional form of a square pixel grid and can be fed directly to any existing image codec, standard or proprietary, and since the up-conversion process is entirely up to the decoder, the proposed CADU coding approach can work in tandem with any third-party image/video compression technique. This flexibility makes standard compliance a non-issue for the new CADU method. We envision CADU becoming a useful enhancer of any existing image compression standard for improved low bit-rate performance.

We obtain a more compact representation of an image by decimating every other row and every other column of the image. This simple approach has the operational advantage that the down-sampled image remains a uniform rectilinear grid of pixels and can readily be compressed by any existing international image coding standard. To prevent the down-sampling process from causing aliasing artifacts, it seems necessary to low-pass prefilter the input image to half of its maximum frequency. On second reflection, however, one can do somewhat better: in areas of edges, the 2-D spectrum of the local image signal is not isotropic. Thus, we seek to perform adaptive sampling, within the uniform down-sampling framework, by judiciously smoothing the image with directional low-pass prefiltering prior to down-sampling.
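The non-adaptive baseline of this encoder-side step is easy to picture: low-pass the image to half its bandwidth, then keep every other pixel in each direction. In the sketch below a Gaussian filter is used merely as a convenient stand-in for the half-band prefilter (an assumption), and the directional, edge-adaptive variant is not shown.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
image = rng.random((256, 256))

# Anti-alias prefilter: suppress frequencies above roughly half the original band.
# (A Gaussian is only an approximation of an ideal half-band low-pass filter.)
prefiltered = ndimage.gaussian_filter(image, sigma=1.0)

# Uniform down-sampling: keep every other row and every other column.
low_res = prefiltered[::2, ::2]

print(image.shape, "->", low_res.shape)    # (256, 256) -> (128, 128)
```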
To realise such directional prefiltering, we design a family of 2-D directional low-pass prefilters under the criterion of preserving the maximum 2-D bandwidth without the risk of aliasing. Let WL(θ) and WH(θ) be the side lengths of the rectangular low-passed region of the 2-D filter in the low- and high-frequency directions of an edge of angle θ, respectively. The maximum area of this low-passed region without aliasing is

A = WL(θ) · WH(θ) = π².
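A quick way to see where this bound comes from (a sketch of the standard sampling argument, not a derivation taken from the text): with digital frequencies normalised to [-π, π] in each dimension, the full spectrum has area (2π)² = 4π², and decimating by two in each direction leaves room for an alias-free passband of at most a quarter of that area:

```latex
% Alias-free passband area when down-sampling by 2 in each dimension,
% with the full digital band normalised to [-\pi, \pi]^2:
A \;=\; W_L(\theta)\, W_H(\theta) \;\le\; \frac{(2\pi)^2}{2 \times 2} \;=\; \pi^2
```

Within this fixed area budget, a directional prefilter can only allot more bandwidth to one direction by giving some up in the other, which is what the adaptation to the edge angle θ exploits.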