
School of Chemical and Biomedical Engineering

Division of Bioengineering




BG3801: Bioengineering Laboratory
Year 3, Semester 1

Medical Image Processing

Location: [N1.3-B4-02]

Name: _______________________________________
Matric Number: _______________________________________
Group: _______________________________________
Date of experiment: _______________________________________

GRADE: _____________



Medical Image Processing


1. INTRODUCTION

Image processing is an important field with applications in a number of areas
(e.g., neuroimaging, surgical planning, etc.). Medical image processing generally
involves the study of feature extraction, segmentation, and classification
techniques. The aim of medical image processing is to extract important
features from the image data, from which a description, interpretation, or
understanding of the disease can be provided by the machine.

2. OBJECTIVES

In this practical you will

1. learn to use Matlab to display and manipulate medical images

2. filter medical images using high-pass filters for edge detection

3. filter medical images using low-pass filters for noise reduction

4. learn to perform windowing/leveling of medical images using Matlab.
(optional)

3. BACKGROUND

3.1 Digital Images

Digital images are made up of a matrix of pixels. In a grayscale image, each pixel
stores a value between 0 and 255 (in the case of 8-bit images), where 0 represents
black, 255 represents white, and intermediate numbers represent varying shades of gray
between black and white. In the case of grayscale medical images, each pixel is 16 bits
(each pixel can store one of 2^16 possible values). This provides medical images
with a much larger range of gray shades. The resolution of the image is defined by the
number of pixels making up the image. Hence, the larger the number of pixels
making up the image, the higher its resolution and, consequently, the more detail
the image can show.
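As a quick illustration (a minimal sketch, not part of the official lab steps; the variable name I_small is illustrative), an 8-bit grayscale image can be built and inspected directly in Matlab:

>> I_small = uint8([0 64 128; 64 128 192; 128 192 255]);  % a 3 x 3, 8-bit grayscale image
>> figure, imshow(I_small);      % 0 displays as black, 255 as white
>> class(I_small)                % uint8, i.e. 8 bits per pixel
>> whos I_small                  % shows the matrix size and storage class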

3.2 Spatial Filtering

Filters are widely used for pre-processing of digital images to remove different
kinds of noise, detect edges and boundaries, detect particular shapes in the image
etc. Filtering can be performed both in spatial and frequency domain. In this
practical, we will only focus on spatial filtering. Spatial filtering involves filtering
operations that are performed directly on the pixels of an image. The procedure
of spatial filtering is illustrated in Figure 1. The process involves moving the
filter mask from point to point in an image. A mask is defined as a small (e.g., 3 x
3) 2-D array in which the values of the mask coefficients determine the nature of
the process, such as image smoothing, edge detection, etc. For linear spatial
filtering, the response at each point (x, y) is given by a sum of products of the
filter coefficients and the corresponding image pixels in the area spanned by the
filter mask.



Figure 1. The mechanics of spatial filtering. The magnified drawing shows a 3 x 3
mask and the image section directly under it [1].

In general, linear filtering of an image f of size M x N with a filter mask of size m
x n is given by the expression:

I_filt(x, y) = \sum_{s=-a}^{a} \sum_{t=-b}^{b} w(s, t) f(x + s, y + t)        (3.1)

where a = (m - 1)/2;  b = (n - 1)/2;  x = 0, 1, 2, ..., M - 1;  y = 0, 1, 2, ..., N - 1


Referring to the equation, it is important to note that the mask processes all pixels
in the image. The process of linear spatial filtering is similar to a frequency
domain concept called convolution. For this reason, linear spatial filtering often is
referred to as convolving a mask with an image. Similarly, filter masks are
sometimes called convolution masks.
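To make Equation (3.1) concrete, the following short Matlab script is a hedged sketch (not part of the official lab steps; the 3 x 3 averaging mask and the variable names are illustrative). It applies the mask with explicit loops and notes the equivalent imfilter call:

I_demo = double(imread('cameraman.tif'));      % built-in test image
w = ones(3, 3) / 9;                            % 3 x 3 averaging mask
[m, n] = size(w);
a = (m - 1)/2;  b = (n - 1)/2;
[M, N] = size(I_demo);
Ipad = padarray(I_demo, [a b], 'replicate');   % pad borders so the mask always fits
I_filt = zeros(M, N);
for x = 1:M
    for y = 1:N
        region = Ipad(x:x + m - 1, y:y + n - 1);  % pixels under the mask
        I_filt(x, y) = sum(sum(w .* region));     % sum of products, Equation (3.1)
    end
end
% The same result (up to border handling) is produced by:
% I_filt2 = imfilter(I_demo, w, 'replicate');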

High Pass filters (Edge Detection)

Edge detection is used to detect meaningful discontinuities in the gray levels of an
image. An edge can be defined as a set of connected pixels that lie on the
boundary between two regions. Edge detection filters (also referred to as high-pass
filters) use spatial first- or second-order derivatives of the intensity function to enhance
intensity variations across an edge.

(i) First-order derivatives filters

First-order derivatives of the gray levels in a digital image can be calculated using the
gradient. The calculation of the derivatives of an image is based on various
approximations of the 2-D gradient. Edges are located at the maxima of the absolute
value of the first-order derivative. The gradient of an image f(x, y) at location (x, y)
is defined as the vector
\nabla f = [G_x, G_y]^T = [\partial f / \partial x, \partial f / \partial y]^T        (3.2)
The magnitude and direction of the gradient vector are obtained as follows:

mag(\nabla f) = \sqrt{G_x^2 + G_y^2}        (3.3)

\alpha(x, y) = \tan^{-1}(G_y / G_x)        (3.4)

The gradient of an image can be calculated by obtaining the partial derivatives
\partial f/\partial x and \partial f/\partial y at every pixel location. For example, consider the 3 x 3 area shown in
Figure 2 below, which represents the gray levels in a neighborhood of an image.

z1  z2  z3
z4  z5  z6
z7  z8  z9
Figure 2. A 3 x 3 area that represent the gray levels in a neighborhood of an
image.

The derivatives can be approximated using masks of size 3 x 3 through the following
equations:

G_x = (z_7 + z_8 + z_9) - (z_1 + z_2 + z_3)        (3.5)

G_y = (z_3 + z_6 + z_9) - (z_1 + z_4 + z_7)        (3.6)

Referring to the equations, the difference between the third and first rows of the 3
x 3 image region approximates the derivative in the x-direction (G_x), and the
difference between the third and first columns approximates the derivative in the
y-direction (G_y). These two equations can be implemented using the masks shown
below. These masks are called Prewitt operators.

G_y:
-1   0   1
-1   0   1
-1   0   1

G_x:
-1  -1  -1
 0   0   0
 1   1   1
Figure 3. The Prewitt operators
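As a hedged illustration (not part of the official lab steps; the test image and variable names are illustrative), the Prewitt masks can be applied in Matlab and combined into a gradient magnitude image:

>> I = double(imread('cameraman.tif'));
>> Gx_mask = [-1 -1 -1; 0 0 0; 1 1 1];     % row difference, approximates df/dx
>> Gy_mask = [-1 0 1; -1 0 1; -1 0 1];     % column difference, approximates df/dy
>> Gx = imfilter(I, Gx_mask, 'replicate');
>> Gy = imfilter(I, Gy_mask, 'replicate');
>> gradMag = sqrt(Gx.^2 + Gy.^2);          % Equation (3.3)
>> figure, imshow(gradMag, []);            % bright pixels mark the edges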


(ii) Second-order derivatives filters - The Laplacian

Edge detection can also be performed using second-order derivatives. The
Laplacian is an example of a second-order derivative operator. The Laplacian
operator highlights gray-level discontinuities in an image and deemphasizes
regions with slowly varying gray levels.

The digital implementation of the two-dimensional Laplacian is as follows:

\nabla^2 f = [f(x + 1, y) + f(x - 1, y) + f(x, y + 1) + f(x, y - 1)] - 4 f(x, y)

A mask implementation of the Laplacian is shown in Figure 4 below (this variant
also includes the diagonal neighbors). The method of implementation is the same as for
the linear smoothing filters; the main difference is the values of the filter coefficients.

-1  -1  -1
-1   8  -1
-1  -1  -1

Figure 4. An example of a Laplacian operator.

Second-order derivatives have a stronger response to fine detail, such as thin lines
and isolated points, whereas first-order derivatives generally have a stronger
response to a gray-level step.
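A hedged Matlab sketch of Laplacian filtering (not part of the official lab steps; the test image is illustrative and the mask matches Figure 4):

>> I = double(imread('cameraman.tif'));
>> lap = [-1 -1 -1; -1 8 -1; -1 -1 -1];    % Laplacian mask of Figure 4
>> I_lap = imfilter(I, lap, 'replicate');
>> figure, subplot(1,2,1); imshow(I, []); title('original');
>> subplot(1,2,2); imshow(I_lap, []); title('Laplacian response');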

Low Pass filters (Smoothing Spatial filters)

Low-pass filters (smoothing filters) are used to reduce the noise in digital images.
The output of a smoothing, linear spatial filter is simply the average of the pixels
contained in the neighborhood of the filter mask. These filters are therefore also
called averaging filters or low-pass filters.

The idea behind smoothing filters is to replace the value of every pixel in an
image by the average of the gray levels in the neighborhood defined by the filter
mask.
1/9  x
1  1  1
1  1  1
1  1  1
Figure 5. A 3 x 3 smoothing linear filter

Figure 5 above shows a 3 x 3 smoothing filter. Using this filter produces the
standard average of the pixels under the mask. For example, consider the 3 x 3 area
shown in Figure 2: applying the 3 x 3 smoothing filter at the location of z5 replaces
the value of z5 with the average of the values z1 to z9. This process is
carried out at every pixel location. The results of smoothing are affected by the
filter size. We will study this effect in this practical (Exercise 2).
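As a worked example, if the center pixel z5 of the neighborhood in Figure 2 were a noise spike of 200 while its neighbors were 10, 20, 30, 40, 60, 70, 80 and 90, the filtered value would be (10 + 20 + 30 + 40 + 200 + 60 + 70 + 80 + 90) / 9 = 600 / 9, or about 67, so the spike is strongly suppressed.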

4. LABORATORY

Bio-Computing Lab (N1.3 B2-25)

4.1 Equipment

Personal computers (Windows NT 4.0) and MATLAB software (Ver. 6.1).

4.2 Image Files

All the image files you will use in this lab can be found on the lab PCs in the network
directory \\scbefs1\BG3701-4.

5. EXERCISE 1
In this section of the practical, we will be using Matlab routines for reading
DICOM 3.0 files from disk. DICOM 3.0 is an international standard for the
encoding and communication of medical image data. Images are stored as matrices
in Matlab; hence, matrix operations can be performed on them.

5.1 Displaying DICOM 3.0 Images

1. First, create a new folder named Medical Image Processing in your network
directory. This folder will be your working directory, where you will save all the
programs written in this practical. Now set the Matlab working directory to
your new folder, as illustrated in Figure 6. Remember to save all your files in this
folder.



Figure 6. Defining working directory in Matlab.

2. Copy the images in the Network directory BG3701-4 into your working
directory.

3. Use the following command to obtain the header information from the file

>> info = dicominfo('MR_Image');

Note: The file name is case-sensitive.

See if you can interpret some of the information in this structure.
To view the values in the variable info, you can use the following command

>> info

Or double click on the variable info in the workspace.
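For example, individual header fields can be read with dot notation (a hedged sketch; the exact fields available depend on the DICOM file):

>> info.Width        % image width in pixels
>> info.Height       % image height in pixels
>> info.BitDepth     % bits stored per pixel
>> info.Modality     % e.g. 'MR' or 'CT'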

4. Use the following command to read the image.

>> I = dicomread('MR_Image');

5. In order to display the image, use the following command:

>> imshow (I, []);

Because the image data in this DICOM file is signed 16-bit data, you must use the
autoscaling syntax [ ] in Matlab to view the image; the empty matrix tells imshow to
scale the display range to the minimum and maximum values in the data.

You should see the image as shown below.


Figure 7. A figure window showing the MR image of the knee.

5.2 Displaying multiple images

Because MRI and CT are tomographic imaging techniques, the image data
generally comprises a series of images covering a volume of interest, e.g. the
brain, the knee, etc. Hence, the images are often displayed as a sequence of images
side by side. This is the standard way of creating a hard copy of multi-image data.
This can be performed using the montage function in Matlab.

1. Use the following command to load a MR image volume into the workspace

>> load mri;
>> figure, montage (D, map);

The second command (montage) above creates what is known as an image
montage.
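For example (a hedged sketch; the slice index 15 is arbitrary), an individual slice of the loaded volume D can be displayed with its colormap:

>> figure, imshow(D(:, :, 1, 15), map);   % display a single slice from the MR volume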


6. EXERCISE 2

In this exercise, we will study the effects of different kinds of high-pass and low-pass
filters on medical images. We will analyze the positive and negative effects of using
these filters on medical images.


6.1 High Pass filter (Edge Detection)

In this section of the practical, you will filter both a digital photo and a medical
image using the following filter masks.

(a)  1/3  x
     -1   0   1
     -1   0   1
     -1   0   1

(b)  1/3  x
     -1  -1  -1
      0   0   0
      1   1   1

(c)
     -1  -1  -1
     -1   8  -1
     -1  -1  -1

(i) Digital Photo
First, we will filter a digital photo in TIFF format.

1. Use the following command to load and view the image.

>> I_photo = imread('cameraman.tif');
>> figure, imshow(I_photo);

2. Use the following command to define the filter (a) in Matlab
>> A1 = 1/3 * [-1 0 1
-1 0 1
-1 0 1];

Note: Hit Enter after keying in each row. After the last row and the closing bracket,
Matlab will store the filter in the variable A1.

3. Use the following command to filter the image I_photo:

>> I_photo_filtered = imfilter(I_photo, A1);

Matlab provides the imfilter function to filter images.

4. Use the following command to display the images (before and after filtering)

>> figure, subplot(1,2,1); imshow(I_photo);
>> subplot(1,2,2); imshow(I_photo_filtered);

What do you think is being highlighted in the image by filter (a)?

Now, using similar commands to those in steps 2 - 4, filter the same digital photo using
filters (b) and (c); a possible way to define these two filters is sketched below.
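The following definitions are a hedged sketch (the variable names B1 and C1 are illustrative, not part of the lab handout):

>> B1 = 1/3 * [-1 -1 -1
                0  0  0
                1  1  1];
>> C1 = [-1 -1 -1
         -1  8 -1
         -1 -1 -1];
>> I_photo_filtered_b = imfilter(I_photo, B1);
>> I_photo_filtered_c = imfilter(I_photo, C1);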

Question 1.
What do you think is being highlighted in the image by these filters?


(ii) Medical Image
1. Use the following command to load the MR image into the Matlab workspace.

>> I = dicomread('MR_Image');

2. Following the same steps described in the section above for the digital photo, filter
the medical image using the three filters (a) - (c).

Question 2.
What did you observe? Comment on the results.

6.2 Low Pass Filters (Noise Removal)

Images contaminated by speckle noise can be processed using a smoothing spatial
filter (also called a low-pass filter). Filter the image with the following averaging
filters and study how the size of the filter affects the image.

1. Load image using the following command

>> I_noise = dicomread('MR_noise');

The MR image has been contaminated by speckle noise.
(d)  1/9  x
     1  1  1
     1  1  1
     1  1  1

(e)  1/25  x
     1  1  1  1  1
     1  1  1  1  1
     1  1  1  1  1
     1  1  1  1  1
     1  1  1  1  1

(f)  1/49  x
     1  1  1  1  1  1  1
     1  1  1  1  1  1  1
     1  1  1  1  1  1  1
     1  1  1  1  1  1  1
     1  1  1  1  1  1  1
     1  1  1  1  1  1  1
     1  1  1  1  1  1  1


2. Use the following command to create filter (d)

>> A2 = 1/9 * ones(3, 3);

The ones(m, n) function returns an m-by-n matrix of ones, where m is the number of
rows and n the number of columns.

3. Using the imfilter function, filter the medical image using averaging filter (d).

4. Repeat steps 2 and 3 using averaging filters (e) and (f); a possible set of definitions
is sketched below.
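The following commands are a hedged sketch (the variable names A3, A4 and the output names are illustrative):

>> A3 = 1/25 * ones(5, 5);                    % filter (e)
>> A4 = 1/49 * ones(7, 7);                    % filter (f)
>> I_filt_e = imfilter(I_noise, A3);
>> I_filt_f = imfilter(I_noise, A4);
>> figure, subplot(1,3,1); imshow(I_noise, []); title('noisy');
>> subplot(1,3,2); imshow(I_filt_e, []); title('5 x 5 average');
>> subplot(1,3,3); imshow(I_filt_f, []); title('7 x 7 average');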

Question 3.
Comment on your results. How does the size of the filter affect the image, e.g. the
edges?

6.3 Windowing and leveling (Optional)

Because medical image data typically contains 16-bit data, it is common practice to use
windowing and leveling to change the contrast and brightness of the images in
order to highlight the anatomical structure of interest.

>> imshow(I, [])
>> imcontrast(gca)

The Matlab imcontrast function allows you to interactively change the contrast
and brightness of the image (windowing/leveling).
- Clicking and dragging the mouse within the image interactively changes
the image's window values.
- Dragging the mouse horizontally from left to right changes the window
width (i.e., contrast).
- Dragging the mouse vertically up and down changes the window center
(i.e., brightness).
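For reference, the following is a hedged sketch of what windowing/leveling does numerically (the window center C and width W values are illustrative, not taken from the lab images):

>> C = 1024;  W = 512;                        % assumed window center and width
>> lo = C - W/2;  hi = C + W/2;
>> I_win = (double(I) - lo) / (hi - lo);      % map the window [lo, hi] onto [0, 1]
>> I_win = min(max(I_win, 0), 1);             % clip values outside the window
>> figure, imshow(I_win);                     % image displayed with this window/level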

7. EXERCISE 3 (Using the images acquired at PWG)

The images that you acquired at PWG are available at
\\BIOE-B402S1035\users\latifah\Documents\BG3701\AY13-14. There are two folders in the
shared drive: one for CT and one for MRI. Under each folder, each set of images is
stored in a subfolder named after one of the students in the subgroup.

The aim of this exercise is to view, process and analyze the CT and MRI images
of the bone sample that you have imaged using microCT and MRI.

Throughout this exercise, we will be using the software ImageJ
(http://rsbweb.nih.gov/ij/) to view the images. ImageJ is a freely available image
processing tool developed by the National Institutes of Health. A brief
introduction to the software is given below; more details can be found at
http://rsbweb.nih.gov/ij/docs/pdfs/ImageJ.pdf


a) To start the program, double-click ij.exe in the ImageJ folder. The window shown
in Figure 7.1 should appear on your screen. This is the main user interface.


Figure 7.1. The main user interface of ImageJ.

b) Open the images

i. To import the images that you would like to view, go to File -> Import -> Image
Sequence. Select the first image in the folder and click Open. See Figure
7.2a.
ii. A dialogue box will then pop up; click OK. This dialogue box shows the
number of images that will be read into ImageJ. See Figure 7.2b.


Figure 7.2. (a) Importing an image sequence; (b) the dialogue box showing the number
of images to be read.


c) After step b, you should see images similar to Figure 7.3. To scroll through the
images, use the scroll bar at the bottom of the viewing window.


Figure 7.3


d) Changing the contrast and brightness of the images

It is common to change the contrast and brightness of the images. In ImageJ, the
tool to adjust contrast and brightness is available at
Image -> Adjust -> Brightness/Contrast. You may also use Window/Level. A separate
dialogue box will appear. See Figure 7.4.


Figure 7.4

7.1 microCT
Three different views of the sample were acquired. These views are the transverse
plane, coronal plane and sagittal plane. Figure 8 below illustrates the orientation
of the three planes.


Figure 8. The transverse (axial), sagittal, and coronal planes
(http://en.wikipedia.org/wiki/Sagittal_plane).


Questions 4
i) In your logsheet, copy and paste one representative image from each plane
and label them clearly. Note down the slice thickness and image resolution of
the images, i.e. the voxel size. You can obtain this information from the image
header. To view the header information in ImageJ, go to Image -> Show
Info.


ii) Choose one of the best CT images from the set acquired in the sagittal
plane and label the bone, muscle and screws by drawing arrows pointing at the
structures. Discuss the quality of the CT images, e.g. is the contrast good?
Are there artifacts?

iii) Choose the CT image from the sagittal-plane set that best displays the
screws, and then measure the distance between the two screws using
ImageJ. Note down this measurement. To perform a measurement using
ImageJ, refer to the figure below. To retrieve the measurement, go to
Analyze -> Measure. A results box will appear; the Length column gives
the length of the line you have drawn. Discuss the potential errors that
could occur during the measurement.




iv) Image processing: using the same image that you chose in (ii), apply the
High Pass filter (Edge Detection) using mask (a) from Section 6.1. Comment
on your results.

(a)  1/3  x
     -1   0   1
     -1   0   1
     -1   0   1


v) Image processing: using the same image that you chose in (ii), apply the
Low Pass filter (Smoothing) using mask (d) from Section 6.2. Comment on
your results.

(d)  1/9  x
     1  1  1
     1  1  1
     1  1  1



7.2 MRI
Three different views of the sample were acquired. These views are the transverse
plane, coronal plane and sagittal plane.


Questions 5

i) In your logsheet, copy and paste one MR image from each plane and label
them clearly. By reading the DICOM header of the images (refer to
Section 5.1, Displaying DICOM 3.0 Images), note down the TE, TR and
flip angle for the MR acquisition in the sagittal plane.

ii) Choose one of the best images from the set acquired in the sagittal plane
and label the bone, muscle and screws by drawing arrows pointing at the
structures. Discuss the quality of the MR images, e.g. is the contrast good?

iii) Image processing: using the same MR image that you chose in (ii), apply
the High Pass filter (Edge Detection) using mask (a) from Section 6.1.
Comment on your results.

(a)  1/3  x
     -1   0   1
     -1   0   1
     -1   0   1


iv) Image processing: using the same image that you chose in (ii), apply the
Low Pass filter (Smoothing) using mask (d) from Section 6.2. Comment on
your results.

(d)  1/9  x
     1  1  1
     1  1  1
     1  1  1



8. REFERENCES

1. Gonzalez, R. C. and Woods, R. E., Digital Image Processing, Addison-Wesley, 1992.

2. Castleman, K. R., Digital Image Processing, Prentice-Hall, 1996.

3. Getting Started with MATLAB, The MathWorks, Inc., Natick, MA, 1984-1997.

4. Using MATLAB, The MathWorks, Inc., Natick, MA, 1984-1998.

5. Image Processing Toolbox User's Guide, The MathWorks, Inc., Natick, MA, 1993-1997.

9. APPENDIX

Guidelines and Hints for Image Processing in MATLAB.

Medical Image Input and Manipulation

dicomread is used to load medical image data in the DICOM standard. The basic syntax is
I = dicomread(filename). The image data from the file will be stored in the matrix I.

dicominfo is used to load the header information of medical image data in the DICOM
standard. The basic syntax is info = dicominfo(filename). The header information from
the file will be stored in the structure info.

imshow displays the data of an image file in a MATLAB figure window.


Figure 9. Details of the anatomy shown in the image: the knee, acquired in the
sagittal plane.
