
Chapter One

INTRODUCTION
1-1 Introduction
At the beginning of the last century, doctors began imaging female
breasts suspected of lesions using X-rays. In the late 1960s, a
large-scale clinical trial was conducted by the Health Insurance Plan of
New York, which concluded that the early detection of breast cancer using
mammograms could reduce the mortality rate of the disease [1]. Breast
cancer is the second most common type of cancer and the fifth most
common cause of cancer death according to Nishikawa [2]. Breast cancer
is detected through imaging exams such as mammography,
ultrasonography, and magnetic resonance imaging, where mammography
is the most common exam [3]. In Iraq, breast cancer is the most common
type of female malignancy. According to the Iraqi Cancer Registry, breast
cancer accounts for about one third of registered female cancers, making
the breast the leading cancer site among the Iraqi population.
This was the basis of the Iraqi national program for early detection of
breast cancer, which was initiated in 2001. Since then, specialized centers
and clinics for early detection of breast tumors have been established in the
major hospitals of the Iraqi provinces [4]. Mammography aims to detect
characteristic breast cancer lesions [3]. Screen-film mammography
requires high contrast in order to visualize and categorize tissues with
nominal density differences; thus, better image quality can be achieved by
using low kilovoltage peak (kVp) settings. The cost of low kVp,
however, is a higher radiation dose to the breast. Therefore, image quality
and breast radiation dose should be optimized by using the semi-automatic
mode of operation of the mammographic unit [5]. Computer-Aided Diagnosis
(CAD) is one of the tools used to investigate the ALARA (as low as
reasonably achievable) principle and obtain better image quality. It has a
direct influence on the
analysis and treatment of early breast cancer [2]. Early detection and
diagnosis of breast cancer increase the treatment options and make a cure
more likely. The mammogram is an effective tool for early detection, but
10%-30% of women who have the disease and undergo mammography have
negative mammograms. Two thirds of these false-negative cases were
evident retrospectively. These mistakes in visual interpretation are due
to poor image quality, eye fatigue of the radiologist, the subtle nature of
the findings, or a lack of experienced radiologists, especially in
developing regions [6]. Nowadays, computer-aided diagnosis provides a
"second opinion" for radiologists, producing a more accurate and faster
diagnosis for breast cancer patients in an effort to reduce the mortality
rate [1].

1-2 Objectives of the Thesis


1- Design an automated image analysis system for mammography images, and
build an automated computer-based program using MATLAB for breast cancer
detection, to help radiographers in the diagnosis of breast cancer and to
investigate the ALARA principle.
2- Investigate the performance of the CAD system in breast cancer diagnosis.
3- Test the system with more than one neural network.
4- Optimize the enhancement technique for breast images.
5- Test the sensitivity and specificity of the proposed system.
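Objective 5 uses the standard screening metrics. As a brief illustration (the counts below are made up for the example, not results of this study), sensitivity and specificity are computed from confusion-matrix counts as follows:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Screening metrics from confusion-matrix counts.

    Sensitivity = TP / (TP + FN): fraction of true cancers flagged.
    Specificity = TN / (TN + FP): fraction of healthy cases cleared.
    """
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts only, not results from this thesis:
sens, spec = sensitivity_specificity(tp=30, fn=4, tn=30, fp=4)
print(round(sens, 3), round(spec, 3))
```

A screening-oriented system typically favors high sensitivity (few missed cancers), accepting some loss of specificity.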

1-3 Literature review


The idea of using a computer to help in diagnosis started in the 1960s.
Meyers et al. (1964) proposed a system for automatically distinguishing
normal from abnormal chest radiographs [7]. In 1967, Winsberg et al.
developed a system for automated analysis of mammograms based on bilateral
comparison, something which they recognized might be especially useful
in screening mammography with routine viewing of a large number of
mostly normal examinations [8].
A computer system for breast cancer detection, designed to classify
breast lesions on X-ray images, was developed by Ackerman and Gose
(1972) [7].
In 1975, Tasto et al. described an algorithm for the detection of
microcalcifications on mammograms, which was based on identification
of gray values in a mammographic image [8].
By 1980, improvements in computer vision techniques, mammographic
quality, and digitization methods started to make clinical CAD feasible.
Before this, the prevailing concept was that the computer would replace
the radiologist by giving the diagnosis, which was called automated
computer diagnosis. Due to this notion, there was some early criticism of
implementing computational software to aid diagnosis. Around this time,
the computer-aided detection concept arose [3].
Several years later, in 1990, a study conducted by Chan et al. analyzed
mammograms with and without computer aid. This study contributed
considerably to the growth of this field [7].
Several techniques have been proposed to automate the diagnosis of breast
cancer. In recent years, CAD systems have seen significant development
with respect to the automated detection and classification of breast
abnormalities in digital mammograms. The CAD system is used for the
purpose of pattern classification.
Nishikawa et al. (1994) used an initial global threshold level, followed by a
locally adaptive threshold step [9].
Giger and MacMahon (1996) suggested that a computer-aided system that can
estimate the malignancy probability of a mammographic lesion can assist
radiologists in deciding patient management while improving diagnostic
accuracy [1].
A system utilizing a new model-based vision (MBV) algorithm was developed
by Polakowski et al. (1997) to find regions of interest (ROIs)
corresponding to masses in digitized mammograms and to classify the
masses as malignant or benign. A multilayer perceptron neural network
architecture was used to classify the masses [1].
In 1998, Fabio Ancona et al. proposed a modified PNN training
algorithm. The modification introduces an elimination criterion to avoid
the storage of unnecessary patterns. The distortion in the density
estimation introduced by this criterion is compensated for by a
cross-validation procedure that adapts the network parameters [10].
Verma and Zakos (2001) presented a system based on fuzzy-neural and
feature extraction techniques for detecting and diagnosing
microcalcification patterns in digital mammograms. A fuzzy technique, in
conjunction with three features (entropy, standard deviation, and number
of pixels), was used to detect a microcalcification pattern. A neural
network was then used to classify it as benign or malignant. The proposed
system achieved a high classification accuracy of 88.9% on the 58 samples
used [1].
Monika Shinde (2003) developed an automated mass classification
system for breast cancer screening and diagnosis in digital mammogram
applications. In this work, the task of automatically separating mass
tissue from normal breast tissue, given a region of interest in a
digitized mammogram, was investigated [11].
In 2004, Mini M.G. proposed classification of mammograms and DWT-based
detection of microcalcifications [12].
H.S. Sheshadri et al. (2005) proposed an approach for describing a region
by quantifying its texture content. In their paper, the use of functions
for computing texture based on statistical measures is described, and the
MPM (maximizer of the posterior marginals) algorithm is employed. The
segmentation based on texture features classifies the breast tissue into
various categories [13].
H.S. Sheshadri et al. (2006) proposed a method that employs simple
thresholding of the region of interest and the use of filters for clear
identification of microcalcifications. The method suggested for the
detection of microcalcifications through mammogram image segmentation
and analysis was tested on several images taken from the mini-MIAS
(Mammographic Image Analysis Society, UK) database. The algorithm was
implemented in MATLAB and hence can work effectively on a simple personal
computer with digital mammograms as stored data for analysis [13].
In 2006, Esugasini Subramaniam et al. discussed the systems that have
been proposed and developed for breast cancer diagnosis. These studies
were divided into two paradigms: one which defines breast cancer as a
local and regional disease, and another which defines it as a systemic
disease [1].
In 2007, Inci Zaim Gökbay proposed a system to detect malignant masses
on mammograms and investigated the behavior of the median filter at
different scales. After the median filter was applied, suspicious regions
were segmented by means of an adaptive threshold [14].
In 2008, G. N. Srinivasan and Shobha G. presented an overview of the
methodologies and algorithms for statistical texture analysis of 2D
images [15].
Chan et al. (2008) found a statistically significant improvement in
the performance of radiologists when they used computer aid [7].
Priyanka Vaidya (2009) presented an artificial neural network system to
effectively classify breast cancer tumors as either malignant or benign.
This classification system makes use of both clinical and genetic
data [16].
J. Subhash Chandra Bose et al. (2010) presented an intelligent system
designed to diagnose breast cancer through mammograms, using image
processing techniques along with intelligent optimization tools such as
genetic algorithms (GA) and particle swarm optimization (PSO). The
suspicious region is extracted, or segmented, using two different
approaches: an asymmetry approach, and a Markov Random Field (MRF)
hybrid with the PSO algorithm [13].
Célia Freitas da Cruz (2011) proposed automatic enhancement and
segmentation of microcalcifications in mammographic images. The
dissertation includes the implementation and application of image
enhancement techniques such as contrast-limited histogram equalization,
contrast stretching, adaptive neighborhood contrast enhancement, unsharp
masking, adaptive unsharp masking, and homomorphic filtering, with the
evaluation of several different parameters [3].
Tripty Singh et al. (2011) presented a system based on fuzzy C-means
clustering and feature extraction techniques, using texture-based
segmentation and a genetic algorithm, for detecting and diagnosing
microcalcification patterns in digital mammograms [17].
In 2012, Dhirgaam A. Kadhim presented a fully automatic algorithm for the
detection of abnormal masses by anatomical segmentation of the breast
region of interest (ROI) in medio-lateral oblique (MLO) mammogram views,
using the proposed Marker-Controlled Watershed Segmentation Algorithm
(MCWSA) [4].
In 2012, Belal Kamal Elfarra proposed a mammogram computer-aided
diagnosis system that introduces a new method for feature extraction and
selection in order to build a CADx model to discriminate between
malignant, benign, and healthy parenchyma [6].
In 2012, Rita Filipa dos Santos Teixeira used computer technologies to
detect abnormalities in mammograms [18].
K. Vaidehi and T.S. Subashini (2013) proposed the automatic
identification and elimination of the pectoral muscle in digital
mammograms. The proposed work is done in three steps. In the first step,
the mammogram is oriented to the left to minimize computations. In the
second step, the top-left quadrant of the mammogram, which contains the
pectoral muscle, is extracted. Finally, the pectoral muscle contour is
computed using the proposed algorithm [19].
Nasseer M. Basheer and Mustafa H. Mohammed (2013) described a
computer-aided diagnosis (CADx) system for classifying abnormal masses in
digital mammograms using a Support Vector Machine (SVM) [20].
Abdelali Elmoufidi et al. (2014) presented a method for the detection of
regions of interest (ROIs) in mammograms using a dynamic K-means
clustering algorithm [2].
Shefali Gupta and Yadwinder Kaur (2014) presented a review of different
local and global contrast enhancement techniques for digital images [21].
Aziz Makandar and Bhagirathi Halalli (2015) presented breast cancer image
enhancement using a median filter and CLAHE [22].
Amel H. Abbas et al. (2015) proposed breast cancer image segmentation
using morphological operations [23].

1-4 Thesis Outline


This thesis has been organized as follows:
Chapter One: Provides the introduction, the objectives of the thesis, and
the literature review.
Chapter Two: This chapter is separated into the following sections:
Medical background: This section provides an overview of cancers, with
highlights on breast cancer; it defines the factors that increase a
woman's risk of breast cancer, the types of medical diagnosis systems for
the breast, and the areas that may be susceptible to cancer, by
clarifying the medical anatomy of the breast.
CAD and CADx systems: This section defines Computer Aided Diagnosis and
presents the theoretical background for CAD of breast cancer and its
benefits.
Chapter Three: Describes the main components of the proposed system for
computer-aided detection of breast cancer. This part focuses on the
stages of the proposed breast cancer CAD system, which include the
proposed system, the MIAS database, pectoral muscle removal for the left
and right sides, image enhancement, enhancement evaluation, image
segmentation using morphological operations, region of interest
extraction, feature extraction using textural feature extraction
techniques (co-occurrence matrices), feature selection, and
classification using PNN, KNN, and DTC.
Chapter Four: Presents the results from the proposed breast cancer CAD
system and discusses the classification analysis.
Chapter Five: This chapter provides the conclusions drawn from the study,
and what more can be done in the future.

Chapter Two

THEORETICAL BACKGROUND
2-1 Breast Cancer
Cancer is a condition that affects people all over the world. Research in
this area began around 1900, when cancer was a disease without a cure.
Cancer refers to a group of diseases in which cells in a part of the human
body grow abnormally. The common factor among different types of cancer
is that they all start when cells grow out of control, producing a
tumor, or neoplasm [18, 24]. Breast cancer is the form of cancer that
either originates in the breast or is primarily present in the breast cells.
Breast cancer (malignant breast neoplasm) is cancer originating from
breast tissue, most commonly from the inner lining of milk ducts or the
lobules that supply the ducts with milk. Cancers originating from ducts are
known as ductal carcinomas; those originating from lobules are known as
lobular carcinomas [25]. The disease occurs mostly in women but a small
population of men is also affected by it. Breast cancer is reported as one
of the leading causes of female mortality [26]. Breast cancer is the most
common form of cancer amongst the female population as well as the most
common cause of cancer deaths after lung cancer [16]. Early detection and
treatment are essential to stop the cancer's evolution and to minimize
the damage. Breast cancer, like the majority of other cancers, can spread
to other tissues (metastasis), allowing the dissemination of the cancer.
When breast cancer is detected early, this phenomenon is avoided, which
provides a better prognosis for the patient.
Breast cancer risk increases with age; the majority of patients are over
50 years old. Other risk factors include a family history of breast
cancer, previous breast cancer, early menarche, late menopause, obesity,
nulliparity, chest radiation exposure, abnormal cells in fibrocystic
disease, and hormone replacement therapy. Due to these risks, some
countries have developed screening programs, in which women over 40, or
with a higher risk of developing breast cancer, undergo mammographic
exams at periodic intervals [3]. Researchers agree that early detection
of breast cancer can save many lives every year. However, as with other
cancer types, breast cancer is often detected only after symptoms appear
in the later stages, because in most cases there are no symptoms of the
disease during the early stages.
To detect the disease, a screening test such as a mammogram is
recommended to find the cancer before symptoms appear. Breast screening
relies on mammography to detect cancer in its early stages through small
changes in tissue composition. As with any examination that involves
x-rays, there is always a small stochastic risk of inducing cancer. It is
therefore important to evaluate the risk from the dose delivered to the
patient during the screening process; in other words, to keep the dose as
low as reasonably achievable (ALARA) [24].

2-2 Factors that increase a woman's risk of breast cancer:


Research has shown that women with the following risk factors have an
increased chance of developing breast cancer [25]:
 Gender: Although men do get breast cancer, it is about 100 times
more common in women; as such, less than 1% of patients with breast
cancer are male [27].
 Age: A woman's risk of developing this disease increases as she
gets older [25]. It is extremely rare below the age of 20 years, but
thereafter the incidence steadily rises, so that by the age of 90 years
about 20% of women are affected [27].

 Family history: A woman's chance of developing breast cancer
increases if her mother, sister, and/or daughter have been diagnosed
with the disease, especially if they were diagnosed before age 50.
Having a close male blood relative with breast cancer also increases
a woman's risk of developing the disease [25].
 Menstrual periods: Studies suggest that reproductive hormones
influence breast cancer risk by affecting cell proliferation and DNA
damage. Early menarche (younger than 12 years) and late menopause
(older than 55 years) increase a woman's risk of breast cancer [27].
 Excess alcohol: Alcohol usage certainly increases the risk of getting
breast cancer [16]. Studies indicate that the more alcohol a woman
drinks, the greater her risk of breast cancer [25]. Women who have
two to five drinks daily have about one and a half times the risk of
women who drink no alcohol [27].
 Body weight: Studies have found that the chance of getting breast
cancer after menopause is higher in women who are overweight or
obese [25].
 Pregnancy: Women who have not had children, or had their first
child after age 30, have a slightly higher risk of breast cancer. Being
pregnant more than once and at an early age reduces breast cancer
risk [27].
 Ionizing radiation: Exposure of the breasts to ionizing radiation,
such as radiation therapy for Hodgkin's disease, is the
best-established environmental factor associated with an increased risk
of breast cancer [27].

2-3 Types of medical diagnosis systems for the breast
Modern diagnostic methods, such as classic screen-film and digital
mammography, thermography, magnetic resonance imaging, and ultrasound
scanning, are the early detection methods for breast cancer in
women.

2-3-1 Mammography
Mammography is the first-choice method for examining the breast, using
x-rays of both breasts. It is a special type of x-ray imaging that uses
low-dose x-rays; high-contrast, high-resolution film; and an x-ray system
designed specifically for mammography to create detailed images of the
breast, as shown in Figure (2-1) [3, 28]. Routinely, mammographic images
are taken in the two most common projections: medio-lateral oblique
(MLO), a side view taken at an angle, and cranio-caudal (CC), a
top-to-bottom view. The advantage of the medio-lateral oblique projection
is that almost the whole breast is visible, often including lymph
nodes [29].
There are two main types of mammography: screen-film and full-field
digital mammography (FFDM) [28]. In film mammography, the image is
created directly on film, whereas digital mammography takes an
electronic image of the breast and stores it directly on a computer [30].
Both types can be used for screening and for diagnosis. A screening
mammogram is performed in order to detect breast cancer before
symptoms occur [28]. Although both types of mammography have their
advantages and disadvantages, digital mammography has some potential
advantages over film mammography. Compared to digital mammography,
screen-film mammography has some limitations, which include:
1) Limited range of X-ray exposure.
2) Image contrast cannot be altered after the image is obtained.
3) The film acts as the detector, display, and archival medium.
4) Film processing is slow and introduces artifacts.
All of these limitations have pushed researchers further to develop
advanced techniques for digital mammography. Digital mammography is
overcoming and will continue to overcome the limitations of film
mammography described before, and will have the following potential
advantages:
1) Wider dynamic range and lower noise;
2) Improved image contrast;
3) Enhanced image quality; and
4) Lower X-ray dose [30].
The breast area is extracted as an image and processed before printing on
film for better visualization of the size, location, and angle of the
mass. These optimized images are then checked for abnormalities by a
radiologist [31]. Mammography is, nowadays, the most effective and
accurate method for the early detection of breast cancer. It uses
low-energy radiation, as the attenuation difference between normal and
cancerous tissue increases rapidly at the lowest x-ray energies [24].
The radiation dose absorbed by the breast tissue has been suggested as a
risk factor in the mammographic procedure, since it can trigger
carcinogenesis. Furthermore, the dose distribution inside the breast is a
very important issue, since radiation sensitivity varies among the
different tissues of the breast. The average dose absorbed by glandular
tissue is the most appropriate quantity for risk assessments associated
with mammography [26].

Figure (2-1): Diagram of common mammography equipment [3].

2-3-2 Thermography:
Thermography is an efficient technique for the investigation of skin
temperature distribution, which provides information on both normal and
abnormal breasts [32]. With the advent of color-coded infrared (IR)
thermal imaging, body temperature has been used for the detection of
rheumatism, breast cancer, skin lesions, and impotence [33]. Infrared
radiation is emitted from all objects with a temperature above absolute
zero. The human body radiates heat energy from the surface of the skin,
and the emissivity of human skin is 0.98, which is close to that of a
perfect black body. Therefore, accurate temperature values can be derived
from measurements of the infrared radiation from the skin. Infrared
thermography is the recording of the temperature distribution of a body
using the infrared radiation emitted by the surface of that body at
wavelengths between 0.8 μm and 1.0 μm [34].
Over the past 50 years, infrared thermography has been applied for the
purpose of condition monitoring, preventive maintenance, thermal building
survey and medical diagnosis [33].
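The emissivity figure quoted above can be turned into a rough estimate of the infrared power emitted by skin using the Stefan-Boltzmann law, P = εσT⁴ per unit area. The temperatures below are illustrative values, not measurements from this thesis:

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
SKIN_EMISSIVITY = 0.98   # emissivity value quoted in the text

def radiated_power_per_m2(temp_celsius):
    """Power radiated per square metre of skin: P = e * sigma * T^4."""
    t_kelvin = temp_celsius + 273.15
    return SKIN_EMISSIVITY * SIGMA * t_kelvin ** 4

# Illustrative temperatures: a warmer, more vascularised region
# emits measurably more infrared power than surrounding skin.
normal = radiated_power_per_m2(33.0)
warm = radiated_power_per_m2(34.0)
print(f"{normal:.0f} W/m^2 vs {warm:.0f} W/m^2")
```

A region roughly 1 °C warmer emits several W/m² more, which is the kind of difference a calibrated IR camera can resolve as a temperature contrast.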

2-3-3 Magnetic resonance imaging (MRI)
Magnetic resonance imaging (MRI) is a non-ionizing imaging technique
which allows for acquisition of three dimensional images with high spatial
resolution and excellent soft tissue contrast. MRI of the breast was first
described in 1979, and with the advent of MRI contrast agents, has been
recognized as a useful clinical tool since the late 1980s. However, it is only
in the last few years, with increasing demand and magnet availability, that
the number of scans performed has significantly increased, so breast MRI
has become a standard diagnostic tool of the practicing breast radiologist.
MRI can create either 2D or 3D images which, depending on whether the
acquisition protocol is static or dynamic, trade off spatial against
temporal resolution [35]. Magnetic resonance imaging is able to
differentiate between cancerous and noncancerous tissue because of
differing water content and blood flow and can detect tumors missed by
other modalities [29]. MRI has a high sensitivity, approaching 98%, but a
moderately low specificity [36]. For screening, MRI is not a useful
method because of its low specificity and relatively high cost [29].

2-3-4 Breast Ultrasound (Ultrasonography)


Ultrasound is an adjunctive tool used in conjunction with mammography
and clinical breast exam in screening for breast cancer [34]. Ultrasound,
also known as sonography, is an imaging technique in which high
frequency sound waves that cannot be heard by humans are bounced off
tissues and internal organs. Their echoes produce a picture called a
sonogram. Ultrasound is the most important diagnostic imaging tool after
mammography [35]. If a physician notes a lump or other suspicious
finding on a clinical breast exam, he or she may evaluate further with an
ultrasound. This can tell if the abnormality is a hollow cyst or something
solid and if it has malignant characteristics like irregular shape and
calcifications. Ultrasound is also used as an imaging guide during a
needle biopsy of a suspicious breast mass [36]. The accuracy of
ultrasound depends on three factors: the quality of the tools, the
expertise of the physician in conducting the procedure and in
interpreting the image, and the use of a multidisciplinary approach for
breast cancer detection [34]. Ultrasound is
not used for routine breast cancer screening because it does not
consistently detect certain early signs of cancer such as microcalcifications
(tiny deposits of calcium in the breast that cannot be felt but can be seen
on a conventional mammogram) [35].

2-4 Breast Anatomy


In humans, the breasts are located on the left and right sides of the
upper ventral region of the trunk, and each extends from the second rib
above to the sixth rib below [3]. In general, a mature female's breast is
composed of glands and fat. The human breasts do not contain bone or
muscle. Each breast has a nipple where the ducts of the mammary glands
open onto the body, as shown in Figure (2-2). The most important
anatomical part of the breast is the mammary glands, which are common to
both sexes. However, in males these glands remain rudimentary and
functionless. The main function of the female breast is to produce milk
for the newborn. The mammary glands produce the milk from water and
nutrients taken from the blood stream into the ducts, which work as
contact channels between the mammary glands and the nipple. The size and
shape of the breast may change over time due to, for example, the
menstrual cycle, pregnancy, and age [24]. A child's breast consists
principally of ducts with dispersed alveoli, and is similar in females
and males. A teenager's breast consists mostly of fibrous and gland
tissue. In adulthood, fat substitutes for some of the fibrous and gland
tissue, and during menopause the breast becomes mainly adipose
tissue [3]. Younger women have more mammary gland tissue than older women.
Some women have more fatty tissue or more connective tissue, which makes
the breast firmer. The main subject of this research, from a medical
physics point of view, is the mammary glands, as they are the most
sensitive part of the breast and are vulnerable to cancer. These glands
are also highly responsive to hormonal changes. The anatomical structure
of the breast is mostly fat, unlike other parts of the body. Studies of
the breast require images with maximum visualization of the breast's
anatomy to detect a non-palpable cancer. When ionizing radiation is used
on such an organ containing sensitive glands, it must be optimized to
avoid increasing the chance of inducing cancer in the patient. X-ray
mammography is one of the most effective techniques used to detect,
diagnose, and show a variety of breast diseases. Mammography is designed
to detect breast pathology [8].

Figure (2-2): Anatomy of the breast [3].

2-5 Theoretical back ground for CAD of breast cancer:
Mammography is highly accurate, but like most medical tests, it is not
perfect. On average, mammography will detect about 80%-90% of the
breast cancers in women without symptoms [13]. Sometimes the radiologist
is not aware of the abnormality or misinterprets the significance of an
detected in an earlier screening without an unacceptable increase in the
recall rate (i.e., the rate at which mammographically screened women are
recalled for additional assessment) [29]. The term CAD is commonly used
to refer to both computer-aided detection and computer-aided
diagnosis [11].
Computer-aided detection (CAD) systems aid radiologists in detecting
mammographic lesions that may indicate the presence of breast cancer and
computer-aided diagnosis systems (CADx) assist the radiologist in the
classification of mammographic lesions as benign or malignant [37].
Research in Computer Aided Diagnosis (CAD) is a rapidly growing
dynamic field with modern computer techniques, new imaging modalities,
and new interpretation tasks. CAD helps the radiologist, who uses the
output from a computerized analysis of medical images as a second opinion
in detecting lesions, assessing the extent of disease, and improving the
accuracy and consistency of radiological diagnosis, thereby reducing the
rate of false-negative cases; the final decision is made by the
radiologist. The general approach of CAD is to find the location of a
lesion and also to determine an estimate of the probability of
disease [38]. More advanced CAD systems, which are currently under
development, incorporate information from multiple views. Using views
obtained at different moments in time can also help to determine whether
a mass is benign or malignant, because benign masses tend to change
slowly, as opposed to malignant masses, which may change
considerably [29]. Figure (2-3) outlines the role CAD plays in the
overall context of breast cancer screening [36].
Figure (2-3): The role of computer-aided interpretation in breast cancer
screening [36].

2-6 CAD Benefits:


The radiologist's analysis of the mammogram is fallible, a problem
compounded by the repetitive and fatiguing task of detecting
abnormalities, poor image quality, the subtlety of some abnormalities,
occlusion of anatomical structures in the mammogram, low disease
prevalence, and breast structure complexity [3]. CAD schemes will be most
helpful in these situations and in other circumstances: in large-volume
examinations with a low incidence of disease (e.g., screening
mammography, with up to 30% missed lesions), and in follow-up
examinations, where lesion extraction and quantification are needed
because manual measurements of lesion size may be inaccurate and too
time-consuming [8]. Consequently, radiologists fail to detect 10% to 30%
of cancers. Approximately two thirds of these missed lesions are evident
retrospectively [39]. Studies indicate that radiologists have a
false-negative diagnosis rate of 21%; CAD has the potential to reduce
this false-negative rate by 77%. However, there is some controversy about
the efficiency of CAD when compared with radiologists' performance.
Breast cancer CAD commonly has higher sensitivity and positive predictive
value than radiologists. However, its false positives need to be reduced
in order to increase the positive predictive value even further [3].

Chapter Three
PROPOSED BREAST CANCER CAD SYSTEM
3-1 Introduction
In this chapter, the stages of the proposed breast cancer CAD
system are introduced. The medical images from the MIAS database are
selected to cover benign, malignant, and normal left and right (MLO)
breast images. The proposed system includes medical image enhancement,
pectoral muscle removal, segmentation, feature extraction, feature
selection, classification, and neural network training and testing
stages, implemented using MATLAB programming.
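Among the stages just listed, feature extraction is performed later in this chapter with gray-level co-occurrence matrices (GLCMs). As a simplified illustration of the underlying idea (a numpy sketch, not the thesis's actual MATLAB code), a GLCM for one pixel offset and one derived texture feature can be computed as follows:

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for a single pixel offset (dx, dy).

    Counts how often gray level i co-occurs with gray level j at the
    given offset, then normalises the counts to joint probabilities,
    from which texture features such as contrast are derived.
    """
    h, w = image.shape
    m = np.zeros((levels, levels), dtype=float)
    for y in range(h - dy):
        for x in range(w - dx):
            m[image[y, x], image[y + dy, x + dx]] += 1
    return m / m.sum()

def contrast(p):
    """GLCM contrast feature: sum over i, j of p(i, j) * (i - j)^2."""
    idx = np.arange(p.shape[0])
    return float((p * (idx[:, None] - idx[None, :]) ** 2).sum())

# Toy 4-level patch: a blocky, smooth texture gives low horizontal contrast.
patch = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [2, 2, 3, 3],
                  [2, 2, 3, 3]])
print(round(contrast(glcm(patch, levels=4)), 3))  # 0.333
```

In the thesis itself this step is carried out in MATLAB, typically over several offsets and a larger set of GLCM features; the sketch above only shows the counting-and-normalising principle.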

3-2 Proposed system


A total of eight stages are involved in the proposed model, from data
input to output. The data used in our experiments are obtained from the
Mammographic Image Analysis Society (MIAS) database, which contains
MLO and cranio-caudal (CC) views of both left and right breasts.
It consists of 322 images belonging to the normal, benign, and malignant
classes. All images have a resolution of 1024×1024 pixels with 8-bit
gray-level depth. The database also includes the locations of any
abnormalities that may be present. The existing data consist of the
location of the abnormality (the center of a circle surrounding the
tumor), its radius, the breast position (left or right), the type of
breast tissue (fatty, fatty-glandular, or dense), and the tumor type, if
one exists (benign or malignant) [36].
A total of 102 images from the mini-MIAS database have been tested:
34 patients are diagnosed with malignant breast cancer, 34 with benign
breast cancer, and the rest are normal. The breast mammogram images are
converted from PGM to JPG format using MATLAB. Figure (3-1) shows the
block diagram of the overall system, including pectoral muscle removal,
enhancement, segmentation, feature extraction, feature selection and
classification. In the first stage,
pectoral muscle removal is performed. In the second stage, image
processing techniques and algorithms are applied to the digital
mammographic images for the purpose of preprocessing. In this
research, the preprocessing stage includes: power-law transformations,
histogram equalization, adaptive histogram equalization, average filter,
piecewise-linear transformation, smoothing filter, median filter and
exponential transformation. The segmentation stage uses morphological
operations (opening, erosion, reconstruction, closing, dilation, imfill).
Enhancement and segmentation are used for detecting the regions of interest,
which are essential steps for any CAD software. The texture feature stage
is performed using Gray Level Co-occurrence Matrices (GLCMs), which are
employed in this research to compute texture features from the ROIs
(abnormal regions). After gathering the results of the texture feature
analysis, a statistical hypothesis test, Student's t-test, is used as the
feature selection method. The last stage of this system is classification,
performed using the PNN, KNN and DTC approaches.

[Figure 3-1 outline: Input mammogram images → Pectoral muscle removal →
Test performance of image enhancement methods (Histogram Equalization,
Adaptive Histogram Equalization, Power-Law Transformations,
Piecewise-Linear Transformation (contrast stretching), Smoothing filter,
Exponential, Median filter, Averaging filter) → Image enhancement
(Adaptive Histogram Equalization) → Image segmentation by morphological
operations → ROI selection → GLCM → Feature extraction → Feature
selection → Classification (PNN, KNN, DTC) → Decision]

Figure 3-1: Proposed block diagram of the work

3-3 MIAS Database:
The MIAS database is generated by the Mammographic Image Analysis
Society, with mammograms taken from the UK National Breast
Screening Program. The database contains 322 mammograms from 161
women. The images are 1024×1024 pixels in size with 8-bit depth.
207 images are normal and 115 images are abnormal. The abnormal
images are further categorized into benign (non-cancerous) and malignant
(cancerous) images. Bilateral (left and right) MLO view mammograms are
taken for each woman [40]. The images are annotated according to their
breast density by expert radiologists, using three distinct classes: Fatty (F)
(106 images), Fatty-glandular (G) (104 images) and Dense-glandular (D)
(112 images). The abnormalities are also described with their kind,
location and even their coordinates in the image [41]. Each abnormality
has information on the lesion type, the assessment, the subtlety, the
pathology and at least one outline. In some cases there is more than one
outline for the same abnormality [36]. The types of abnormalities are
calcifications, circumscribed masses, spiculated masses, ill-defined
masses, architectural distortions and asymmetries [41]. Figures (3-2)
a, b and c show three example mammographic images (mdb033, mdb001,
mdb058) of normal, benign and malignant breasts, respectively, from the
MIAS database [87].

Figure (3-2): Mammographic images: a) normal, b) benign, c) malignant [87].

3-4 Pectoral muscle removal:


The term pectoral relates to the chest. The pectoral muscle is a large
fan-shaped muscle that covers much of the front upper chest; hence, during
the mammogram capturing process the pectoral muscle is also captured.
The pectoral muscle represents a predominantly dense region and can
severely affect the results of image processing, so for better detection
accuracy the pectoral region should be removed from the mammogram image
[42]. The pectoral muscle appears as a triangular opacity across the upper
posterior margin of the image, as shown in Figure (3-3) [19]. In the MLO
mammogram image, the pectoral muscle is a region of higher intensity than
the surrounding tissue. This is not the case in all images, which is one of
the main problems [41]. As it can bias and affect the results of any
mammogram processing method, it is often necessary to automatically
identify and segment the pectoral muscle prior to breast tissue image
analysis. The wide variability in the position of the muscle contour,
together with the similarity between muscle and breast tissues, makes this
a difficult task [19].

Figure (3-3): Digital Mammogram [19].

We made many tests before deciding on the actual procedure and
encountered several problems, mainly with images where the muscle is
brighter than the breast.

3-5 Image Enhancement


Image enhancement is the process of adjusting digital images so that the
results are more suitable for display or further analysis [43]. It is important
to do enhancement on mammogram images before applying auto features
extraction for mass objects. The parenchyma density hides the mass
objects and it will be so difficult to apply auto feature extraction
algorithms [44]. The database images also contain noise from the
digitization process, such as speckle noise. This type of noise should be
removed in order to enhance the quality of the image and make the
segmentation task easier [41]. The goal of all image enhancement is to
produce a processed image that is suitable for a given application. Some
techniques have been tested to remove noise and preserve the edges of the
image and avoid effects from micro-texture that could appear in some
regions. We tested several methods and chose the one giving a good
reconstructed image with low MSE and high PSNR, meaning the image has low
error and high fidelity [45]. Image enhancement is therefore required
before segmentation. A survey of conventional enhancement techniques is
given in the next sections.

3-5-1 Histogram Equalization (HE):


Histogram modeling techniques modify an image so that its histogram has
a desired shape. This is useful in stretching the low contrast levels of
mammograms with narrow histograms. A typical technique in histogram
modeling is histogram equalization [45]. Histogram equalization (HE) is a
widely used technique for contrast enhancement because it is simple to use
and better in performance on all types of images [46]. In histogram
equalization every pixel is replaced by the integral of the histogram of the
image in that pixel [3]. As addressed previously, HE can introduce a
significant change in brightness of an image, which hesitates the direct
application of HE scheme in consumer electronics [17]. Histogram
equalization can be applied on the digital mammograms but since this
method enhances the contrast globally so there are losses of details outside
the denser parts of the image [47]. The algorithm for histogram
equalization process is as follows: For a given image X = {X(i, j)},
composed of L discrete gray levels denoted as{ , ,... } ,where X
(i, j) represents an intensity of image at the spatial location (i, j) and X (i,
j) ϵ{ , ,... }. For image X, probability density functions P ( )
is defined as:

(3-1)

for k = 0,1,...,L -1, where represents number of times appears in


input image X and n is total number of samples in input image. P ( ) is

26
associated with histogram of input image which represents number of
pixels having specific intensity . A plot of vs. is known as
histogram of X [46, 48]. Histogram equalization by matlab code has been
tested and the result shown in the Figure (3-4):

Figure (3-4): Histogram Equalization (HE): a) original image, b) after HE.
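The mapping above can be sketched in a few lines. The following is an illustrative pure-Python version (the thesis itself uses MATLAB's histeq; the tiny 3×3 test image is hypothetical, not a MIAS mammogram):

```python
def histogram_equalize(img, levels=256):
    """Map each gray level through the cumulative histogram (cf. Eq. 3-1)."""
    flat = [p for row in img for p in row]
    n = len(flat)
    # Histogram: n_k = number of pixels with level k.
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # Cumulative distribution of P(X_k) = n_k / n.
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total / n)
    # Transform: s_k = round((L - 1) * CDF(k)).
    return [[round((levels - 1) * cdf[p]) for p in row] for row in img]

# Hypothetical 8-bit patch, not from the MIAS database.
img = [[50, 50, 52], [52, 54, 54], [200, 200, 200]]
eq = histogram_equalize(img)
```

The dark, clustered levels 50–54 are spread out while the brightest level maps to 255, which is the global contrast stretch the text describes.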

3-5-2 Adaptive Histogram Equalization (AHE):


Adaptive histogram equalization is a contrast enhancement method that
differs from ordinary histogram equalization [21]. In this method, based on
the local neighborhood, a different grayscale transform is computed at each
location in the image, and the pixel value at that location is mapped
accordingly [47]. It computes many histograms, each corresponding to a
different part of the image, and then uses them to redistribute the
lightness values of the image [49]. With the adaptive histogram
equalization method, information from all intensity ranges of the image can
be viewed simultaneously [21]. The ordinary histogram equalization process
uses only a single histogram for the entire image [49]. In the AHE method,
a region centered about each pixel in the input image is assigned. This
region is called the contextual region (its size is calculated before
assignment). The intensity values in this region are used to find the
histogram equalization mapping function, and this mapping function is
applied to the pixel being processed in the region [21]. After applying
adaptive histogram equalization (AHE) in MATLAB, the result is shown in
Figure (3-5):

Figure (3-5): Adaptive Histogram Equalization (AHE): a) original image, b) after AHE.
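As a rough illustration of the contextual-region idea, the sketch below equalizes each pixel against the histogram of its own neighborhood (brute-force pure Python; real implementations such as MATLAB's adapthisteq use tiling and interpolation for speed):

```python
def adaptive_hist_equalize(img, radius=1, levels=256):
    """For each pixel, equalize using the histogram of its contextual region."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Gather the (border-clipped) contextual region around (y, x).
            region = [img[j][i]
                      for j in range(max(0, y - radius), min(h, y + radius + 1))
                      for i in range(max(0, x - radius), min(w, x + radius + 1))]
            # Local CDF value = fraction of region pixels <= the centre pixel.
            rank = sum(1 for p in region if p <= img[y][x])
            out[y][x] = round((levels - 1) * rank / len(region))
    return out
```

Because the mapping is recomputed per pixel, dark and bright areas are each stretched with respect to their own surroundings rather than the global histogram.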

3-5-3 Piecewise-Linear Transformation Functions:


The principal advantage of piecewise linear functions over the types of
functions which have been discussed thus far is that the form of piecewise
functions can be arbitrarily complex. In fact, as we will see shortly, a
practical implementation of some important transformations can be
formulated only as piecewise functions. The principal disadvantage of
piecewise functions is that their specification requires considerably more
user input [50]. One of the simplest piecewise linear functions is the
contrast-stretching transformation [51]. By performing contrast stretching,
the dynamic range of the image is increased to fill the entire available
range [52]. Figure (3-6) shows a typical transformation used for contrast
stretching. The locations of the points (r1, s1) and (r2, s2) control the
shape of the function. If r1 = s1 and r2 = s2, the transformation is a
linear function that produces no change in gray levels. If r1 = r2, s1 = 0
and s2 = L-1, the transformation becomes a thresholding function that
creates a binary image [51].

s = 255 · (r − r_min) / (r_max − r_min)    (3-2)

where r is the original (input) pixel gray-level and s is the enhanced
(output) pixel gray-level; r_min and r_max are the minimum and maximum
gray-levels of the original image, respectively. The imaging system can
resolve gray-levels ranging from 0 (black) to 255 (white); hence the factor
255 in Eq. (3-2) represents the maximum possible gray-level value [52].

Figure (3-6) Form of transformation function [50].

Figure (3-7): Piecewise-linear transformation: a) original image, b) after contrast stretching.
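Eq. (3-2) is straightforward to sketch; a minimal pure-Python version (illustrative only, not the thesis MATLAB code) might look like:

```python
def contrast_stretch(img):
    """Linearly map [r_min, r_max] onto the full [0, 255] range (Eq. 3-2)."""
    flat = [p for row in img for p in row]
    r_min, r_max = min(flat), max(flat)
    return [[round(255 * (p - r_min) / (r_max - r_min)) for p in row]
            for row in img]
```

An input whose gray levels occupy only part of the range, say 100 to 200, is expanded so its darkest pixel becomes 0 and its brightest 255.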

3-5-4 Power-Law Transformations:


Power-law transformations have the basic form:

s = c · r^γ    (3-3)

where c and γ are positive constants. Sometimes Eq. (3-3) is written as
s = c · (r + ε)^γ to account for an offset (that is, a measurable output
when the input is zero). However, offsets are typically an issue of display
calibration and as a result they are normally ignored in Eq. (3-3). Plots
of s versus r for various values of γ are shown in Figure (3-8). As in the
case of the log transformation, power-law curves with fractional values of
γ map a narrow range of dark input values into a wider range of output
values, with the opposite being true for higher values of input levels.
Unlike the log function, however, we notice here a family of possible
transformation curves obtained simply by varying γ. As expected, we see in
Figure (3-8) that curves generated with values of γ > 1 have exactly the
opposite effect as those generated with values of γ < 1. Finally, Eq. (3-3)
reduces to the identity transformation when c = γ = 1 [50].

Figure (3-8): Plots of s = c · r^γ for various values of γ (c = 1 in all cases) [50].

Figure (3-9): Power-law transformation: a) original image, b) after power-law transformation.
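A minimal sketch of Eq. (3-3), applied to intensities normalized to [0, 1] (an assumption of this example; MATLAB's imadjust exposes gamma in a comparable way):

```python
def power_law(img, c=1.0, gamma=0.5, levels=256):
    """Eq. (3-3): s = c * r**gamma, on intensities normalized to [0, 1]."""
    return [[round((levels - 1) * c * (p / (levels - 1)) ** gamma)
             for p in row] for row in img]
```

With γ = 0.5 (a fractional value), dark inputs are lifted: the mid-dark level 64 maps to roughly the middle of the range, while 0 and 255 stay fixed.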

3-5-5 Filter
Image filtering is a mathematical processing for noise removal and
resolution recovery. The goal of filtering is to compensate for loss of
detail in an image while reducing noise [53]. In MATLAB, using the Image
Processing Toolbox, we can design and implement filters for image data
[53]. The function fspecial produces several kinds of predefined filters,
in the form of computational molecules [54]. Here, type specifies a
specific filter and the optional parameters are related to the selected
filter. Filter type options include: 'gaussian', 'disk', 'sobel',
'prewitt', 'laplacian', 'log', 'average' and 'unsharp'. The 'gaussian'
option produces a Gaussian low-pass filter [53]. A mammography image can be
filtered using linear and nonlinear filtering techniques, in both the
spatial and frequency domains [22].

3-5-5-1 Median Filter:


A median filter is a nonlinear filter that is efficient at removing
salt-and-pepper noise and Gaussian noise. It helps to keep the sharpness of
the image while removing the noise. The potency of the median filter
depends on the size of the window; for mammography, a 3×3 window provides
good results. In a median filter, the value of an output pixel is
determined by the median of the neighborhood pixels, as shown in Figure
(3-10). The median is robust to extreme values and so is better able to
remove such outliers without reducing the sharpness of the image [22]. For
the neighborhood values 115, 119, 120, 123, 124, 125, 126, 127, 150, the
median is 124.

Fig (3-10): Median value of a local pixel neighborhood in a 3×3 window mask [22].

Figure (3-11): Median filter: a) original image, b) filtered image (median).
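The median computation of Figure (3-10) can be reproduced directly; this pure-Python sketch leaves the image border unchanged for simplicity (MATLAB's medfilt2 pads the border instead):

```python
def median_filter3x3(img):
    """Replace each interior pixel with the median of its 3x3 neighborhood."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]  # border pixels kept as-is in this sketch
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neigh = sorted(img[j][i]
                           for j in (y - 1, y, y + 1)
                           for i in (x - 1, x, x + 1))
            out[y][x] = neigh[4]  # median of the 9 sorted values
    return out
```

Using the document's own neighborhood (with the outlier 150 at the centre), the centre pixel is replaced by the median 124, removing the spike without blurring.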

3-5-5-2 Smoothing filter:


Smoothing is often used to reduce noise within an image or to produce a
less pixelated image. Image smoothing is a key technique of image
enhancement that can remove noise from images. An excellent smoothing
algorithm both removes various kinds of noise and preserves details [55].

3-5-5-3 Fspecial filter:
For linear filtering, MATLAB provides the fspecial command to generate
some predefined common 2-D filters. fspecial creates Laplacian of
Gaussian (LoG) filters using the following equation:

LoG(x, y) = −(1/(π·σ⁴)) · [1 − (x² + y²)/(2σ²)] · e^(−(x² + y²)/(2σ²))    (3-4)

g(x, y) = (1/(m·n)) · Σ_s Σ_t f(s, t)    (3-5)

Equation (3-5) describes the mean (average) filter, which can be applied to
a SPECT slice with different convolution kernel sizes (3×3, 9×9, 25×25
average filters) [56].

3-5-5-4 Average filter:


One simple filter fspecial can produce is an averaging filter. This type of
filter computes the value of an output pixel by simply averaging the values
of its neighboring pixels. The default size of the averaging filter fspecial
creates is 3-by-3, but you can specify a different size. The value of each
element is 1/length(h(:)). For example, a 5-by-5 averaging filter would be:

0.0400 0.0400 0.0400 0.0400 0.0400


0.0400 0.0400 0.0400 0.0400 0.0400
0.0400 0.0400 0.0400 0.0400 0.0400
0.0400 0.0400 0.0400 0.0400 0.0400
0.0400 0.0400 0.0400 0.0400 0.0400

Applying this filter to a pixel is equivalent to adding up the values of
that pixel's 5-by-5 neighborhood and dividing by 25. This has the effect of
smoothing out local highlights and blurring edges in an image [54].
Figure (3-12) illustrates applying a 3-by-3 averaging filter to an
intensity image:

Figure (3-12): Average filter: a) original image, b) spatial-domain filter, c) frequency-domain filter.
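The averaging operation can be sketched as follows (pure Python; as a simplification this sketch leaves edge pixels unchanged, whereas MATLAB's imfilter pads the border):

```python
def average_filter(img, size=3):
    """Each output pixel is the mean of its size-by-size neighborhood,
    equivalent to convolving with fspecial('average', size)."""
    h, w, r = len(img), len(img[0]), size // 2
    out = [row[:] for row in img]  # borders kept as-is in this sketch
    for y in range(r, h - r):
        for x in range(r, w - r):
            s = sum(img[j][i]
                    for j in range(y - r, y + r + 1)
                    for i in range(x - r, x + r + 1))
            out[y][x] = s / size ** 2  # divide by the kernel area
    return out
```

For a 3-by-3 kernel, the centre pixel becomes the sum of its 9 neighbors divided by 9, exactly the 1/length(h(:)) weighting described above.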

3-6 Enhancement Evaluation:


There are different techniques to evaluate the enhancement of
mammographic images. Some works use distinct mathematical parameters to
perform the evaluation, such as contrast, contrast improvement index (CII),
background noise level (BNL), peak signal-to-noise ratio (PSNR) and
average signal-to-noise ratio (ASNR) [3].
To evaluate the enhancement we use the peak signal-to-noise ratio (PSNR)
and the Mean Squared Error (MSE).
Peak signal-to-noise ratio, often abbreviated PSNR, is the ratio between
the maximum possible power of a signal and the power of corrupting noise
that affects the fidelity of its representation [57]. The PSNR is most
commonly used as a measure of the quality of reconstruction of lossy
compression codecs (e.g., for image compression). The signal in this case
is the original data, and the noise is the error introduced by compression;
a higher PSNR would normally indicate that the reconstruction is of higher
quality [48].

PSNR = 10 · log10(MAX_I² / MSE)    (3-6)

The PSNR is most easily defined via the Mean Squared Error (MSE), which for
two monochrome m×n images I and K, where one of the images is considered a
noisy approximation of the other, is defined as [48, 57]:

MSE = (1/(m·n)) · Σ_i Σ_j [I(i, j) − K(i, j)]²    (3-7)

In general, a good reconstructed image is one with low MSE and high PSNR,
meaning that the image has low error and high fidelity [57].
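Eqs. (3-6) and (3-7) can be checked with a short sketch (pure Python; MAX_I = 255 is assumed for 8-bit images):

```python
import math

def mse(img_i, img_k):
    """Eq. (3-7): mean of squared pixel differences over an m-by-n image."""
    m, n = len(img_i), len(img_i[0])
    return sum((img_i[y][x] - img_k[y][x]) ** 2
               for y in range(m) for x in range(n)) / (m * n)

def psnr(img_i, img_k, max_i=255):
    """Eq. (3-6): 10*log10(MAX_I^2 / MSE); infinite for identical images."""
    e = mse(img_i, img_k)
    return float('inf') if e == 0 else 10 * math.log10(max_i ** 2 / e)
```

Identical images give MSE = 0 and an infinite PSNR; a single corrupted pixel produces a finite PSNR that shrinks as the error grows.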

3-7 Image Segmentation
In analyzing a mammogram image, it is important to distinguish the
suspicious region from its surroundings. The goal of segmentation is to
simplify and change the representation of an image into something that is
more meaningful and easier to analyze [58]. To extract features from the
abnormal regions, segmentation methods are used in the analysis of digital
mammograms to separate the different regions of interest in the image:
breast tissue, breast border, fatty tissue, pectoral muscle and background;
they are also needed for segmenting microcalcifications, cysts and tumors
[59]. Segmentation can be carried out using any of the standard techniques
such as local thresholding, morphological operations, region growing,
region clustering, pectoral muscle removal, etc. [58]. In these stages,
morphological operations have been performed.
Morphology is an image processing operation based on shapes [60].
Morphological operations are important in digital image processing, since
they can rigorously quantify many aspects of geometrical structure in a way
that agrees with human intuition and perception, emphasizing the geometric
structure of the image [61]. The value of each pixel in the output image is
based on the corresponding input pixel and its neighbors [54]. Mathematical
morphology is a very important tool for extracting image components that
are useful in the representation and description of region shape. The basic
mathematical morphological operators are dilation, erosion, opening and
closing [62].

3-7-1 Erosion:
Erosion is one of the two fundamental operations in morphological image
processing, from which all other morphological operations are derived [63].
The erosion process is similar to dilation, but we turn pixels to white,
not black [23]. For erosion, if every pixel in the input pixel's
neighborhood is on, the output pixel is on; otherwise, the output pixel is
off [54]. It was originally defined for binary images, later being extended
to grayscale images, and subsequently to complete lattices [63]. Erosion
can be performed to eliminate irrelevant details that are smaller than the
structuring element. As before, slide the structuring element across the
image and then follow these steps:
1. If the origin of the structuring element coincides with a white pixel in
the image, there is no change; move to the next pixel.
2. If the origin of the structuring element coincides with a black pixel in
the image, and at least one of the black pixels in the structuring element
falls over a white pixel in the image, then change the black pixel in the
image from black to white [23].
The erosion of an object A by the structural element B corresponds to the
mathematical expression [3]:

A ⊖ B = {z | (B)_z ∩ Aᶜ = ∅}    (3-8)

where ∅ is the empty set and Aᶜ is the background (complement) of A. The
erosion of A by B is the set of all structuring-element origin locations z
where the translated B has no overlap with the background of A [3].
Erosion has been tested with MATLAB code and the result is shown in Figure
(3-13), performing erosion with a disk-shaped structuring element of
size 20:

Figure (3-13): Erosion function: a) original image, b) eroded image.

Next, the erosion-by-reconstruction is computed by imreconstruct, as shown
in Eq. (3-9). In the output image, all the intensity fluctuations except
the intensity peak have been removed:

R = imreconstruct(I2, I)    (3-9)

3-7-2 Dilation:
Dilation is defined as the maximum value in the window. Dilation adds
pixels to the boundaries of objects in an image [62]. The difference from
erosion is in the operation performed. It is best described as a sequence
of steps:
1. If the origin of the structuring element coincides with a white pixel in
the image, there is no change; move to the next pixel.
2. If the origin of the structuring element coincides with a black pixel in
the image, make black all pixels of the image covered by the structuring
element [23].
The dilation of an object A by the structural element B corresponds
mathematically to:

A ⊕ B = {z | (B̂)_z ∩ A ≠ ∅}    (3-10)

Hence, the dilation of A by a structuring element B can be regarded as
flipping B around its origin (giving B̂) and then successively displacing
it so that its centre slides over A [64].
Dilation has been tested with MATLAB code and the result is shown in Figure
(3-14), performing dilation with a disk-shaped structuring element of
size 20:

Figure (3-14): Dilation function: a) original image, b) dilated image.

Next, the dilation-by-morphological-reconstruction is computed by
imreconstruct, as shown in Eq. (3-11):

Re = imreconstruct(I3, I)    (3-11)

3-7-3 Opening:
If erosion is followed by dilation, the operation is termed opening. If the
image is binary, this combined operation tends to remove small objects
without changing the shape and size of larger objects. Basically, the
initial erosion tends to reduce all objects, but some of the smaller
objects disappear altogether; the subsequent dilation then restores those
objects that were not eliminated by the erosion [65]. The erosion of an
object A by a structuring element B, followed by the dilation of the result
by the same structuring element, corresponds to an image opening. Image
opening removes regions of an object that are smaller than the structuring
element [3]:

A ∘ B = (A ⊖ B) ⊕ B    (3-12)

Opening has been tested with MATLAB code and the result is shown in Figure
(3-15).

Figure (3-15): Opening function: a) original image, b) opened image.

3-7-4 Closing:
If the order is reversed and dilation is performed first, followed by
erosion, the combined process is called closing. Closing connects objects
that are close to each other, tends to fill up small holes, and smooths an
object's outline by filling small gaps. As with the more fundamental
operations of dilation and erosion, the sizes of the objects removed by
opening or filled by closing depend on the size and shape of the selected
neighborhood [65]. The closing of A by B is simply the dilation of A by B,
followed by the erosion of the result by B [14], defined as:

A • B = (A ⊕ B) ⊖ B    (3-13)

Morphological closing smooths the object edges, joins narrow breaks, and
fills holes smaller than the structuring element [3]. Closing has been
tested with MATLAB code and the result is shown in Figure (3-16).

Figure (3-16): Closing function: a) original image, b) closed image.
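The four operators above can be illustrated on a small binary image. This pure-Python sketch assumes a flat, square structuring element indexed from its centre (MATLAB's imerode/imdilate additionally handle grayscale images and arbitrary origins):

```python
def erode(img, se):
    """Binary erosion: output 1 where the element fits entirely inside the
    foreground (cf. Eq. 3-8). Assumes a square structuring element se."""
    h, w, r = len(img), len(img[0]), len(se) // 2
    out = [[0] * w for _ in range(h)]
    for y in range(r, h - r):
        for x in range(r, w - r):
            out[y][x] = int(all(img[y + j - r][x + i - r]
                                for j in range(len(se))
                                for i in range(len(se)) if se[j][i]))
    return out

def dilate(img, se):
    """Binary dilation: output 1 where the element hits the foreground
    (cf. Eq. 3-10; se is symmetric here, so reflection is a no-op)."""
    h, w, r = len(img), len(img[0]), len(se) // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = int(any(
                0 <= y + j - r < h and 0 <= x + i - r < w
                and img[y + j - r][x + i - r]
                for j in range(len(se)) for i in range(len(se)) if se[j][i]))
    return out

def opening(img, se):   # erosion followed by dilation (Eq. 3-12)
    return dilate(erode(img, se), se)

def closing(img, se):   # dilation followed by erosion (Eq. 3-13)
    return erode(dilate(img, se), se)
```

Eroding a 3×3 block with a 3×3 element leaves only its centre pixel; dilating that pixel restores the block, so opening leaves a block this size unchanged while anything smaller than the element would vanish.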

3-7-5 Image Fill:


This operation begins at a designated pixel and changes connected
background pixels (0s) to foreground pixels (1s), stopping only when a
boundary is reached. For grayscale images, imfill brings the intensity
levels of dark areas that are surrounded by lighter areas up to the same
intensity level as the surrounding pixels. (In effect, imfill removes
regional minima that are not connected to the image border.) The initial
pixel can be supplied to the routine or obtained interactively [65]. There
are still holes present in the image; to fill these holes, the hole-filling
operation of image processing has been used, as shown in Figure (3-17).

Figure (3-17): Fill function: a) original image, b) after imfill.
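For binary images, hole filling can be sketched as a flood fill of the background from the image border: any background pixel the flood cannot reach is a hole and gets filled (an illustrative analogue of imfill's 'holes' option, not its actual algorithm):

```python
from collections import deque

def fill_holes(img):
    """Flood-fill the background from the border; 0-pixels never reached
    are enclosed holes and are set to 1."""
    h, w = len(img), len(img[0])
    reached = [[False] * w for _ in range(h)]
    # Seed the flood with every background pixel on the image border.
    q = deque((y, x) for y in range(h) for x in range(w)
              if (y in (0, h - 1) or x in (0, w - 1)) and img[y][x] == 0)
    for y, x in q:
        reached[y][x] = True
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            j, i = y + dy, x + dx
            if 0 <= j < h and 0 <= i < w and not reached[j][i] and img[j][i] == 0:
                reached[j][i] = True
                q.append((j, i))
    # Foreground stays 1; unreached background becomes 1 (a filled hole).
    return [[1 if (img[y][x] or not reached[y][x]) else 0
             for x in range(w)] for y in range(h)]
```

A ring of foreground pixels with an empty centre comes out as a solid disk, while the outer background (connected to the border) stays untouched.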

3-8 Region of interest extraction:


The region of interest is extracted to reduce the processing time.
Normally, masses appear as whiter regions in the mammogram. There is no
possibility of detecting the mass region in the darker region of the
mammogram; hence, the darker region can simply be ignored during processing
[42]. The ROI of the image is the brighter glandular breast [9]. Thus the
process of extracting the brighter region alone, neglecting the darker
region, is called region of interest extraction [42]. The program
determines the maximum and minimum pixels for the lesion by searching the
image row by row until it finds the highest and lowest white points; the
lesion is then cropped between them.
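The row-by-row bounding-box search can be sketched as follows (pure Python; the binary mask and image values below are hypothetical):

```python
def crop_lesion(mask, img):
    """Find the highest/lowest (and leftmost/rightmost) white points of the
    binary mask and crop the image between them."""
    rows = [y for y, row in enumerate(mask) if any(row)]          # rows with white
    cols = [x for x in range(len(mask[0]))
            if any(row[x] for row in mask)]                       # columns with white
    y0, y1, x0, x1 = min(rows), max(rows), min(cols), max(cols)
    return [row[x0:x1 + 1] for row in img[y0:y1 + 1]]
```

The result is the smallest rectangle containing every white pixel of the mask, which is then passed on to feature extraction.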

3-9 Features extraction:
Features play a significant role in CADx (Computer Aided Diagnostic)
environment. The transformation of an image into its set of features is
known as feature extraction. Useful features of the image are extracted
from the image for classification purpose [66]. The features can be
calculated from the ROI characteristics such as the size, shape, density,
and smoothness of borders, etc. The feature space is very large and
complex due to the wide diversity of the normal tissues and the variety of
the abnormalities, and only some of them are significant [67]. Feature
extraction is a key phase in most pattern recognition systems. The features
should have similar values for patterns within the same class and
significantly different values across classes. Mathematical and statistical
methods, which aim at extracting specific information from the images, have
been adopted in this regard. These features are usually placed in a feature
vector, also referred to as a signature [20]. It is very difficult to
predict which feature or feature combination will achieve a better
classification rate; different feature combinations give different
performances. In addition, using excessive features may degrade the
performance of the algorithm and increase the complexity of the classifier,
while relatively few well-chosen features can keep the classification
performance robust. Therefore, we have to select an optimized subset of
features from a large number of available features [6].
Texture analysis is a major step in texture classification, image
segmentation and image shape identification tasks [15]. Texture features
are computed mathematically; they are not evident to human eyes and are not
easily extracted visually [6]. Texture analysis is important in many
applications of computer image analysis for classification, detection or
segmentation of images based on local spatial variations of intensity [68].
Normally, texture analysis methods can be grouped into four categories:
model-based, statistical-based, structural-based and transform-based
methods [61]. The statistical approach calculates different properties and
texture sizes that are comparable with the pixel sizes; such methods
include Fourier transforms, convolution filters, the co-occurrence matrix,
spatial autocorrelation, fractals, etc. [15]. One type of statistical
descriptor used in this research is the co-occurrence matrix.

3-9-1 Co-occurrence Matrix:


The Gray Level Co-occurrence Matrix (GLCM) method is a 2-D matrix
representation of image texture proposed by Haralick et al. [69]. It models
the relationships between pixels within a region by constructing a Gray
Level Co-occurrence Matrix [61]. These GLCM matrices are constructed at
distances d = 1, 2, 3, 4 and for directions θ = 0°, 45°, 90°, 135°, etc.
P(i, j) represents the probability that two pixels with a specified
separation have gray levels i and j [70]. The spatial relationship is
defined in terms of the distance d and angle θ. If the texture is coarse
and the distance d is small, the pairs of pixels at distance d should have
similar gray values. Conversely, for a fine texture, the pairs of pixels at
distance d should often be quite different, so that the values in the GLCM
are spread out relatively uniformly. Similarly, if the texture is coarser
in one direction than another, the degree of spread of the values about the
main diagonal in the GLCM should vary with the direction θ. Figure (3-18)
shows the formation of the GLCM of a grey-level image (4 levels) at
distance d = 1 and direction θ = 0°.

Figure (3-18): Co-occurrence matrix: a) image with 4 grey levels, b) GLCM for d = 1 and θ = 0° [61].

The thin box in Figure (3-18) a) represents pixel intensity 0 with pixel
intensity 1 as its neighbor in the direction θ = 0°. There are two
occurrences of such a pair of pixels; therefore, the GLCM is formed with
the value 2 in row 0, column 1. This process is repeated for the other
pairs of intensity values. As a result, the pixel matrix represented in
Figure (3-18) a) can be transformed into the GLCM shown in
Figure (3-18) b). In addition to the direction 0°, GLCMs can also be formed
for the other directions 45°, 90° and 135°, as shown in Figure (3-19) [61].
The size of the co-occurrence matrix equals the number of gray levels. When
we consider neighboring pixels, the distance between the pair of pixels
is 1; however, each different relative position between the two compared
pixels creates a different co-occurrence matrix [15].
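The counting procedure of Figure (3-18) can be sketched directly. The 4-level example image below is hypothetical (not the figure's actual pixel values), chosen so the d = 1, θ = 0° matrix is easy to verify by hand; note this sketch counts ordered pairs only, as MATLAB's graycomatrix does by default, whereas Haralick's original definition is symmetric:

```python
def glcm(img, dy, dx, levels):
    """Count co-occurring gray-level pairs (i, j) at offset (dy, dx);
    dy=0, dx=1 gives the d=1, theta=0-degree matrix."""
    m = [[0] * levels for _ in range(levels)]
    h, w = len(img), len(img[0])
    for y in range(h):
        for x in range(w):
            j, i = y + dy, x + dx
            if 0 <= j < h and 0 <= i < w:      # neighbor inside the image
                m[img[y][x]][img[j][i]] += 1   # count the ordered pair
    return m

# Hypothetical 4x4 image with 4 gray levels.
g = glcm([[0, 0, 1, 1],
          [0, 0, 1, 1],
          [0, 2, 2, 2],
          [2, 2, 3, 3]], 0, 1, 4)
```

Each row contributes three horizontal pairs, so the 4×4 image yields 12 counts in total; dividing by that total turns the counts into the probabilities P(i, j).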

Figure (3-19): Direction of GLCM generation

The pixels 1, 2, 3 and 4 represent the directions θ = 0°, 45°, 90° and
135°, respectively, at distance d = 1 from the pixel x [61]. Fifteen
features are derived from each GLCM; four values are obtained for each
feature, corresponding to the four matrices, which gives a 60-feature
matrix [36].

• Energy
• Correlation
• Contrast
• Sum of Squares
• Homogeneity
• Sum Average
• Entropy of GLCM Matrix
• Sum Entropy
• Information Correlation 2
• Difference Entropy
• 1st Order Difference Moment
• Cluster Shade
• 2nd Order Inverse Difference Moment
• Prominence
• Max. of GLCM Matrix

It is intuitive to believe that an increase in the number of features can
achieve more accurate classification results; hence, the more features the
better [71]. The graycoprops function has been used. The graycoprops
function in MATLAB calculates the specified statistics from GLCMs and
provides four texture descriptors, namely: contrast, correlation, energy
and homogeneity [9].

1. Contrast:
The contrast of a region is its whiteness relative to other breast tissue.
A region with a higher contrast than other parts of the mammographic image
is more likely to be a mass, so this feature is important [72]. Contrast
measures the amount of local variation present in an image [13]. The range
of contrast is [0, (size(GLCM, 1) − 1)²]; contrast is 0 for a constant
image [73]:

Contrast = Σ_i Σ_j (i − j)² p(i, j)    (3-14)

where p(i, j) is the normalized GLCM entry at (i, j) [73].

2. Correlation:
Correlation measures the correlation of pixel pairs over gray levels. The
range of correlation is [−1, 1]; correlation is 1 or −1 for a perfectly
positively or negatively correlated image, and NaN (Not a Number) for a
constant image [73, 74]:

Correlation = Σ_i Σ_j [(i − μ_i)(j − μ_j) p(i, j)] / (σ_i σ_j)    (3-15)

where p(i, j) is the normalized GLCM entry at (i, j), and μ and σ are the
mean and standard deviation, respectively [73].

3. Energy:
Energy is the sum of squared elements in the Grey Level Co-Occurrence
Matrix (GLCM) [13]. The energy of an image gives a measure of its
information content and can be calculated from a probability distribution
function [44]. Energy is also known as uniformity. The range of energy is
[0, 1], and energy is 1 for a constant image [73].

E = Σ_{i,j} p(i,j)^2        (3-16)

where E is the energy and p(i,j) is the normalized GLCM entry.

4. Homogeneity:
The homogeneity descriptor refers to the closeness of the distribution of
elements in the GLCM to the GLCM diagonal [13]. The range of
homogeneity is [0, 1], and homogeneity is 1 for a diagonal GLCM [73].

Homogeneity = Σ_{i,j} p(i,j) / (1 + |i - j|)        (3-17)

where p(i,j) is the normalized GLCM entry [73].


In addition, the entropy, symmetry and momentum of the co-occurrence
matrices have been used.

5. Entropy:
Entropy is a statistical measure of randomness that can be used to
characterize the texture of the input image. Entropy (h) can also be used
to describe the distribution variation in a region [73]:

h = - Σ_k Pr(k) log2 Pr(k)        (3-18)

where Pr(k) is the probability of the k-th gray level [73].
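The descriptors of equations (3-14) to (3-18) can be sketched from a single GLCM as below. This is illustrative Python rather than the MATLAB graycoprops call the thesis used; the counts are first normalized to probabilities p(i,j), and the entropy here is computed over the GLCM probabilities (an assumption, since the section also defines entropy over gray levels):

```python
import math

def descriptors(glcm):
    """Contrast, correlation, energy, homogeneity and entropy
    from one co-occurrence matrix of raw counts."""
    n = len(glcm)
    total = sum(sum(row) for row in glcm)
    p = [[v / total for v in row] for row in glcm]
    cells = [(i, j) for i in range(n) for j in range(n)]
    contrast = sum((i - j) ** 2 * p[i][j] for i, j in cells)
    energy = sum(p[i][j] ** 2 for i, j in cells)
    homogeneity = sum(p[i][j] / (1 + abs(i - j)) for i, j in cells)
    entropy = -sum(p[i][j] * math.log2(p[i][j]) for i, j in cells if p[i][j] > 0)
    # marginal means / variances for the correlation descriptor
    mu_i = sum(i * p[i][j] for i, j in cells)
    mu_j = sum(j * p[i][j] for i, j in cells)
    var_i = sum((i - mu_i) ** 2 * p[i][j] for i, j in cells)
    var_j = sum((j - mu_j) ** 2 * p[i][j] for i, j in cells)
    if var_i > 0 and var_j > 0:
        correlation = sum((i - mu_i) * (j - mu_j) * p[i][j]
                          for i, j in cells) / math.sqrt(var_i * var_j)
    else:
        correlation = float('nan')  # constant image: correlation undefined
    return {'contrast': contrast, 'correlation': correlation, 'energy': energy,
            'homogeneity': homogeneity, 'entropy': entropy}

# A constant image puts all GLCM mass in one diagonal cell:
d = descriptors([[4, 0], [0, 0]])
```

Note that the constant-image case reproduces the stated ranges: contrast 0, energy 1, homogeneity 1, correlation NaN.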

These measures provide high discrimination accuracy, which can be only
marginally increased by adding more measures to the feature vector. Thus,
using the above-mentioned ten co-occurrence measures, we obtained
80 features describing the spatial distribution in each window, each window
corresponding to a region into which the original image is divided in order
to apply the proposed image indexing scheme.

3-10 Features selection:
Feature selection is an important part of any classification scheme. The
success of a classification scheme largely depends on the features selected
and the extent of their role in the model. Only a few features may be useful
or 'optimal', while most may contain irrelevant or redundant information
that may result in degradation of the classifier's performance [75].
Feature selection approaches offer several significant advantages,
including reduced computational complexity, improved generalization
ability and robustness against outliers [76].

3-11 Classification:
Automated classification or diagnosis methods are developed to assist
radiologists in making the final assessment. They may be used to estimate
the likelihood of malignancy for a given abnormality [11]. Classification
methods can be roughly divided into two categories: non-parametric
classifiers and parametric classifiers. Non-parametric classifiers hold no
underlying assumption on the statistical distribution of the data to be
classified; this class includes the nearest neighbor, k-nearest neighbor, and
Parzen window classifiers. The maximum-likelihood method and the
Bayes method are examples of parametric classifiers [77]. The goal of
image or pattern classification is to build models that predict the class of
new images, given a training set in which each image is labeled with a
single class (supervised learning) [41].
This section discusses the classifiers used in this thesis: the K-Nearest
Neighbor (K-NN), the Decision Tree (DT) and the Probabilistic Neural
Network (PNN). First the clusters are constructed, then the system is
trained and tested. The classification process is divided into a training
phase and a testing phase. In the training phase, known data are given and
the features are calculated by the processing that precedes classification;
the data on each candidate region, already decided to be a tumor or
normal, are given separately, and the classifier is trained. In the testing
phase, unknown data are given and classification is performed using the
trained classifier. The detection accuracy, sensitivity and specificity are
measured quantitatively on the data.

3-11-1 K-Nearest Neighbor (K-NN):


K-Nearest Neighbor (KNN) classification is one of the most fundamental
and simple classification methods, useful when there is little or no prior
knowledge about the distribution of the data [78]. The K-NN approach is
based on the closest training examples in the feature space: an object is
classified according to the majority of its K nearest neighbors, so it is an
instance-based learning method. KNN requires a training set that is not too
small and a good discriminating distance, and it performs well in
multiclass problems. The parameter K is the number of nearest neighbors
considered in the classification, and there is an optimal choice of K that
yields the best performance of the classifier [3]. The proximity of
neighboring input observations (a) in the training data set and their
corresponding output values (y) are used to predict the class of the objects
in the validation data set. The two-input-variable case is considered first,
since it is easy to represent in two-dimensional space. The Euclidean
distance between two input vectors a1 and a2 is used in KNN [78]:

a1 = (a11, a12)        (3-19)

a2 = (a21, a22)        (3-20)

The distance between these two vectors is computed as the length of the
difference vector a1 - a2, denoted by d(a1, a2):

d(a1, a2) = sqrt((a11 - a21)^2 + (a12 - a22)^2)        (3-21)

More generally, the distance between two p-dimensional vectors
a1 = (a11, ..., a1p) and a2 = (a21, ..., a2p) is calculated as

d(a1, a2) = sqrt(Σ_{k=1}^{p} (a1k - a2k)^2)        (3-22)

The minimum distance between the vectors gives the closest neighbor, so
the test object is predicted to belong to the same class as that
neighbor [78].
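The K-NN rule above can be sketched briefly. This is an illustrative Python version, not the thesis's MATLAB implementation; the toy feature vectors and labels are assumptions:

```python
import math
from collections import Counter

def euclid(a, b):
    # Euclidean distance, generalised to p dimensions as in the equations above
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_predict(train, x, k):
    """Majority vote among the k training points closest to x.
    train is a list of (feature_vector, label) pairs."""
    neighbours = sorted(train, key=lambda t: euclid(t[0], x))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

train = [((0.0, 0.0), 'normal'), ((0.1, 0.2), 'normal'),
         ((1.0, 1.0), 'malignant'), ((0.9, 1.1), 'malignant')]
label = knn_predict(train, (0.2, 0.1), k=3)
```

With k = 3 the two nearest 'normal' points outvote the single 'malignant' neighbour, illustrating the majority rule.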

3-11-2 Decision Tree Classifier (DT):


A decision tree classifier is a data mining induction technique that
recursively partitions a data set of records, using a depth-first greedy
approach or a breadth-first approach, until all the data items belong to a
particular class [79]. Decision trees are powerful classification methods
that can often be understood easily. To classify an example, the tree is
traversed from the root downward; every node in a decision tree is labeled
with an attribute [80]. In classification, a given set of data records is
divided into training and test data sets. The training data set is used to
build the classification model, while the test data set is used to validate it.
The decision tree structure is made of root, internal and leaf nodes, and
this structure is used for classifying unknown data records. At each
internal node of the tree, a decision split is made using impurity measures.
Classification techniques operate in two phases: tree building and
tree pruning. Tree building is done in a top-down manner; in this phase the
tree is repeatedly partitioned until all the data items belong to the same
class label. It is computationally intensive, as the training data set is
analyzed repeatedly. Tree pruning is performed in a bottom-up manner; it
is used to improve the prediction and classification accuracy of the
algorithm by minimizing over-fitting (noise or excessive detail in the
training data set), which would otherwise result in misclassification
errors [79]. A major advantage of decision trees is that they often produce
very simple structures that use only a few parameters to classify the
objects; another major advantage is that they are interpretable [81].
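The impurity-based split made at each internal node can be illustrated with a one-feature stump. Gini impurity is used here as the impurity measure; this is an assumption for illustration, since the thesis does not name the measure it used, and the code is a Python sketch rather than the author's implementation:

```python
def gini(labels):
    # Gini impurity: 1 - sum_c p_c^2 over the class proportions
    n = len(labels)
    return 1 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(values, labels):
    """Try each midpoint between sorted feature values and keep the
    threshold with the lowest weighted Gini impurity of the two children."""
    pairs = sorted(zip(values, labels))
    best = (None, float('inf'))
    for i in range(1, len(pairs)):
        t = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [l for v, l in pairs if v <= t]
        right = [l for v, l in pairs if v > t]
        if not left or not right:  # skip degenerate splits on tied values
            continue
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(pairs)
        if score < best[1]:
            best = (t, score)
    return best

threshold, impurity = best_split([0.1, 0.2, 0.8, 0.9], ['n', 'n', 'm', 'm'])
```

Tree building repeats this search recursively on each child node until the leaves are pure or a stopping rule fires.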

3-11-3 Probabilistic Neural Network (PNN):


The Probabilistic Neural Network provides a general solution to pattern
classification problems by following an approach developed in statistics,
called Bayesian classification [82]. Bayes theory takes into account the
relative likelihood of events and uses a priori information to improve
prediction [12]. The network paradigm also uses Parzen estimators, which
are developed to construct the probability density function required by
Bayes theory [83]. A key advantage of the PNN is that training is easy and
instantaneous; its distinguishing feature is that the computational load of
the training phase is transferred to the evaluation phase. The PNN
architecture consists of an input layer, a pattern layer, a summation layer,
and an output layer [76]. The input layer has as many elements as there
are separable parameters needed to describe the objects to be
classified [12]. Figure (3-20) shows the architecture of the PNN. The
input layer simply distributes the input to the neurons in the pattern layer
and does not perform any computation [76]. Each neuron of the pattern
layer receives a pattern x from the input layer and computes its output as
given by the equation below:

φ_i(x) = (1 / ((2π)^{D/2} σ^D)) exp[ -||x - x_i||^2 / (2σ^2) ]        (3-23)

where σ denotes the smoothing parameter, x_i denotes the neuron's
training vector and D denotes the dimension of the pattern vector x.
The summation layer neurons compute the maximum likelihood of pattern
x being classified into class Ci by summing and averaging the outputs of
all pattern neurons that belong to the same class, using the equation given
below:

p_i(x) = (1 / ((2π)^{D/2} σ^D N_i)) Σ_{j=1}^{N_i} exp[ -||x - x_ij||^2 / (2σ^2) ]        (3-24)

where N_i is the total number of samples in class Ci. The decision layer
classifies the pattern x in accordance with the Bayes decision rule, based
on the outputs of all the summation layer neurons:

C(x) = arg max_i { p_i(x) },   i = 1, 2, …, m        (3-25)

Here C(x) denotes the estimated class of the pattern x and m is the total
number of classes in the training samples [76].
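The pattern, summation and decision layers can be condensed into a few lines. This is a minimal Python sketch of the PNN decision rule, not the thesis's MATLAB code; the Gaussian normalizing constant is constant across classes and is omitted since it cancels in the arg max, and the toy training data are assumptions:

```python
import math

def pnn_classify(train, x, sigma):
    """Average a Gaussian (Parzen) kernel over each class's training
    patterns (summation layer), then pick the class with the largest
    average (Bayes decision rule). train maps class -> list of vectors."""
    def kernel(a, b):
        sq = sum((u - v) ** 2 for u, v in zip(a, b))
        return math.exp(-sq / (2 * sigma ** 2))
    scores = {c: sum(kernel(x, xi) for xi in pts) / len(pts)
              for c, pts in train.items()}
    return max(scores, key=scores.get)

train = {'benign': [(0.0, 0.0), (0.2, 0.1)],
         'malignant': [(1.0, 1.0), (1.1, 0.9)]}
cls = pnn_classify(train, (0.1, 0.1), sigma=0.5)
```

Note how "training" amounts to storing the patterns; all of the computation happens at evaluation time, as the section describes.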

Figure (3-20): Probabilistic neural network architecture [80].

3-12 Performance Evaluation Criteria for Classifier:


The performance of CAD systems is variable and depends on the organ,
disease, type of image finding, and so on [7]. The efficiency of a CAD
system can be characterized from four perspectives:
 TP: True Positive, meaning a region segmented as a mass that proved
to be a mass.
 FP: False Positive, meaning a region segmented as a mass that proved
not to be a mass.
 FN: False Negative, meaning a region segmented as not a mass that
proved to be a mass.
 TN: True Negative, meaning a region segmented as not a mass that
proved not to be a mass [84].
A false positive result may put the patient in a delicate and fragile
position, but with the help of complementary exams this result can be
excluded. A false negative, however, is a more worrying situation, since
the person has the lesion but the algorithm does not detect it.
The evaluation of mammography images is performed by expert
radiologists, by histological examination in the pathological cases, and by
three-year follow-ups for the negative results.
The performance of a CAD system can be limited to the detection of
obvious cancers with a moderate sensitivity and a relatively good
specificity. These metrics are based on the true/false positive/negative
counts [3]. Given the definitions above, the performance of various
computer-aided diagnosis (CADx) schemes can be evaluated by
calculating the True Positive Fraction (TPF) [85].
The performance of a classifier is summarized by measures such as the
classification rate (CR, accuracy), sensitivity and specificity, where TP,
FP, TN and FN are the numbers of true positives, false positives, true
negatives and false negatives, respectively.

A-Accuracy (CR):

CR = (TP + TN) / (TP + TN + FP + FN)        (3-26)

where CR is the accuracy (classification rate).

B-Sensitivity:
It is the statistical measure of how well a binary classifier correctly
identifies the positive cases [44].

Sensitivity = TP / (TP + FN)        (3-27)

High values of sensitivity imply minimal false negative detections [3].

C-Specificity:
It is also a statistical measure of how well a binary classifier correctly
identifies the negative cases.

Specificity = TN / (TN + FP)        (3-28)

Specificity can also be written as Specificity = 1 - false alarm rate;
therefore, the false alarm rate can be calculated as [44]:

False alarm rate = FP / (FP + TN)

High values of specificity imply minimal false positive detections [3].
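Equations (3-26) to (3-28) and the false alarm rate can be sketched together. This is an illustrative Python helper with made-up counts; the function name and the example numbers are assumptions, not results from the thesis:

```python
def evaluate(tp, fp, fn, tn):
    """Accuracy, sensitivity, specificity and false-alarm rate
    from the four outcome counts."""
    return {
        'accuracy': (tp + tn) / (tp + tn + fp + fn),
        'sensitivity': tp / (tp + fn),
        'specificity': tn / (tn + fp),
        'false_alarm_rate': fp / (fp + tn),   # = 1 - specificity
    }

m = evaluate(tp=45, fp=5, fn=3, tn=47)
```

The complementary relation between specificity and the false alarm rate holds by construction, since both share the denominator TN + FP.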


Conversely, CAD systems can be oriented toward high sensitivity without
regard for the resulting decrease in specificity; the developers of CAD
systems can choose whether the system will favor high specificity or high
sensitivity. At present, CAD systems have a detection sensitivity of around
88 to 92% in mammography. However, despite the software
improvements, masses still have the highest rate of false positives [7].

Chapter Four
RESULTS & DISCUSSION

4-1 Pectoral muscle removal:


For removing the pectoral muscle on the left and right sides, the first step
is to remove the high-intensity artifacts present in the image by applying
connected component labeling. The biggest component, which contains
the breast profile including the pectoral muscle, is retained and the
remaining components are removed, as shown in Figure (4-1) for the left
side and Figure (4-2) for the right side.

Figure (4-1) Removed high intensity artifacts in the left side.


a) Original Image b) Connected component labeled image
c) Artifacts removed image.

Figure (4-2) Removed high intensity artifacts in the right side.
a) Original image b) Connected component labeled image
c) Artifacts removed image

The removal of the pectoral muscle for the left and right breast is
described below. First, a 1024 x 1024 artifact-free MLO mammogram is
taken as input, as shown in Figure (4-3) a for the left breast and
Figure (4-4) a for the right breast. To make the processing easy, the right
MLO mammogram is flipped before removing the pectoral region on the
left side and the left MLO mammogram is flipped before removing the
pectoral region on the right side. Next, the MLO mammogram is
partitioned into four quadrants of 512 x 512 each. The top left and top
right quadrants, which contain the pectoral muscle, are the regions of
interest, and the pixels of the other three quadrants are changed to black,
as shown in Figure (4-3) b and Figure (4-3) c for the left breast and
Figure (4-4) b and Figure (4-4) c for the right breast. The top left quadrant
given in Figure (4-3) c is processed further to detect the pectoral muscle,
as is the top right quadrant given in Figure (4-4) c.

Figure (4-3) Image division in the left side


a) 1024 x 1024 Input image b) Four quadrants
c) Left top quadrant (ROI) and other three quadrants changed to black.

Figure (4-4) Image division in the right side
a) 1024 x 1024 Input image b) Four quadrants
c) Right top quadrant (ROI) and other three quadrants changed to black

The steps involved in the (left and right) pectoral muscle removal are as
follows:
1- The upper left and right quadrants are divided into 4 parts of size
128 × 512, as shown in Figure (4-5) a for the left and Figure (4-6) a for
the right.
2- The pixel at location (128, 512) is taken as point A for both the left and
right breast, and the non-zero pixel in the last row (the 512th row) is taken
as point B, as shown in Figure (4-5) b for the left and Figure (4-6) b for
the right breast.
3- The region above the line AB, which contains the pectoral region
(Figure (4-5) c for the left breast and Figure (4-6) c for the right breast), is
then thresholded with the value 176 to obtain the binary image shown in
Figure (4-5) d for the left and Figure (4-6) d for the right. The threshold
value of 176 was determined experimentally, since the pectoral region
appears as a high-intensity region in the mammogram.

Figure (4-5) Pectoral muscle segmentation.
a) Top left quadrant divided into 4 parts. b) Points A and B identified.
c) Region above the line AB. d) Segmented pectoral muscle region.

Figure (4-6) Pectoral muscle segmentation.


a) Top right quadrant divided into 4 parts. b) Points A and B identified.
c) Region above the line AB. d) Segmented pectoral muscle region.

Finally, the binary image is compared with the original image and the
pectoral muscle is removed on both the left and right sides by setting the
original image intensity to zero at the locations where the intensity is 255
in the binary image. The results are shown in Figure (4-7) a, Figure (4-7) b
and Figure (4-7) c for the left breast and Figure (4-8) a, Figure (4-8) b and
Figure (4-8) c for the right breast.

Figure (4-7) Pectoral muscle removal in the left side.


a) Original image. b) Segmented pectoral muscle region.
c) Pectoral muscle eliminated breast region.

Figure (4-8) Pectoral muscle removal in the right side.
a) Original image. b) Segmented pectoral muscle region.
c) Pectoral muscle eliminated breast region.
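The final masking step, zeroing the original image wherever the thresholded binary image is 255, can be sketched as follows. This is an illustrative Python version operating on lists of rows; the thesis implementation was in MATLAB, and the tiny image and mask here are assumptions:

```python
def remove_pectoral(image, mask):
    """Zero out every pixel of the original image where the binary
    pectoral-muscle mask is 255, leaving the rest untouched."""
    return [[0 if m == 255 else v for v, m in zip(img_row, mask_row)]
            for img_row, mask_row in zip(image, mask)]

# mask would come from thresholding the region above line AB at 176
img  = [[200, 180, 90],
        [190, 100, 80]]
mask = [[255, 255, 0],
        [255, 0, 0]]
out = remove_pectoral(img, mask)
```

Only the masked high-intensity corner is suppressed; the rest of the breast region passes through unchanged.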

4-2 Image Enhancement:


The aim of preprocessing is to improve the image data by suppressing
undesired distortions or enhancing the image features relevant to the
subsequent processing and analysis tasks. All the methods mentioned in
chapter three were applied to all images to enhance them. The optimum
method for enhancing all images is adaptive histogram equalization,
according to the techniques used to evaluate the enhancement of
mammographic images, as shown in Table (4-1), Figure (4-9) and
Figure (4-10).
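The PSNR and MSE measures used in that evaluation can be sketched as follows, assuming the standard definitions with an 8-bit peak value of 255 (the thesis does not print the formulas here, so this is an assumption); Python is used for illustration rather than MATLAB:

```python
import math

def mse_psnr(original, enhanced, peak=255):
    """Mean squared error between two equally sized grayscale images
    (lists of rows) and PSNR = 10*log10(peak^2 / MSE) in dB."""
    n, sq = 0, 0
    for row_o, row_e in zip(original, enhanced):
        for a, b in zip(row_o, row_e):
            sq += (a - b) ** 2
            n += 1
    mse = sq / n
    psnr = float('inf') if mse == 0 else 10 * math.log10(peak ** 2 / mse)
    return mse, psnr

mse, psnr = mse_psnr([[10, 20], [30, 40]], [[12, 20], [30, 44]])
```

A lower MSE gives a higher PSNR, which is why the two charts rank the enhancement techniques in opposite directions.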

Table (4-1): Evaluation of the enhancement of mammographic images.

Figure (4-9) PSNR measurement for the enhancement techniques
(adaptive histogram equalization, smoothing filter, histogram
equalization, exponential, contrast stretching, averaging filter, median
filter and Power-Law transform).

Figure (4-10) MSE measurement for the enhancement techniques.

In adaptive histogram equalization, based on a local neighborhood, a
different gray-scale transform is computed at each location in the image,
and the pixel value at that location is mapped accordingly. The local
window is selected as a square tile centered at the pixel to be processed.
Regions occupying different gray-scale ranges are thereby enhanced
locally using histogram equalization [86].
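The per-pixel local mapping described above can be sketched in its naive form. This is an illustrative Python version: each pixel is mapped through the CDF of its own square window. Practical AHE implementations (such as MATLAB's adapthisteq) use tiled histograms with interpolation and contrast limiting, which this sketch omits:

```python
def local_equalize(image, radius, levels=256):
    """For each pixel, build the histogram of a (2*radius+1)-square
    window clipped at the borders and map the pixel through that
    window's own cumulative distribution."""
    rows, cols = len(image), len(image[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            win = [image[i][j]
                   for i in range(max(0, r - radius), min(rows, r + radius + 1))
                   for j in range(max(0, c - radius), min(cols, c + radius + 1))]
            rank = sum(1 for v in win if v <= image[r][c])  # local CDF value
            out[r][c] = round((levels - 1) * rank / len(win))
    return out

flat = local_equalize([[5, 5], [5, 5]], radius=1)
```

Because each window supplies its own transform, regions occupying different gray-scale ranges are stretched independently, which is the effect the section describes.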

4-3 Image segmentation:
Image segmentation, a process of pixel classification, aims to extract or
segment objects or regions from the background. It is a critical
preprocessing step to the success of image recognition, image
compression, image visualization, and image retrieval. The segmentation
can be applied in two stages: the first stage is pectoral muscle removal
before the enhancement step, and the second stage is done after
enhancement using morphological operations. The process of extracting
only the brighter region while neglecting the darker region is called
Region of Interest extraction. Mathematical morphology is a powerful
tool that can be used to extract features and components from an image.
Segmentation refers to the process in which an image is subdivided into
constituent regions or objects. These objects can be further processed or
analyzed for the extraction of quantitative information. Multiple image
processing steps are often required in the process of segmentation. We
often combine segmentation with various morphological processing and
filtering techniques described before to achieve accurate and robust
segmentation of an image.

4-4 Features extraction:


Features are extracted from the original and the enhanced ROIs. These
texture features are extracted from the 102 mammograms through
MATLAB programming, as shown in Table (4-2), Table (4-3) and
Table (4-4).

Table (4-2): Texture feature is extracted from 34 normal patients

Table (4-3): Texture feature is extracted from 34 benign patients

Table (4-4) Texture feature is extracted from 34 malignant patients

The homogeneity, energy, entropy, contrast, symmetry, correlation,
momentum1, momentum2, momentum3 and momentum4 values from
Tables (4-2), (4-3) and (4-4) are used to find the mean and standard
deviation, as shown in Table (4-5). The graphic results are shown in
Figure (4-11) to Figure (4-20).

Table (4-5): Statistical Analysis

Fig (4-11) Results for Homogeneity
(a), (b) & (c) Best three results at angles 135°, 225° and 315°, respectively.
(d) & (e) Worst two results at angles 0° and 180°, respectively.

Fig (4-12) Results for Energy feature
(a), (b) & (c) Best three results at angles 135°, 225° and 315°, respectively.
(d) & (e) Worst two results at angles 0° and 180°, respectively.

Fig (4-13) Results for Entropy
(a), (b) & (c) Best three results at angles 135°, 225° and 315°, respectively.
(d) & (e) Worst two results at angles 0° and 180°, respectively.

Fig (4-14) Results for Contrast
(a), (b) & (c) Best three results at angles 135°, 225° and 315°, respectively.
(d) & (e) Worst two results at angles 0° and 180°, respectively.

Fig (4-15) Results for Symmetry
(a), (b) & (c) Best three results at angles 135°, 225° and 315°, respectively.
(d) & (e) Worst two results at angles 0° and 180°, respectively.

Fig (4-16) Results for Correlation
(a), (b) & (c) Best three results at angles 135°, 225° and 315°, respectively.
(d) & (e) Worst two results at angles 0° and 180°, respectively.

Fig (4-17) Results for Momentum 1
(a), (b) & (c) Best three results at angles 135°, 225° and 315°, respectively.
(d) & (e) Worst two results at angles 0° and 180°, respectively.

Fig (4-18) Results for Momentum 2
(a), (b) & (c) Best three results at angles 135°, 225° and 315°, respectively.
(d) & (e) Worst two results at angles 0° and 180°, respectively.

Fig (4-19) Results for Momentum 3
(a), (b) & (c) Best three results at angles 135°, 225° and 315°, respectively.
(d) & (e) Worst two results at angles 0° and 180°, respectively.

Fig (4-20) Results for Momentum 4
(a), (b) & (c) Best three results at angles 135°, 225° and 315°, respectively.
(d) & (e) Worst two results at angles 0° and 180°, respectively.

4-5 Features selection:
After the extraction of the previously mentioned features, the 2-D feature
matrices have been transformed into 1-D vectors to form the three
clusters: the normal cluster, the benign cluster and the malignant cluster.
This is done by putting the columns one after the other in one vector. We
want to know whether a certain feature can differentiate between the
normal-benign, normal-malignant and benign-malignant pairs for each
angle used in this research. This is done through the t-test, which is a
built-in function in MATLAB; it takes the normal-benign,
normal-malignant and benign-malignant pairs for each feature and angle.
A significance level of 0.05 has been used. The null hypothesis is that the
two vectors come from the same distribution; the alternative hypothesis is
that the two vectors are not from the same distribution, meaning the
feature has the ability to discriminate between normal, benign and
malignant breast tissues. The significance level is related to the degree of
certainty required in order to reject the null hypothesis in favor of the
alternative. The t-test computes the p-value, which is the probability of
observing the given sample result under the assumption that the null
hypothesis is true. If the p-value is less than α, the null hypothesis is
rejected; for example, if α = 0.05 and the p-value is 0.03, the null
hypothesis is rejected. The converse is not true: if the p-value is greater
than α, there is insufficient evidence to reject the null hypothesis. The
three vectors that enter the t-test are the normal, benign and malignant
vectors of each feature and angle; each vector of a certain feature is
formed by concatenating the vectors of that feature from all images. The
t-test was first applied to the 10 candidate features for the eight angles.
The test indicated that only 5 features can discriminate between the three
clusters (Energy, Entropy, Symmetry, Momentum1 and Momentum3); all
the discriminating features have p-values very close to zero.
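The two-sample comparison behind this selection can be sketched with Welch's t statistic. This is an illustrative Python version, not the MATLAB built-in the thesis used; converting t to a p-value additionally requires the t-distribution CDF, which is omitted here, and the sample values are invented:

```python
import math

def welch_t(a, b):
    """Welch's two-sample t statistic; a large |t| (hence a small
    p-value) argues against the null hypothesis that both samples
    come from the same distribution."""
    def mean(x):
        return sum(x) / len(x)
    def var(x):  # unbiased sample variance
        m = mean(x)
        return sum((v - m) ** 2 for v in x) / (len(x) - 1)
    return (mean(a) - mean(b)) / math.sqrt(var(a) / len(a) + var(b) / len(b))

# A feature vector that separates two clusters produces a large |t|:
t = welch_t([1.0, 1.1, 0.9, 1.05], [2.0, 2.1, 1.9, 2.05])
```

Features whose cluster pairs give p-values below α = 0.05 for an angle would then be kept, mirroring the selection procedure above.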
Table (4-6) Feature selection for malignant-benign

Table (4-7) Feature selection for normal-benign

Table (4-8) Feature selection for normal-malignant

Figure (4-21) Benign with malignant p-value

Figure (4-22) Benign with normal p-value

Figure (4-23) Normal-malignant p-value

4-6 Creating test and training data:

In order for MATLAB to perform the analysis, the data is arranged in an
alternating order, and a training file is created for familiarizing the
classifier with the data and its classes. A total of three training data sets
are created: training 1, training 2 and training 3. The first 16 records from
the alternating classes, consisting of normal and infected patients in
sequence, are selected as training 1. Class 1 is assigned to the normal
breast while class 2 is for the infected breast, as shown in Table (4-9).

Table (4-9): Training 1 data

Training 2 is created in a similar manner, but the data is selected from the
middle portion, consisting of 16 of the normal and infected breast records,
while training 3 consists of the last 16 records, as shown in Table (4-10)
and Table (4-11) for training 2 and training 3 respectively:

Table (4-10): Training2 data

Table (4-11): Training 3 data

Testing files are created and applied to the classifier, which is evaluated
by its accuracy and sensitivity on the normal and infected breast classes.
A total of three testing data sets are created: testing 1, testing 2 and
testing 3.
Testing 1 is created from the last 34 records of the normal and infected
classes respectively, as shown in Table (4-12).

Table (4-12): Testing 1 data

Testing 2 is created in a similar manner, but the data is selected from the
first and last 32 of the normal and infected records, while testing 3
consists of the first 32 records, as shown in Table (4-13) and Table (4-14)
for testing 2 and testing 3 respectively.

Table (4-13): Testing 2 data

Table (4-14): Testing 3 data

4-7 Classification:
The remaining training and testing files are converted to .mat file format
in the same manner and named training1, training2, training3, testing1,
testing2 and testing3.
The training and testing files are applied to the following classifiers:
K-Nearest Neighbor (K-NN), Decision Tree (DT) and Probabilistic Neural
Network (PNN). The results are shown in Table (4-15) and Figure (4-24)
to Figure (4-26). The selected features (Energy, Entropy, Symmetry,
Momentum1 and Momentum3) were input into the classifiers, which
produced results with high accuracy, sensitivity, specificity and positive
predictive value for the best angles and low values for the poor angles
across the three different classifiers. The true positive, false positive, false
negative and true negative results were obtained as explained in
section (3-12).

Table (4-15): Classification results from classifier

Figure (4-24) KNN for eight angles.

Figure (4-25) DTC for eight angles.

Figure (4-26) PNN for eight angles.

4-8 Discussion:
In this work a method for breast density classification was proposed that
consists of using features, segmenting and removing the pectoral muscle,
and enhancing the images, in order to show how these steps influence the
classification results. The main goal is to create a complete method of
mammography image analysis. The pectoral muscle has been removed
from the images because in medio-lateral oblique (MLO) mammograms of
the left and right side it is a region of higher intensity than the
surrounding tissue, which makes it one of the main problems.
In (2013) Vaidehi and Subashini used automatic identification and
elimination of the pectoral muscle in digital mammograms on the left
side [19], but in this work the pectoral muscle on both the right and left
sides has been removed.
In (2011) Jaume Sastre Tomàs used segmentation of the breast region
with pectoral muscle suppression and automatic breast density
classification, removing the pectoral muscle in the images with the Hough
transform [41], while in this work the MLO mammogram image is
partitioned into four 512 x 512 quadrants for each of the left and right
sides.
The image enhancement techniques implemented were Histogram
Equalization (HE), Adaptive Histogram Equalization (AHE),
piecewise-linear transformation functions (contrast stretching), power-law
transformations, the smoothing filter, the averaging filter, the median
filter and the exponential transform. These enhancement techniques were
evaluated with the PSNR and MSE parameters. From this analysis it is
concluded that the majority of the enhancement algorithms increase the
contrast improvement index but also increase the noise level of the image;
the Adaptive Histogram Equalization (AHE) method had, in general, the
best enhancement performance.

Shefali Gupta and Yadwinder Kaur (2014) reviewed different local and
global contrast enhancement techniques for digital images, using contrast
enhancement, histogram equalization, BBHE and adaptive histogram
equalization, and evaluated them with the PSNR, MSE and MEAN
parameters [21], while in this work PSNR and MSE have been used to
evaluate the enhancement methods.
Komal Vij and Yaduvir Singh (2009) studied enhancement of images
using histogram processing techniques, testing histogram equalization
(HE), adaptive histogram equalization and Brightness Preserving
Bi-Histogram Equalization (BBHE); BBHE had the lowest MSE and
highest PSNR and hence gave the best results [48]. In this work the
morphological operation in the segmentation stage was applied.
In (2013) Prakash Bethapudi et al. proposed detection of malignancy in
digital mammograms from the segmented breast region using
morphological techniques, based on the following procedure: removing
the noise and the background information, applying thresholding and
retrieving the largest region of interest (ROI), performing the
morphological operations, extracting the ROI, and identifying the
malignant mass from the screened images of the breast [63].
In this work the Gray Level Co-occurrence Matrix was used for feature
extraction, with eight angles per feature, and feature selection was based
on the statistical Student's t-test; the features extracted from normal,
benign and malignant breast mammograms have significantly different
p-values, which rejects the null hypothesis for each angle, as shown in
Table (4-6) to Table (4-8). The selected features are Energy, Entropy,
Symmetry, Momentum1 and Momentum3. These features were then
entered into the classifiers, and results with several values of sensitivity
and specificity were obtained from the multiple classifiers; the highest
accuracy, sensitivity, specificity and positive predictive value were
produced by the Probabilistic Neural Network classifier. From
Table (4-15), the selected features obtain 100% sensitivity, 100% positive
predictive value, 100% accuracy and 100% specificity for angles
(135°, 315°), and 91.7% sensitivity, 100% positive predictive value,
85.9% accuracy and 100% specificity for angles (0°, 180°) with the
Probabilistic Neural Network. The K-nearest neighbor obtains 95.8%
sensitivity, 77.5% positive predictive value, 83.3% accuracy and 70.8%
specificity for angle (225°), and 79.2% sensitivity, 65.7% positive
predictive value, 68.8% accuracy and 58.3% specificity for angles
(45°, 90°, 270°). The decision tree classifier obtains 35.4% accuracy,
25.3% sensitivity, 28.3% positive predictive value and 45.9% specificity
for angles (0°, 180°), and 16.7% sensitivity, 19% positive predictive
value, 25% accuracy and 33.3% specificity for angles (135°, 315°), which
are relatively poor compared to the others. Higher values of both
sensitivity and specificity indicate better performance of the system. It is
interesting to note that using the combined features produces relatively
good classification results.
In (2012) Manavalan Radhakrishnan and Thangavel Kuttiannan proposed
a comparative analysis of feature extraction methods for the classification
of prostate cancer from TRUS medical images, using the histogram, the
Gray Level Co-occurrence Matrix (GLCM) and the Gray-Level
Run-Length Matrix (GRLM), with a Support Vector Machine (SVM)
adopted to classify the extracted features into benign or malignant [61].
In this work only the Gray Level Co-occurrence Matrix (GLCM) has been
used, classified using PNN, KNN and DT.
In 2013, Khamsa Djaroudib et al. proposed a textural approach for mass abnormality segmentation in mammographic images. Most of the works in this area have used the Gray Level Co-occurrence Matrix (GLCM) as texture features with a region-based approach. Their work also contributed to clarifying the cases for three types of tissue: dense, fatty, and glandular [84].
Nithya and Santhi (2011) presented a comparative study on feature extraction using three different methods: the intensity histogram, the GLCM (Gray Level Co-occurrence Matrix, using Contrast, Cluster Shade, Energy, and Sum of Squares Variance), and intensity-based features [70].
This confirms that the features extracted from the mammogram are useful in the detection and diagnosis of breast cancer.

Chapter Five

CONCLUSION AND FUTURE WORK

5.1 Conclusion

From the experimental results one can conclude the following:


1- Adaptive histogram equalization (AHE) is the best method for
   mammogram image enhancement, giving the best performance as
   measured by the estimated PSNR and MSE values.
2- For the GLCM features, the best angles are 135° and 315° and the
   poorest are 0° and 180° for the Probabilistic Neural Network (PNN);
   for the K-Nearest Neighbor (K-NN) the best angle is 225°, and for
   the Decision Tree (DT) the best angles are 0°, 45°, 90°, 180°, and 270°.
3- Using feature selection techniques, the best five features (Energy,
   Entropy, Symmetry, Momentum1, and Momentum3) have been selected.
4- The best classification method is the Probabilistic Neural Network
   (PNN), which achieves the highest diagnostic accuracy of 100%.
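Conclusion 1 ranks the enhancement methods by MSE and PSNR. As a brief sketch of those two measures (Python for illustration rather than the MATLAB environment used in this work, and assuming 8-bit images so the peak grey level is 255):

```python
import numpy as np

# MSE and PSNR between a reference image and a processed image.
# Assumes 8-bit grey levels, so the peak signal value is 255.
def mse(ref, img):
    return float(np.mean((ref.astype(float) - img.astype(float)) ** 2))

def psnr(ref, img, peak=255.0):
    m = mse(ref, img)
    return float('inf') if m == 0 else 10.0 * np.log10(peak ** 2 / m)

# Toy example: one pixel of an 8x8 image changed by 10 grey levels.
ref = np.full((8, 8), 100, dtype=np.uint8)
out = ref.copy()
out[0, 0] = 110
print(mse(ref, out), round(psnr(ref, out), 2))  # -> 1.5625 46.19
```

A lower MSE, and hence a higher PSNR, indicates that the enhanced image stays closer to the reference, which is the basis on which AHE was judged the best enhancement method.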

5.2 Future Work

1- Using mammograms from Erbil hospitals for the (CAD) system.

2- Features describing the shape and area of the lesion should be
   incorporated in future work.
3- Using fuzzy logic to guide the feature extraction stage.
4- Trying other classifiers such as the Support Vector Machine (SVM)
   and the Self-Organizing Map (SOM), and other methods based on the
   wavelet transform, such as the wavelet neural network.

References

[1] Subramaniam, E., Liung, T. K., Mashor, M. Y., & Isa, N. A. M. "Breast Cancer Diagnosis Systems: A Review." International Journal of the Computer, the Internet and Management 14(2): 24-35, 2006.

[2] Elmoufidi, A., El Fahssi, K., Jai-Andaloussi, S., & Sekkaki, A. "Detection of Regions of Interest in Mammograms by Using Local Binary Pattern and Dynamic K-Means Algorithm." International Journal of Image and Video Processing: Theory and Application 1(1): 2336-0992, 2014.

[3] Da Cruz, C. F. "Automatic Analysis of Mammography Images: Enhancement and Segmentation Techniques." M.Sc. Thesis, Engineering Faculty, Porto University, 2011.

[4] Kadhim, D. A. "Development Algorithm - Computer Program of Digital Mammograms Segmentation for Detection of Masses Breast using Marker-Controlled Watershed in MATLAB Environment." College of Education for Pure Science, Kerbala University, 2012.

[5] Shahid, M. A., Rasool, A., Sabir, R., & Awan, M. S. "Dosimetric Evaluation of Mean Glandular Dose for Mammography in Pakistani Women." Peak Journal of Medicine and Medical Science 1(4): 32-38, 2013.
[6] Elfarra, B. K. "Mammogram Computer-Aided Diagnosis." M.Sc. Thesis, Computer Engineering Department, Faculty of Engineering, Deanery of Higher Studies, Islamic University, Gaza, Palestine, 2012.

[7] Dos Santos Teixeira, R. F. "Automatic Analysis of Mammography Images: Classification of Breast Density." M.Sc. Thesis, University of Porto, 2013.

[8] Masala, G. L. "Computer Aided Detection on Mammography." World Academy of Science, Engineering and Technology 15(1): 1-6, 2006.

[9] Nagi, J. "The Application of Image Processing and Machine Learning Techniques for Detection and Classification of Cancerous Tissues in Digital Mammograms." M.Sc. Thesis, University of Malaya, Kuala Lumpur, 2011.

[10] Ancona, F., Colla, A. M., Rovetta, S., & Zunino, R. "Implementing Probabilistic Neural Networks." Neural Computing & Applications 5(3): 152-159, 1997.

[11] Shinde, M. "Computer Aided Diagnosis in Digital Mammography: Classification of Mass and Normal Tissue." PhD. Thesis, University of South Florida, 2003.

[12] Mini, M. "Classification of Mammograms and DWT Based Detection of Microcalcification." PhD. Thesis, Cochin University of Science and Technology, 2014.
[13] Maitra, I. K., Nag, S., & Bandyopadhyay, S. K. "Identification of Abnormal Masses in Digital Mammography Images." International Conference on Ubiquitous Computing and Multimedia Applications (UCMA), IEEE, 2011.

[14] Gökbay, İ. Z. "Machine Learning Techniques in Breast Cancer Detection." M.Sc. Thesis, 2007.

[15] Srinivasan, G. and G. Shobha. "Statistical Texture Analysis." Proceedings of World Academy of Science, Engineering and Technology, 2008.

[16] Vaidya, P. "Artificial Intelligence Approach to Breast Cancer Classification." M.Sc. Thesis, The University of Akron, 2009.

[17] Singh, T., Nagraja, M., Rao, D. S., & Bommanalli, S. "Enhancing Image Contrast of Mammogram & Equalization of Histograms." IJEST 3(1), 2011.

[18] Dos Santos Teixeira, R. F. "Computer Analysis of Mammography Images to Aid Diagnosis." M.Sc. Thesis in Biomedical Engineering, Faculdade de Engenharia da Universidade do Porto, 2012.

[19] Vaidehi, K. and T. Subashini. "Automatic Identification and Elimination of Pectoral Muscle in Digital Mammograms." Int J Comput Appl 75(14): 15-18, 2013.

[20] Nasseer M. Basheer, M. H. M. "Classification of Breast Masses in Digital Mammograms Using Support Vector Machines." International Journal of Advanced Research in Computer Science and Software Engineering 3(10), 2013.

[21] Gupta, S. and Y. Kaur. "Review of Different Local and Global Contrast Enhancement Techniques for a Digital Image." International Journal of Computer Applications 100(18): 18-23, 2014.

[22] Makandar, A. and B. Halalli. "Breast Cancer Image Enhancement using Median Filter and CLAHE." International Journal of Scientific & Engineering Research 6(4), 2015.

[23] Abbas, A. H., Kareem, A. A., & Kamil, M. Y. "Breast Cancer Image Segmentation Using Morphological Operation." Journal Impact Factor 6(4): 08-14, 2015.

[24] Zeidan, M. "Assessment of Mean Glandular Dose in Mammography." M.Sc. Thesis, University of Canterbury, Christchurch, New Zealand, 2009.

[25] Karemore, G. R. "Computer Aided Breast Cancer Risk Assessment using Shape and Texture of Breast Parenchyma in Mammography." PhD. Thesis, Department of Computer Science, Faculty of Science, University of Copenhagen, 2012.

[26] Leili Rahmatnezhad, Z. B., Ahad Zeinali, Mir Hamid Mohammady & Nasrollah Jabbari. "An Investigation of Mean Glandular Dose from Routine Mammography in Urmia, Northwestern Iran and the Factors Affecting It." Research Journal of Applied Sciences, Engineering and Technology 4(18): 3348-3353, 2012.

[27] Sakafu, L. L. "The Role of Imaging in the Detection of Overt Abdominal and Chest Metastasis Due to Breast Cancer, and Associated Risk Factors at Ocean Road Cancer Institute." PhD. Thesis, Muhimbili University of Health and Allied Sciences, 2011.

[28] Hifaa Mohamed Khair, H. O., & Abdelmoneim Sulieman. "Estimation Radiation Risk during Mammography in Sudan." Asian Journal of Medical and Clinical Sciences, Khartoum, Sudan, 2012.

[29] Samulski, M. "Classification of Breast Lesions in Digital Mammograms." M.Sc. Thesis, 2006.

[30] Tang, J., Rangayyan, R. M., Xu, J., El Naqa, I., & Yang, Y. "Computer-Aided Detection and Diagnosis of Breast Cancer with Mammography: Recent Advances." IEEE Transactions on Information Technology in Biomedicine 13(2): 236-251, 2009.

[31] Yasmin, M., Sharif, M., & Mohsin, S. "Survey Paper on Diagnosis of Breast Cancer Using Image Processing Techniques." Research Journal of Recent Sciences 2(10): 88-98, 2013.

[32] Ng, E. Y. K., & Chen, Y. "Segmentation of Breast Thermogram: Improved Boundary Detection with Modified Snake Algorithm." Journal of Mechanics in Medicine and Biology 6(02): 123-136, 2006.
[33] Tan, J. H., Ng, E. Y. K., Acharya, R., & Chee, C. "Automated Study of Ocular Thermal Images: Comprehensive Analysis of Corneal Health with Different Age Group Subjects and Validation." Digital Signal Processing 20(6): 1579-1591, 2010.

[34] Kennedy, D. A., Lee, T., & Seely, D. "A Comparative Review of Thermography as a Breast Screening Technique." Integrative Cancer Therapies 8(1): 9-16, 2009.

[35] Sophia, T. "Diffusion-Weighted Magnetic Resonance Imaging (DW-MRI) of the Breast." M.Sc. Thesis, 2014.

[36] Ahmed, W. A.-R. M. "Computer Aided Diagnosis of Digital Mammograms." PhD. Thesis, High Institute of Technology, Benha University, 2009.

[37] Monteiro, J. P. d. S. F. "Computer Aided Detection in Mammography." M.Sc. Thesis, 2011.

[38] Deepa, S. and B. A. Devi. "A Survey on Artificial Intelligence Approaches for Medical Image Classification." Indian Journal of Science and Technology 4(11): 1583-1595, 2011.

[39] Sampat, M. P., Markey, M. K., & Bovik, A. C. "Computer-Aided Detection and Diagnosis in Mammography." Handbook of Image and Video Processing 2(1): 1195-1217, 2005.

[40] Chen, Z. "Mammographic Image Analysis: Risk Assessment and Microcalcification Classification Aspects." PhD. Thesis, 2013.
[41] Sastre Tomàs, J. "Segmentation of the Breast Region with Pectoral Muscle Suppression and Automatic Breast Density Classification." M.Sc. Thesis, 2011.

[42] Meenalosini, S., Janet, J., & Kannan, E. "A Novel Approach in Malignancy Detection of Computer Aided Diagnosis." American Journal of Applied Sciences 9(7): 1020, 2012.

[43] Blue, L. "Probabilistic Neural Networks and General Regression Neural Networks." Proc. of World Conference on Neural Networks, 1989.

[44] Tripathy, R. K. "An Investigation of the Breast Cancer Classification Using Various Machine Learning Techniques." National Institute of Technology Rourkela, 2013.

[45] Thangavel, K., Karnan, M., Sivakumar, R., & Mohideen, A. K. "Automatic Detection of Microcalcification in Mammograms - A Review." International Journal on Graphics, Vision and Image Processing 5(5): 31-61, 2005.

[46] Gupta, E. S. and E. Y. Kaur. "Review of Different Histogram Equalization Based Contrast Enhancement Techniques." Image 1: 6, 2014.

[47] Abir, M. I. K. "Contrast Enhancement of Digital Mammography Based on Multi-Scale Analysis." M.Sc. Thesis, 2011.
[48] Vij, K. and Y. Singh. "Enhancement of Images Using Histogram Processing Techniques." Int. J. Comp. Tech. Appl 2(2): 309-313, 2009.

[49] Moh'd Rasoul, A., Al-Gawagzeh, M. Y., & Alsaaidah, B. A. "Solving Mammography Problems of Breast Cancer Detection using Artificial Neural Networks and Image Processing Techniques." Indian Journal of Science and Technology 5(4): 2520-2528, 2012.

[50] Gonzalez, R. C. and R. E. Woods. "Digital Image Processing." Prentice Hall, Upper Saddle River, NJ, USA, 2002.

[51] Ghosh, A. "Image Enhancement in the Spatial Domain: Image Negative, Contrast Stretching, Bit Plane Slicing & Image Segmentation: Watershed Segmentation Algorithm." M.Sc. Thesis, Jadavpur University, Kolkata, 2004.

[52] Kaymaz, E. "Image Analysis of Degraded Laser-Luminescent Fingerprints." M.Sc. Thesis, Texas Tech University, 1991.

[53] Lyra, M., Ploussi, A., & Georgantzoglou, A. "Matlab as a Tool in Nuclear Medicine Image Processing." InTech Open Access Publisher, 2011.

[54] Thompson, C. M. and L. Shure. "Image Processing Toolbox: For Use with MATLAB." The MathWorks, Inc., Natick, MA, 1993.

[55] Arpana, M. and P. Kiran. "Feature Extraction Values for Digital Mammograms." International Journal of Soft Computing and Engineering (IJSCE) 4(2): 183-187, 2014.
[56] Abdallah, Y. M. Y., Hayder, A., & Wagiallah, E. "Automatic Enhancement of Mammography Images using Contrast Algorithm." Methods 7: 8, 2014.

[57] K. Meenakshi Sundaram, D. S., & P. Aarthi Rani. "A Study on Preprocessing a Mammogram Image Using Adaptive Median Filter." International Journal of Innovative Research in Science, Engineering and Technology 3(3), 2014.

[58] Pradeep, N., Girisha, H., & Karibasappa, K. "Segmentation and Feature Extraction of Tumors from Digital Mammograms." Computer Engineering and Intelligent Systems 3(4): 37-46, 2012.

[59] Lampasona, C. "3D Digital Analysis of Mammographic Composition." PhD. Thesis, Faculty of Computer Science, Electrical Engineering and Information Technology, University of Stuttgart, 2009.

[60] Choudhari, G., Swain, D., Thakur, D., & Somase, K. "Colorography: An Adaptive Approach to Classify and Detect the Breast Cancer using Image Processing." International Journal of Computer Applications 45(17), 2012.

[61] Radhakrishnan, M., Kuttiannan, T., & Tiruchengode, N. "Comparative Analysis of Feature Extraction Methods for the Classification of Prostate Cancer from TRUS Medical Images." IJCSI Int J Comput Sci (9): 1, 2012.

[62] Shareef, S. R. "Breast Cancer Detection Based on Watershed Transformation." IJCSI International Journal of Computer Science Issues 11(1): 1694-0814, 2014.
[63] Prakash Bethapudi, D. E. S. R., & Madhuri P. "Detection of Malignancy in Digital Mammograms from Segmented Breast Region Using Morphological Techniques." IOSR Journal of Electrical and Electronics Engineering 5(4), 2013.

[64] Lundén, J. "Image Analysis Methods for Evaluation of Fibre Dimensions in Paper Cross-Sections." M.Sc. Thesis, The Swedish Royal Institute of Technology, Uppsala, Sweden, 2002.

[65] Semmlow, J. "Biosignal and Biomedical Image Processing: MATLAB-Based Applications." Marcel Dekker Inc, Madison Avenue, New York, 2004.

[66] Jaffar, M. A., Naveed, N., Zia, S., Ahmed, B., & Choi, T. S. "DCT Features Based Malignancy and Abnormality Type Detection Method for Mammograms." International Journal of Innovative Computing, Information and Control 7(9): 5495-5513, 2011.

[67] Raman, V., Sumari, P., Then, H. H., & Al-Omari, S. A. K. "Review on Mammogram Mass Detection by Machine Learning Techniques." International Journal of Computer and Electrical Engineering 3(6): 873-879, 2011.

[68] Piriyakul, R. "Feature Selection and Dimension Reduction for Medical Image Analysis." PhD. Thesis, Kasetsart University, 2008.

[69] Keskin, M. F. "Image Processing Methods for Computer-Aided Interpretation of Microscopic Images." M.Sc. Thesis, Citeseer, 2012.
[70] Nithya, R. and B. Santhi. "Comparative Study on Feature Extraction Method for Breast Cancer Classification." Journal of Theoretical and Applied Information Technology 33(2): 1992-1986, 2011.

[71] Sun, Y. "Normal Mammogram Analysis." PhD. Thesis, Purdue University, 2004.

[72] Radstake, N., Lucas, P. J., & Marchiori, E. "Learning Bayesian Models using Mammographic Features." M.Sc. Thesis, 2010.

[73] Alsarori, F. A. S. "Automatic Detection of Breast Cancer in Mammogram Images." M.Sc. Thesis, 2013.

[74] Susomboon, R., Raicu, D., Furst, J., & Johnson, T. B. "A Co-occurrence Texture Semi-Invariance to Direction, Distance and Patient Size." Medical Imaging, International Society for Optics and Photonics, 2008.

[75] Elmanna, M. E. M. "Computer Aided Diagnosis System for Digital Mammography." M.Sc. Thesis, Faculty of Engineering, Cairo University, Giza, Egypt, 2013.

[76] Deepa, S. and V. Bharathi. "Textural Feature Extraction and Classification of Mammogram Images using CCCM and PNN." IOSR Journal of Computer Engineering (IOSR-JCE) 10(6): 07-13, 2013.

[77] Li, J. "An Automated Malignant Tumor Localization Algorithm for Prostate Cancer Detection in Trans-rectal Ultrasound Images." M.Sc. Thesis, Waterloo, Ontario, Canada, 2004.
[78] Gorgel, P., Sertbas, A., & Ucan, O. N. "A Comparative Study of Breast Mass Classification based on Spherical Wavelet Transform using ANN and KNN Classifiers." International Journal of Electronics, Mechanical and Mechatronics Engineering (2): 1, 2011.

[79] Anyanwu, M. N. and S. G. Shiva. "Comparative Analysis of Serial Decision Tree Classification Algorithms." International Journal of Computer Science and Security 3(3): 230-240, 2009.

[80] Chaurasia, S., Chakrabarti, P., & Chourasia, N. "An Application of Classification Techniques on Breast Cancer Prognosis." International Journal of Computer Applications 59(3): 6-10, 2012.

[81] Da Fonseca, J. L., Cardoso, J. S., & Domingues, I. "Pre-CADs in Breast Cancer." Faculdade de Engenharia da Universidade do Porto (FEUP), Idea 2(3), 2013.

[82] Kiranmayee, M. & M. Subbarao. "Texture Classification Using Weighted Probabilistic Neural Network." International Journal of Image Processing and Vision Sciences 1(2): 2278-1110, 2012.

[83] Simon, B. B., Thomas, V., & Jagadeesh Kumar, P. "Algorithm for the Detection of Microcalcification in Mammogram on an Embedded Platform." International Journal of Scientific & Engineering Research 4(4), 2013.
[84] Djaroudib, K., Ahmed, A. T., & Zidani, A. "Textural Approach for Mass Abnormality Segmentation in Mammographic Images." arXiv preprint arXiv:1412.1506, 2014.

[85] Mutiso, A. M. "Datamining in Medical Applications: Computer Aided Diagnosis (CAD) in Medical Imaging with an Emphasis on Mammography." Course 6 Independent Study 6.199, Advanced Undergraduate Project Final Draft, 1999.

[86] Abir, M. I. K. "Contrast Enhancement of Digital Mammography Based on Multi-Scale Analysis." M.Sc. Thesis, Missouri University of Science and Technology, 2011.

[87] J. Suckling et al. "The Mammographic Image Analysis Society Digital Mammogram Database." Excerpta Medica, International Congress Series 1069: 375-378, http://skye.icr.ac.uk/miasdb/miasdb.html, 1994.
APPENDIX
Feature extraction results

Figure (5-1) Results for Homogeneity: (a), (b) & (c) worst three results at angles 45°, 90° and 270° respectively
Figure (5-2) Results for Energy: (a), (b) & (c) worst three results at angles 45°, 90° and 270° respectively
Figure (5-3) Results for Entropy: (a), (b) & (c) worst three results at angles 45°, 90° and 270° respectively
Figure (5-4) Results for Contrast: (a), (b) & (c) worst three results at angles 45°, 90° and 270° respectively
Figure (5-5) Results for Symmetry: (a), (b) & (c) worst three results at angles 45°, 90° and 270° respectively
Figure (5-6) Results for Correlation: (a), (b) & (c) worst three results at angles 45°, 90° and 270° respectively
Figure (5-7) Results for Momentum1: (a), (b) & (c) worst three results at angles 45°, 90° and 270° respectively
Figure (5-8) Results for Momentum2: (a), (b) & (c) worst three results at angles 45°, 90° and 270° respectively
Figure (5-9) Results for Momentum3: (a), (b) & (c) worst three results at angles 45°, 90° and 270° respectively
Figure (5-10) Results for Momentum4: (a), (b) & (c) worst three results at angles 45°, 90° and 270° respectively
