Part II
Margaret H. Dunham
Department of Computer Science and Engineering
Southern Methodist University
Companion slides for the text by Dr. M.H.Dunham, Data Mining,
Introductory and Advanced Topics, Prentice Hall, 2002.
PART I
Introduction
Related Concepts
Data Mining Techniques
PART II
Classification
Clustering
Association Rules
PART III
Web Mining
Spatial Mining
Temporal Mining
Classification Outline
Goal: Provide an overview of the classification
problem and introduce some of the basic
algorithms
Classification Problem
Given a database D = {t1, t2, …, tn} and a set
of classes C = {C1, …, Cm}, the
Classification Problem is to define a
mapping f: D → C where each ti is assigned
to one class.
Actually divides D into equivalence
classes.
Prediction is similar, but may be viewed
as having an infinite number of classes.
Classification Examples
Teachers classify students' grades as A,
B, C, D, or F.
Identify mushrooms as poisonous or
edible.
Predict when a river will flood.
Identify individuals with credit risks.
Speech recognition
Pattern recognition
[Figure: classifying a grade x (x >= 90: A; 80 <= x < 90: B; 70 <= x < 80: C; 60 <= x < 70: D; otherwise F) and the resulting division of D into the classes Letter A through Letter F.]
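The figure maps a numeric grade x to a letter class. A minimal sketch of that mapping as a classification function f: D → C (the function name is illustrative):

    def classify_grade(x):
        """Map a numeric grade x to a letter class, using the thresholds in the figure."""
        if x >= 90:
            return "A"
        elif x >= 80:
            return "B"
        elif x >= 70:
            return "C"
        elif x >= 60:
            return "D"
        else:
            return "F"

    # Example: 85 falls in [80, 90), so it is assigned class B.
    print(classify_grade(85))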
Classification Techniques
Approach:
1. Create a specific model by evaluating
training data (or using domain
experts' knowledge).
2. Apply the model developed to new data.
Classes must be predefined
Most common techniques use DTs,
NNs, or are based on distances or
statistical methods.
Defining Classes
Distance Based
Partitioning Based
Issues in Classification
Missing Data
Ignore
Replace with assumed value
Measuring Performance
Classification accuracy on test data
Confusion matrix
OC Curve
Name       Gender  Height  Output1  Output2
Kristina   F       1.6m    Short    Medium
Jim        M       2m      Tall     Medium
Maggie     F       1.9m    Medium   Tall
Martha     F       1.88m   Medium   Tall
Stephanie  F       1.7m    Short    Medium
Bob        M       1.85m   Medium   Medium
Kathy      F       1.6m    Short    Medium
Dave       M       1.7m    Short    Medium
Worth      M       2.2m    Tall     Tall
Steven     M       2.1m    Tall     Tall
Debbie     F       1.8m    Medium   Medium
Todd       M       1.95m   Medium   Medium
Kim        F       1.9m    Medium   Tall
Amy        F       1.8m    Medium   Medium
Wynette    F       1.75m   Medium   Medium
Classification Performance
True Positive
False Negative
False Positive
True Negative
              Assignment
Actual        Short   Medium   Tall
Short         0       4        0
Medium        0       5        3
Tall          0       1        2
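The matrix above matches the result of tabulating Output1 (taken as the actual class) against Output2 (taken as the assignment) for the fifteen tuples in the height data. A minimal sketch of how such a matrix can be computed (the helper name is illustrative):

    def confusion_matrix(actual, assigned, classes):
        """matrix[i][j] = number of tuples of actual class i assigned to class j."""
        index = {c: k for k, c in enumerate(classes)}
        matrix = [[0] * len(classes) for _ in classes]
        for a, p in zip(actual, assigned):
            matrix[index[a]][index[p]] += 1
        return matrix

    classes = ["Short", "Medium", "Tall"]
    actual = ["Short", "Medium", "Tall"]      # Output1 values would go here
    assigned = ["Medium", "Medium", "Tall"]   # Output2 values would go here
    print(confusion_matrix(actual, assigned, classes))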
Regression
Division
Prediction
Algorithm: KNN
KNN
KNN Algorithm
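The listing on this slide did not survive extraction. A minimal sketch of K nearest neighbors classification, assuming Euclidean distance and a majority vote over the K closest training tuples:

    import math
    from collections import Counter

    def knn_classify(training, new_item, k):
        """training: list of (feature_vector, class). Return the majority class
        among the k training tuples closest to new_item."""
        def dist(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        neighbors = sorted(training, key=lambda t: dist(t[0], new_item))[:k]
        votes = Counter(cls for _, cls in neighbors)
        return votes.most_common(1)[0][0]

    # Illustrative: classify a 1.6 m person from a few height tuples with k = 3.
    training = [((1.6,), "Short"), ((2.0,), "Tall"), ((1.9,), "Medium"), ((1.88,), "Medium")]
    print(knn_classify(training, (1.6,), 3))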
Decision Tree
Given:
D = {t1, …, tn} where ti = <ti1, …, tih>
Database schema contains {A1, A2, …, Ah}
Classes C = {C1, …, Cm}
A Decision or Classification Tree is a tree associated
with D such that
Each internal node is labeled with an attribute, Ai
Each arc is labeled with a predicate that can be
applied to the attribute at its parent
Each leaf node is labeled with a class, Cj
(A small sketch of classifying with such a tree follows.)
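A minimal sketch of this structure and of classifying a tuple by walking from the root to a leaf (the node representation and the example splits are illustrative, not taken from the text):

    class Node:
        """Internal node: an attribute plus (predicate, child) arcs. Leaf: a class label."""
        def __init__(self, attribute=None, arcs=None, label=None):
            self.attribute = attribute   # Ai tested at this node
            self.arcs = arcs or []       # list of (predicate, child Node)
            self.label = label           # class Cj if this is a leaf

    def classify(node, tuple_):
        if node.label is not None:
            return node.label
        value = tuple_[node.attribute]
        for predicate, child in node.arcs:
            if predicate(value):
                return classify(child, tuple_)

    # Illustrative tree: split on Height, then assign Short / Medium / Tall.
    tree = Node("Height", [(lambda h: h < 1.7, Node(label="Short")),
                           (lambda h: 1.7 <= h < 1.95, Node(label="Medium")),
                           (lambda h: h >= 1.95, Node(label="Tall"))])
    print(classify(tree, {"Height": 1.88}))   # Medium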
DT Induction
DT Splits Area
[Figure: the attribute space split first on Gender (M / F) and then on Height.]
Comparing DTs
Balanced
Deep
DT Issues
Choosing Splitting Attributes
Ordering of Splitting Attributes
Splits
Tree Structure
Stopping Criteria
Training Data
Pruning
Information
DT Induction
When all the marbles in the bowl are
mixed up, little information is given.
When the marbles in the bowl are all
from one class and those in the other
two classes are set apart on either side,
more information is given (the class of a
drawn marble is more predictable).
Information/Entropy
No surprise: entropy = 0
Entropy
[Figure: plots of log(1/p) and of the entropy H(p, 1-p) as functions of p.]
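The slide's entropy formula did not survive extraction. The standard definition used here is, for probabilities p_1, …, p_s summing to 1,

    H(p_1, \ldots, p_s) = \sum_{i=1}^{s} p_i \log\frac{1}{p_i}

and the two-class curve plotted in the figure is H(p, 1-p) = p \log(1/p) + (1-p) \log(1/(1-p)), which is 0 when p is 0 or 1 (no surprise) and largest at p = 1/2.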
ID3
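The body of this slide was lost in extraction. ID3 builds the tree by choosing, at each node, the splitting attribute with the highest information gain, i.e. the entropy before the split minus the weighted entropy of the subsets after the split. A minimal sketch of that computation (function names and data are illustrative):

    import math
    from collections import Counter

    def entropy(labels):
        """H = sum over classes of p * log2(1/p) for the class distribution of labels."""
        total = len(labels)
        return sum((n / total) * math.log2(total / n) for n in Counter(labels).values())

    def information_gain(rows, labels, attribute):
        """Entropy before splitting minus the weighted entropy after splitting on attribute."""
        groups = {}
        for row, label in zip(rows, labels):
            groups.setdefault(row[attribute], []).append(label)
        after = sum(len(g) / len(labels) * entropy(g) for g in groups.values())
        return entropy(labels) - after

    # ID3 picks the attribute with the largest gain.
    rows = [{"Gender": "F"}, {"Gender": "M"}, {"Gender": "F"}]
    labels = ["Short", "Tall", "Medium"]
    print(information_gain(rows, labels, "Gender"))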
C4.5
CART
CART Example
Split at 1.8
Supervised learning
For each tuple in the training set, propagate it
through the NN. Adjust the weights on the edges
to improve future classification.
Algorithms: Propagation, Backpropagation,
Gradient Descent
NN Issues
Propagation
[Figure: a tuple's input values propagated through the network to produce an output.]
NN Propagation Algorithm
Example Propagation
NN Learning
Adjust weights to perform better with
the associated test data.
Supervised: Use feedback from
knowledge of correct classification.
Unsupervised: No knowledge of
correct classification needed.
NN Supervised Learning
Supervised Learning
NN Backpropagation
Propagate changes to weights
backward from output layer to input
layer.
Delta Rule: Δwij = c xij (dj - yj)
Gradient Descent: technique to modify
the weights in the graph.
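A minimal sketch of one delta-rule update, with c the learning rate, xij the input arriving on the arc, dj the desired output, and yj the actual output (variable names follow the slide; the surrounding code is illustrative):

    def delta_rule_update(weights, inputs, desired, actual, c):
        """w_ij <- w_ij + c * x_ij * (d_j - y_j) for every arc into node j."""
        return [w + c * x * (desired - actual) for w, x in zip(weights, inputs)]

    weights = [0.5, -0.2]
    print(delta_rule_update(weights, inputs=[1.0, 0.3], desired=1, actual=0.4, c=0.1))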
Backpropagation
Error
Backpropagation Algorithm
Gradient Descent
Types of NNs
Different NN structures used for
different problems.
Perceptron
Self Organizing Feature Map
Radial Basis Function Network
Perceptron
Perceptron Example
Suppose:
Summation: S=3x1+2x2-6
Activation: if S>0 then 1 else 0
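A minimal sketch of this perceptron, using the summation and activation given on the slide:

    def perceptron(x1, x2):
        """S = 3*x1 + 2*x2 - 6; output 1 if S > 0, else 0."""
        s = 3 * x1 + 2 * x2 - 6
        return 1 if s > 0 else 0

    print(perceptron(1, 2))   # S = 3 + 4 - 6 = 1 > 0, so output 1
    print(perceptron(1, 1))   # S = 3 + 2 - 6 = -1, so output 0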
Kohonen Network
Kohonen Network
Radial Basis Function Network
Three layers
Hidden layer: Gaussian activation function
Output layer: linear activation function
Antecedent, Consequent
1R Algorithm
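The listing on this slide was lost in extraction. 1R builds, for each attribute, a one-level rule that predicts the most frequent class for each value of that attribute, and keeps the attribute whose rule makes the fewest errors on the training data. A minimal sketch (not the book's exact pseudocode):

    from collections import Counter

    def one_r(rows, labels, attributes):
        """Return (best attribute, {value: predicted class}) with the lowest training error."""
        best = None
        for attr in attributes:
            rules, errors = {}, 0
            for v in set(row[attr] for row in rows):
                classes = Counter(l for row, l in zip(rows, labels) if row[attr] == v)
                majority, count = classes.most_common(1)[0]
                rules[v] = majority
                errors += sum(classes.values()) - count
            if best is None or errors < best[0]:
                best = (errors, attr, rules)
        return best[1], best[2]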
1R Example
PRISM Algorithm
PRISM Example
Rules have no
ordering of
predicates.
Clustering Outline
Goal: Provide an overview of the clustering
problem and introduce some of the basic
algorithms
Clustering Examples
Segment customer database based on
similar buying patterns.
Group houses in a town into
neighborhoods based on similar
features.
Identify new plant species
Identify similar Web usage patterns
Clustering Example
Clustering Houses
[Figure: the same houses clustered by size (size based) and by geographic distance (distance based).]
No prior knowledge
Number of clusters
Meaning of clusters
Unsupervised learning
Clustering Issues
Outlier handling
Dynamic data
Interpreting results
Evaluating results
Number of clusters
Data to be used
Scalability
Impact of Outliers on
Clustering
Clustering Problem
Given a database D = {t1, t2, …, tn} of tuples
and an integer value k, the Clustering
Problem is to define a mapping
f: D → {1, …, k} where each ti is assigned to
one cluster Kj, 1 <= j <= k.
A cluster, Kj, contains precisely those
tuples mapped to it.
Unlike the classification problem, the clusters
are not known a priori.
Types of Clustering
Hierarchical: nested set of clusters created.
Partitional: one set of clusters created.
Incremental: each element handled one at a time.
Simultaneous: all elements handled together.
Overlapping/Non-overlapping
Clustering Approaches
Clustering
  Hierarchical
    Agglomerative
    Divisive
  Partitional
  Categorical
  Large DB
    Sampling
    Compression
Cluster Parameters
Hierarchical Clustering
Divisive
Initially all items in one cluster
Large clusters are successively divided
Top Down
Hierarchical Algorithms
Single Link
MST Single Link
Complete Link
Average Link
Dendrogram
Levels of Clustering
Agglomerative Example
[Figure: distance (adjacency) matrix for items A, B, C, D, E (surviving first column: A 0, B 1, C 2, D 2, E 3) and the dendrogram produced as the threshold grows from 1 to 5.]
MST Example
[Figure: minimum spanning tree over items A through E built from the same distance matrix.]
Agglomerative Algorithm
Single Link
View all items with links (distances)
between them.
Finds maximal connected components
in this graph.
Two clusters are merged if there is at
least one edge which connects them.
Uses threshold distances at each level.
Could be agglomerative or divisive.
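A minimal sketch of the agglomerative single-link step just described: at a given threshold, two clusters merge whenever at least one pair of items, one from each cluster, is within the threshold distance (the distance function and data are illustrative):

    def single_link_merge(clusters, dist, threshold):
        """Repeatedly merge any two clusters joined by at least one edge <= threshold."""
        merged = True
        while merged:
            merged = False
            for i in range(len(clusters)):
                for j in range(i + 1, len(clusters)):
                    if any(dist(a, b) <= threshold for a in clusters[i] for b in clusters[j]):
                        clusters[i] = clusters[i] | clusters[j]
                        del clusters[j]
                        merged = True
                        break
                if merged:
                    break
        return clusters

    # Illustrative: points on a line, threshold 1.
    print(single_link_merge([{0.0}, {0.8}, {3.0}, {3.5}],
                            dist=lambda a, b: abs(a - b), threshold=1.0))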
Partitional Clustering
Nonhierarchical
Creates clusters in one step as
opposed to several steps.
Since only one set of clusters is output,
the user normally has to input the
desired number of clusters, k.
Usually deals with static sets.
Partitional Algorithms
MST
Squared Error
K-Means
Nearest Neighbor
PAM
BEA
GA
MST Algorithm
Squared Error
K-Means
K-Means Example
K-Means Algorithm
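The listing on this slide was lost in extraction. A minimal sketch of the usual K-means loop for one-dimensional items: assign each item to the nearest mean, recompute the means, and stop when they no longer change (the data and starting means are illustrative):

    def k_means(items, means, iterations=100):
        """items: list of numbers; means: initial list of k means."""
        for _ in range(iterations):
            clusters = [[] for _ in means]
            for x in items:
                nearest = min(range(len(means)), key=lambda i: abs(x - means[i]))
                clusters[nearest].append(x)
            new_means = [sum(c) / len(c) if c else m for c, m in zip(clusters, means)]
            if new_means == means:
                break
            means = new_means
        return means, clusters

    print(k_means([2, 4, 10, 12, 3, 20, 30, 11, 25], means=[2.0, 4.0]))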
Nearest Neighbor
Items are iteratively merged into the
existing clusters that are closest.
Incremental
Threshold, t, used to determine if items
are added to existing clusters or a new
cluster is created.
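A minimal sketch of this incremental scheme with a threshold t (the distance function and data are illustrative):

    def nearest_neighbor_clustering(items, dist, t):
        """Add each item to the cluster containing its nearest clustered item if that
        distance is <= t; otherwise start a new cluster."""
        clusters = [[items[0]]]
        for x in items[1:]:
            best_cluster, best_d = None, None
            for cluster in clusters:
                d = min(dist(x, y) for y in cluster)
                if best_d is None or d < best_d:
                    best_cluster, best_d = cluster, d
            if best_d <= t:
                best_cluster.append(x)
            else:
                clusters.append([x])
        return clusters

    print(nearest_neighbor_clustering([1.0, 1.2, 5.0, 5.3],
                                      dist=lambda a, b: abs(a - b), t=1.0))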
PAM
Partitioning Around Medoids (PAM)
(K-Medoids)
Handles outliers well.
Ordering of input does not impact results.
Does not scale well.
Each cluster represented by one item,
called the medoid.
Initial set of k medoids randomly chosen.
PAM
PAM Algorithm
BEA
BEA
{A,B,C,D,E,F,G,H}
GA Algorithm
BIRCH
DBSCAN
CURE
BIRCH
Balanced Iterative Reducing and
Clustering using Hierarchies
Incremental, hierarchical, one scan
Save clustering information in a tree
Each entry in the tree contains
information about one cluster
New nodes inserted in closest entry in
tree
Clustering Feature
CF Triple: (N, LS, SS)
N: number of points in the cluster
LS: sum of the points in the cluster
SS: sum of the squares of the points in the cluster
CF Tree
Balanced search tree
Each node has a CF triple for each child
A leaf node represents a cluster and has a CF value
for each subcluster in it.
Each subcluster has a maximum diameter.
(A small sketch of the triple's additivity follows.)
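The CF triple is additive, which is what makes one-scan, incremental clustering possible: absorbing a point or merging two subclusters just sums the triples componentwise. A minimal sketch for one-dimensional points (illustrative; not BIRCH's actual data structures):

    def cf_add(cf, point):
        """Absorb a point into a clustering feature (N, LS, SS)."""
        n, ls, ss = cf
        return (n + 1, ls + point, ss + point * point)

    def cf_merge(cf1, cf2):
        """Merge two clustering features by componentwise addition."""
        return tuple(a + b for a, b in zip(cf1, cf2))

    cf = (0, 0.0, 0.0)
    for p in [1.0, 2.0, 3.0]:
        cf = cf_add(cf, p)
    print(cf)                 # (3, 6.0, 14.0)
    print(cf_merge(cf, cf))   # (6, 12.0, 28.0)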
BIRCH Algorithm
Improve Clusters
DBSCAN
Density Based Spatial Clustering of
Applications with Noise
Outliers will not affect the creation of clusters.
Input
Density Concepts
DBSCAN Algorithm
CURE
Clustering Using Representatives
Use many points to represent a cluster
instead of only one
Points will be well scattered
CURE Approach
CURE Algorithm
Comparison of Clustering
Techniques
Comparing Techniques
Incremental Algorithms
Advanced AR Techniques
Uses:
Placement
Advertising
Sales
Coupons
Apriori
Large Itemset Property:
Any subset of a large itemset is large.
Contrapositive:
If an itemset is not large,
none of its supersets are large.
Apriori Ex (contd)
s = 30%
α = 50%
Apriori Algorithm
i = 1;
Repeat
    i = i + 1;
    Ci = Apriori-Gen(Li-1);
    Count Ci to determine Li;
until no more large itemsets are found;
Apriori-Gen
Generate candidates of size i+1 from
large itemsets of size i.
Approach used: join large itemsets of
size i if they agree on the first i-1 items.
May also prune candidates that have
subsets that are not large. (A sketch follows.)
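A minimal sketch of the join-and-prune step just described, with itemsets represented as sorted tuples (this follows the usual formulation rather than the book's exact pseudocode):

    from itertools import combinations

    def apriori_gen(large_itemsets):
        """large_itemsets: set of sorted tuples, all of size i. Join pairs that agree on
        their first i-1 items, then prune candidates with a non-large subset of size i."""
        large = set(large_itemsets)
        i = len(next(iter(large)))
        candidates = set()
        for a in large:
            for b in large:
                if a[:i - 1] == b[:i - 1] and a[i - 1] < b[i - 1]:
                    candidate = a + (b[i - 1],)
                    if all(s in large for s in combinations(candidate, i)):
                        candidates.add(candidate)
        return candidates

    print(apriori_gen({("Bread", "Jelly"), ("Bread", "PeanutButter"),
                       ("Jelly", "PeanutButter")}))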
Apriori-Gen Example
Apriori Adv/Disadv
Advantages:
Uses large itemset property.
Easily parallelized
Easy to implement.
Disadvantages:
Assumes transaction database is memory
resident.
Requires up to m database scans.
Sampling
Large databases
Sample the database and apply Apriori to the
sample.
Potentially Large Itemsets (PL): Large
itemsets from sample
Negative Border (BD-):
Generalization of Apriori-Gen applied to
itemsets of varying sizes.
Minimal set of itemsets which are not in PL,
but whose subsets are all in PL.
[Figure: negative border example showing PL and PL ∪ BD-(PL).]
Sampling Algorithm
1. Ds = sample of Database D;
2. PL = Large itemsets in Ds using smalls;
3. C = PL ∪ BD-(PL);
4. Count C in Database using s;
5. ML = large itemsets in BD-(PL);
6. If ML = ∅ then done
7. else C = repeated application of BD-;
8. Count C in Database;
Sampling Example
Sampling Adv/Disadv
Advantages:
Reduces the number of database scans to one
in the best case and two in the worst.
Scales better.
Disadvantages:
Potentially large number of candidates in
second pass
Partitioning
Divide the database into partitions D1, D2, …, Dp
Apply Apriori to each partition
Any large itemset must be large in at
least one partition (a sketch of this idea follows).
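A minimal sketch of that idea: find the large itemsets of each partition in memory, take the union of the local results as the global candidate set, and count those candidates in one final scan (local_large_itemsets stands in for any in-memory algorithm such as Apriori):

    def partition_ar(partitions, min_support, local_large_itemsets):
        """partitions: list of transaction lists; min_support: fraction of all transactions.
        Any globally large itemset is locally large in at least one partition, so the
        union of local results is a superset of the answer; a final pass removes the rest."""
        candidates = set()
        for part in partitions:
            candidates |= local_large_itemsets(part, min_support)
        transactions = [t for part in partitions for t in part]
        n = len(transactions)
        return {c for c in candidates
                if sum(1 for t in transactions if set(c) <= set(t)) / n >= min_support}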
Partitioning Algorithm
Partitioning Example
s = 10%
D1: L1 = {{Bread}, {Jelly}, {PeanutButter},
    {Bread, Jelly}, {Bread, PeanutButter},
    {Jelly, PeanutButter}, {Bread, Jelly, PeanutButter}}
D2: L2 = {{Bread}, {Milk}, {PeanutButter},
    {Bread, Milk}, {Bread, PeanutButter},
    {Milk, PeanutButter}, {Bread, Milk, PeanutButter},
    {Beer}, {Beer, Bread}, {Beer, Milk}}
Partitioning Adv/Disadv
Advantages:
Adapts to available main memory
Easily parallelized
Maximum number of database scans is
two.
Disadvantages:
May have many candidates during second
scan.
Parallelizing AR Algorithms
Based on Apriori
Techniques differ:
Data Parallelism
Data partitioned
Count Distribution Algorithm
Task Parallelism
Data and candidates partitioned
Data Distribution Algorithm
CDA Example
DDA Example
Comparing AR Techniques
Target
Type
Data Type
Data Source
Technique
Itemset Strategy and Data Structure
Transaction Strategy and Data Structure
Optimization
Architecture
Parallelism Strategy
Comparison of AR Techniques
Hash Tree
Note on ARs
Advanced AR Techniques
Generalized Association Rules
Multiple-Level Association Rules
Quantitative Association Rules
Using multiple minimum supports
Correlation Rules