
ACCURACY ASSESSMENT
Accuracy Assessment is a general term for
comparing the classified data to geographical
data that are assumed to be true, in order to
determine the accuracy of the classification
process.

A land use map generated from remote sensing data may not be correct for a particular category or class at all locations.

For example, forest cover may be misclassified as the agricultural class.
The classified map needs to be checked against actual ground data by
• visiting the site, or
• comparing it with standard reference data such as
  • previously tested maps,
  • aerial photos, or
  • other data.

To check the classified points on the ground, a sampling technique is adopted.
Random sampling:

A grid is laid over the map and random samples (n) are chosen for each category to check on the ground.
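
A minimal sketch of this sampling step in Python, assuming the classified map is available as a 2-D array of class codes; the array, the class codes and the sample size used here are purely illustrative:

```python
import numpy as np

# Hypothetical classified map: a 2-D grid of class codes
# (0 = forest, 1 = pasture, 2 = soil, 3 = urban, 4 = water).
rng = np.random.default_rng(seed=0)
classified_map = rng.integers(0, 5, size=(200, 200))

n = 10  # number of random check points per category

samples = {}
for class_code in range(5):
    rows, cols = np.nonzero(classified_map == class_code)    # all pixels of this class
    pick = rng.choice(len(rows), size=n, replace=False)       # n random pixels
    samples[class_code] = list(zip(rows[pick], cols[pick]))   # (row, col) points to visit

for class_code, points in samples.items():
    print(f"class {class_code}: first check points {points[:3]} ...")
```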

What is the minimum sample size (n) required?

Suppose n = 10, and after the field check it is observed that 9 out of 10 samples have been correctly classified.

Is the accuracy therefore 90%?

Since the sampling is random, it is possible that all 9 points were classified correctly merely by “chance”.
Any accuracy estimate based on sampling therefore lies within a confidence interval whose width depends on the number of samples.

For example, if n = 20 and 18 samples are correctly classified, i.e. 90%, then at a 95% confidence level the actual accuracy lies between 70% & 97%.

If n = 40 and 36 samples are correctly classified, i.e. again 90%, the interval at the same confidence level is narrower. This shows that the lower & upper limits narrow as the sample size increases.

In other words, one has more confidence in the result obtained from 40 samples than in the result obtained from 20 samples.
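
This effect can be checked with an exact (Clopper-Pearson) binomial confidence interval, sketched below in Python with scipy; the exact values differ slightly from the table-based figures quoted above:

```python
from scipy.stats import beta

def accuracy_ci(correct, n, confidence=0.95):
    """Exact (Clopper-Pearson) confidence interval for a classification accuracy."""
    alpha = 1.0 - confidence
    lower = beta.ppf(alpha / 2, correct, n - correct + 1) if correct > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, correct + 1, n - correct) if correct < n else 1.0
    return lower, upper

# 90% observed accuracy with increasing sample size: the interval narrows.
for n, correct in [(10, 9), (20, 18), (40, 36)]:
    lo, hi = accuracy_ci(correct, n)
    print(f"n = {n:2d}: observed 90%, 95% CI = {lo:.2f} - {hi:.2f}")
```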

Van Genderen & Lock (1977)


have suggested at least a
The result of the accuracy check is usually tabulated in the form of an m x m matrix, where m is the number of classes under investigation.
This matrix is called a confusion matrix, error matrix or contingency table.
Following the convention given by Jensen (1986), the
• thematic classes derived from remote sensing are given as columns, &
• the ground truth classes are given as rows.
The samples of a land use class that are classified into other land use classes constitute the Omission Error for that class.

The total interpreted for a land use class minus the correctly classified samples in that class is the Commission Error; here samples from other classes are misclassified and added to the wrong class.

Wrongly added samples are Commission Errors and wrongly omitted samples are Omission Errors.
Actual class     Forest  Pasture  Soil  Urban  Water  Total (Ground T)  Commission error  Omission error

Forest             41       0       5      0      2         48                 9                 7
Pasture             5      40       2      0      0         47                10                 7
Soil                3       5      42      5      0         55                 8                13
Urban               0       6       0     44      0         50                 6                 6
Water               2       0       0      0     48         50                 2                 2
Interpretation     50      50      50     50     50        250

Confusion matrix: interpreted land use vs. actual (ground truth) land use.
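
A minimal sketch of these calculations, assuming the matrix above is stored with the ground truth classes as rows and the interpreted classes as columns (the Jensen convention used here), and taking the 50 checked samples per interpreted class from the bottom row as the interpretation totals:

```python
import numpy as np

classes = ["Forest", "Pasture", "Soil", "Urban", "Water"]

# Rows = actual (ground truth) class, columns = interpreted class.
cm = np.array([
    [41,  0,  5,  0,  2],   # Forest
    [ 5, 40,  2,  0,  0],   # Pasture
    [ 3,  5, 42,  5,  0],   # Soil
    [ 0,  6,  0, 44,  0],   # Urban
    [ 2,  0,  0,  0, 48],   # Water
])

diagonal = np.diag(cm)                          # correctly classified samples per class
ground_truth_total = cm.sum(axis=1)             # row totals (actual samples per class)
interpreted_total = np.full(len(classes), 50)   # 50 checked samples per interpreted class

omission = ground_truth_total - diagonal        # wrongly omitted from the actual class
commission = interpreted_total - diagonal       # wrongly added to the interpreted class

for name, o, c in zip(classes, omission, commission):
    print(f"{name:8s} omission = {int(o):2d}, commission = {int(c):2d}")
```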


For the first category (Forest): 9 samples are wrongly added & 7 samples are omitted, which are called the commission & omission errors respectively.

Similarly, the accuracy of each class can be estimated.

Thus the proportion of forest classified correctly is 41/50 which, according to the table, gives an accuracy of between 0.69 & 0.90 (69% to 90%) at the 95% confidence level.
This is also called the User’s Accuracy: 82%.
The overall apparent accuracy is the sum of the diagonal elements divided by the total number of samples over all classes: 215/250 = 86%.

Story & Cogalton (1986) have coined 2


accuracy terms – producer accuracy and
user’s accuracy

Producer’s accuracy:
Pc = (number of correctly classified samples of class x) / (total number of ground truth samples of class x)

User’s accuracy:
User’s accuracy = (number of correctly classified samples of class x) / (total number of samples classified as class x)

For the Forest class, user’s accuracy = 41/50 = 82%.
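
A short sketch of the producer’s, user’s and overall accuracy for the same matrix, under the same assumptions about row/column orientation and the 50 samples per interpreted class:

```python
import numpy as np

classes = ["Forest", "Pasture", "Soil", "Urban", "Water"]
cm = np.array([
    [41,  0,  5,  0,  2],
    [ 5, 40,  2,  0,  0],
    [ 3,  5, 42,  5,  0],
    [ 0,  6,  0, 44,  0],
    [ 2,  0,  0,  0, 48],
])

diagonal = np.diag(cm)
producer_accuracy = diagonal / cm.sum(axis=1)   # correct / ground truth samples of the class
user_accuracy = diagonal / 50                   # correct / samples classified as that class
overall_accuracy = diagonal.sum() / cm.sum()    # 215 / 250

for name, p, u in zip(classes, producer_accuracy, user_accuracy):
    print(f"{name:8s} producer = {p:.1%}, user = {u:.1%}")
print(f"overall accuracy = {overall_accuracy:.1%}")
```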
