
THE USE OF DIGITAL IMAGE PROCESSING TO FACILITATE DIGITIZING LAND COVER ZONES FROM GRAY LEVEL AERIAL PHOTOS

A THESIS PRESENTED TO THE DEPARTMENT OF GEOLOGY AND GEOGRAPHY IN CANDIDACY FOR THE DEGREE OF MASTER OF SCIENCE

By JOAN M. BIEDIGER

NORTHWEST MISSOURI STATE UNIVERSITY MARYVILLE, MISSOURI April 2012

DIGITAL IMAGE PROCESSING

The Use of Digital Image Processing to Facilitate Digitizing Land Cover Zones from Gray Level Aerial Photos Joan Biediger Northwest Missouri State University

THESIS APPROVED

Thesis Advisor, Dr. Ming-Chih Hung

____________________________ Date

Dr. Yi-Hwa Wu

____________________________ Date

Dr. Patricia Drews

____________________________ Date

Dean of Graduate School, Dr. Gregory Haddock

____________________________ Date


The Use of Digital Image Processing to Facilitate Digitizing Land Cover Zones from Gray Level Aerial Photos

Abstract

Aerial imagery from the 1930s to the early 1990s was predominantly acquired using black and white film. Its use in remote sensing applications and GIS analysis is constrained by its limited spectral information and high spatial resolution. As a historical record for studying long-term land use/land cover change, this imagery is a valuable but often underutilized resource. Traditional classification of gray level aerial photos has relied primarily on visual interpretation and digitizing to obtain land cover classifications that can be used in a GIS. This is a time-consuming and labor-intensive process that often limits the scale of analysis. This research focused on the use of digital image processing to facilitate visual interpretation and heads-up digitizing of gray level imagery. Existing remote sensing software packages have limited functionality for classifying black and white aerial photos, and traditional image classification alone provides limited results when determining land cover types from gray level imagery. This research examined classification as a system in which digital image processing techniques such as filtering, texture analysis and principal components analysis improve supervised and unsupervised classification algorithms to provide a base for digitizing land cover types in a GIS. Post processing operations included smoothing the classification result and converting it to a vector layer that can be further refined in a GIS.


Software tools were developed using ArcObjects to aid the process of refining the vector classification. These tools improve the usability and accuracy of the digital image processing results and help facilitate the visual interpretation and digitizing process needed to obtain a usable land use/land cover classification from gray level imagery.


TABLE OF CONTENTS

ABSTRACT
LIST OF FIGURES
LIST OF TABLES
ACKNOWLEDGMENTS
CHAPTER 1: INTRODUCTION
  1.1 Research Objective
CHAPTER 2: LITERATURE REVIEW
  2.1 Historical Aerial Imagery Uses and Importance
  2.2 Classification Problems of High Resolution Panchromatic Imagery
  2.3 Statistical Texture Indicators
  2.4 Image Enhancements and Filtering
  2.5 Image Segmentation and Object-based Image Analysis
CHAPTER 3: CONCEPTUAL FRAMEWORK AND METHODOLOGY
  3.1 Description of Study Area
  3.2 Description of Data
  3.3 Methodology
    3.3.1 Conceptual Overview
    3.3.2 Software Utilized
    3.3.3 Preliminary Image Processes
    3.3.4 Unsupervised Classification
    3.3.5 Supervised Classification
    3.3.6 Image Enhancement and Texture Analysis
    3.3.7 Object-based Image Analysis
    3.3.8 Post Processing and Automation
    3.3.9 Accuracy Assessment
CHAPTER 4: ANALYSIS RESULTS AND DISCUSSION
  4.1 Manual Digitizing
  4.2 Unsupervised Classification
  4.3 Supervised Classification
  4.4 Image Enhancements and Texture Analysis
  4.5 Object-based Image Analysis
  4.6 Post Processing and Automation
  4.7 Classification Accuracy and Results
CHAPTER 5: CONCLUSION
  5.1 Limitations of the Research
  5.2 Potential Future Developments

APPENDIX 1: ERROR MATRIX TABLES
APPENDIX 2: VECTOR EDITING TOOLBAR .NET CODE
REFERENCES


LIST OF FIGURES

Figure 1 - Aerial photo of Ogden study area
Figure 2 - Overview of study areas in relationship to the state of Utah
Figure 3 - Aerial photo of Salt Lake City study area
Figure 4 - Ogden DOQQ study area
Figure 5 - Salt Lake City MDOQ study area
Figure 6 - Main workflow processes
Figure 7 - Ogden dendrogram of ISODATA clustering 10 classes
Figure 8 - Ogden dendrogram of ISODATA clustering 25 classes
Figure 9 - Ogden dendrogram of ISODATA clustering 100 classes
Figure 10 - Distances between classes from Salt Lake City dendrograms
Figure 11 - Training sample distribution for the Ogden image
Figure 12 - Training sample distribution for the Salt Lake City image
Figure 13 - Unstretched images compared to contrast stretched images
Figure 14 - Post processing ArcGIS model
Figure 15 - Polygon raster to vector, smoothing, and smooth simplify
Figure 16 - Classification using visual interpretation of the Ogden image
Figure 17 - Classification using visual interpretation of the Salt Lake City image
Figure 18 - Ogden image ISODATA classifications
Figure 19 - Salt Lake City image ISODATA classifications
Figure 20 - Minimum distance and support vector machine classification of the Salt Lake City image
Figure 21 - Minimum distance classification of the Ogden image with high pass filter
Figure 22 - Minimum distance classification of the Ogden image with low pass filter
Figure 23 - ISODATA 10 spectral classes Halounova image
Figure 24 - SCRM object-based segmentation images
Figure 25 - Ogden object-based classification image and post processing system vectors
Figure 26 - Salt Lake City object-based classification image and post processing system vectors
Figure 27 - Ogden pixel based classification and post processing system vectors


LIST OF TABLES

Table 1 - First level classification Ogden land use/land cover classes
Table 2 - First level classification Salt Lake City land use/land cover classes
Table 3 - ISODATA overall accuracy results for Ogden and Salt Lake City study areas
Table 4 - Training sample statistics from original Ogden image
Table 5 - Training sample statistics from original Salt Lake City image
Table 6 - Ogden image overall accuracy and level 1 completion time
Table 7 - Salt Lake City image overall accuracy and level 1 completion time
Table 8 - User's accuracies for individual land use/land cover types - Ogden study area
Table 9 - User's accuracies for individual land use/land cover types - Salt Lake City study area
Table 10 - Overall accuracy ranges for classification groups


ACKNOWLEDGMENTS

I would like to thank Dr. Ming-Chih Hung for chairing my thesis committee and for all the support, encouragement and guidance he has given me along the way. I would also like to thank Dr. Yi-Hwa Wu and Dr. Patricia Drews for serving on my thesis committee and for their contributions in developing this thesis. Last but certainly not least I would like to thank my husband Barry for encouraging me through many long nights and weekends while I completed this work. Without your support and love I would never have been able to finish this thesis.


CHAPTER 1: INTRODUCTION

Aerial imagery from the 1930s to the present is a primary data source used to study many natural processes and land use patterns (Carmel and Kadmon 1998, Kadmon and Harari-Kremer 1999). Early aerial imagery from the 1930s to the early 1990s is predominantly black and white (panchromatic) film photography, meaning there is only one band of data. This type of imagery contains limited spectral information, unlike today's satellite digital sensors, which offer more spectral information even in the panchromatic band. The Aerial Photography Field Office (APFO) is a division of the Farm Service Agency (FSA) of the United States Department of Agriculture (USDA). The APFO, located in Salt Lake City, Utah, has one of the nation's largest collections of historical aerial imagery, dating back to the 1950s. Film from the 1930s through the 1940s was sent to the National Archives. APFO has over 50,000 rolls of film, of which over 60% is black and white (Mathews 2005). This historical aerial imagery is a valuable, largely untapped resource. The film format makes the imagery unavailable to GIS and imagery analysis programs unless it is scanned and processed to digital format. There is widespread interest from the public and other government agencies in making this imagery available and usable in digital format. Recently, more historical imagery from the 1950s to the 1990s has been scanned to digital format for use in change detection projects for the FSA. According to Brian Vanderbilt (personal communication, 01 Sep 2009), FSA is interested in studying agricultural loss patterns over long periods so that processes of change can be more fully understood.

One of the challenges with these types of projects is that land use/land cover classification with the imagery usually involves visual interpretation and manual digitizing, due to the difficulty of applying digital image processing techniques to historical panchromatic imagery. Manual digitizing is a time-consuming process for multiple years of imagery, as each photo requires its own analysis. There are not enough image analysts within FSA to manage the increasing workload for projects requiring the use of historical imagery. Another concern is that study areas are limited in scale because of the time and resources needed to digitize land cover types on the imagery. There is interest in, and need for, exploring digital options for land cover classification so that the use of these historical imagery datasets can be expanded. The ability to facilitate digitizing of land cover types on historical aerial imagery would make it a more usable resource for studying long-term land use/land cover changes. Classification of this type of imagery is very labor intensive, which often limits the size of study areas. If the imagery could be utilized on a broader scale, we could gain a greater historical perspective on changes such as agricultural loss over time. The increased accuracy and repeatability of results obtained by using digital image processing could also make the results of long-term change detection projects more valid, rather than depending on the varying image interpretation skills of the several analysts a large project may require. Historical aerial imagery offers a unique opportunity to study long-term patterns of land use/land cover change by offering the analyst a more extensive historical perspective on geographic processes such as land use/land cover change, urban expansion and vegetation patterns (Kadmon and Harari-Kremer 1999, Awwad 2003, Alhaddad et al. 2009).

Producing a thematic map through image classification is one of the most common imagery analysis tasks in remote sensing. Image classification techniques such as unsupervised and supervised classification, NDVI, spectral signatures, and spectral band combinations have limited usability with panchromatic aerial imagery, as they rely heavily on spectral information, which is limited with this type of imagery. Visual interpretation of imagery does not rely on spectral information alone. It makes use of scene qualities such as texture, shape, arrangement of objects and context of elements in an image. The human visual system is very efficient at pattern recognition and in many ways is superior to existing machine processing methods; on the other hand, inherent subjectivity and the inability of the eye to extract complex patterns can limit interpretation. Digital image processing techniques that incorporate the use of texture, tone, shape, pattern recognition and object-based image analysis can be used to enhance traditional methods of supervised and unsupervised classification, especially with gray level aerial imagery (Caridade et al. 2008). A great deal of research has been done on the most effective ways of classifying multispectral imagery and mapping the results (Jensen 2005). There is relatively little research on how digital image processing of historical panchromatic imagery can improve or reduce manual interpretation for image analysis and GIS analysis. In this thesis research, digital image processing techniques including texture analysis, convolution filters, and object-based image analysis were considered with respect to how they can improve the classification of panchromatic aerial imagery and how this improvement can facilitate digitizing, in some cases possibly eliminating it.

A post processing system involving image smoothing, raster to vector conversion, polygon smoothing and simplification, and custom polygon editing tools for use in ESRI's ArcMap GIS software was used to improve an initial digital image classification. The post processing system can be used to improve most digital image classifications. The quality of the baseline land use/land cover classification was the main factor in how efficiently a usable thematic layer could be created. Continued study in this area could yield new approaches to land cover classification of gray level imagery. If historical imagery can be used effectively in a digital environment, then more of it may be scanned and become more readily available, which would benefit the geospatial community.

1.1 Research Objective

The objective of this project is to establish a working model that utilizes digital image processing to facilitate or assist the user with digitizing land cover zones from gray level aerial photos. This study approaches the problem of digitizing land cover zones by first classifying the aerial photo and then by establishing a post processing system employing vector layers for use in a GIS. There is limited research available on using digital image processing to enhance the classification and digitizing of gray level aerial photos. Digital image processing may not be able to completely replace visual interpretation of this type of imagery, but it may be able to make the process more efficient.

CHAPTER 2: LITERATURE REVIEW

2.1 Historical Aerial Imagery Uses and Importance

Historical imagery, as referred to in this study, is imagery acquired by an aerial camera mounted in an airplane. The photography has been directly imaged onto film and is also referred to as analog photography, as opposed to modern digital imagery. This historical imagery is black and white and may be referred to as either panchromatic or gray level. Black and white, gray level, and panchromatic are terms which refer to imagery composed of shades of gray. The imagery used in this study has a pixel depth of 8 bits, where the binary representation assumes that 0 is black and 255 is white. Raw pixel values between 0 and 255 are grayscale, and the digital numbers correspond to different levels of gray. For example, a digital number of 127 corresponds to a medium gray in the photo. This panchromatic imagery has a single band where digital numbers represent the spectral reflectance from the visible light range. Historical panchromatic imagery contains brightness values but has limited spectral information available in the visible wavelengths (0.4-0.7 µm), unlike the panchromatic band of a satellite image such as Landsat 7, which generally is sensitive into the near infrared wavelengths (0.52-0.90 µm) (Hoffer 1984). Historical aerial photographs are a valuable and important data source for studying long-term (20-80 year) change processes such as land use/land cover change and vegetation and environmental dynamics. These historic photos present a snapshot in time that may offer insight into the current state of land use/land cover change processes and what patterns may have affected their growth and stability.

Much of the imagery available for long-term analysis is black and white aerial photography (Carmel and Kadmon 1998, Hudak and Wessman 1998, Caridade et al. 2008). The historical record captured by aerial photography provides a long temporal history to work with and an extensive frame of reference in which to assess the magnitude of land use/land cover change. Advances in GIS, photogrammetry, image analysis, and digital image processing have increased the potential to use historical aerial photography for many types of change analysis, including land use/land cover change (Okeke and Karnieli 2006). Land cover maps from gray level historical aerial photos are generally created through techniques such as visual interpretation and manual digitizing (Carmel and Kadmon 1998, Kadmon and Harari-Kremer 1999). This is a very time-consuming and labor-intensive process, which tends to limit analysis to small areas. The digitizing itself is generally dependent on the ability of the interpreter and may lead to results that are not objective due to skill level and human bias (Kadmon and Harari-Kremer 1999). The assumption is often made that manual interpretation is 100% accurate, but assessing the accuracy of this method is difficult according to Congalton and Green (1993) and Carmel and Kadmon (1998).

2.2 Classification Problems of High Resolution Panchromatic Imagery

The historical aerial imagery analyzed in this project is limited in spectral information and has high spatial detail. These two variables can present some difficulties with the use of common digital classification and image processing techniques. The first challenge is the spectral resolution, which is only one band. This band lacks detailed spectral information.

Most panchromatic aerial films are sensitive to the visible spectrum but also require filtering to take into account haze and atmospheric conditions. The film is generally filter exposed to the green and red visible wavelengths and not the blue wavelengths, to cut down on atmospheric haze. The resulting image records in black and white the tonal variations of the landscape in the scene (U.S. Army Corps of Engineers 1995). Common classification methods are limited in accuracy and usability when there is only one band to work with (Short and Short 1987, Anderson and Cobb 2004, Caridade et al. 2008). Kadmon and Harari-Kremer (1999) and Carmel and Kadmon (1998) approached the limitations of having only one band of information in several ways. Carmel and Kadmon (1998) used a combination of illumination adjustment and a modified maximum likelihood classifier that used neighborhood statistics to achieve classification accuracies of over 80% in a study of long-term vegetation patterns using gray level aerial imagery. This research showed that the relationship between neighborhood pixels was an important factor in achieving improved classification accuracy. Kadmon and Harari-Kremer (1999) concentrated on training data and ancillary data to produce vegetation maps from black and white aerial photos from 1962 and 1992. The accuracy of using a maximum likelihood classifier was about 80%. Their study stresses the importance of carefully considered training data and the utility of digital image processing of historical aerial photography in vegetation change detection studies. Mast et al. (1997) researched long-term change detection of forest ecotones using gray level aerial imagery from 1937-1990. Density slicing was used, after determining the range of brightness values for tree cover across all imagery, to obtain a classification of tree cover versus no tree cover.

Results were satisfactory, although no accuracy assessment was reported; the significance of object brightness values for gray level imagery was nevertheless established. The second challenge when analyzing this imagery is that higher spatial resolution does not generalize features to the degree that coarse or medium scale imagery does, which allows much more detail to be considered in an image. Individual trees, buildings and sidewalks become visible when image detail is more perceptible in these 1-meter resolution images. This makes visual interpretation easier but can cause problems with automated classification, especially when spectral information is limited or nonexistent. High spatial resolution can increase within-class variances, which can cause uncertainty between classes. Browning et al. (2009), in their study of historical aerial imagery as a data source, emphasized the importance of object scale when analyzing imagery. Some objects may be larger than a pixel, referred to as H-resolution, and some objects may be smaller than a pixel, referred to as L-resolution. This can make it more difficult to obtain consistent classification results across a scene containing objects at multiple scales. Spatial autocorrelation is also an important factor when considering this concept, as all natural scenes in remote sensing exhibit some type of spatial autocorrelation; the image organization is something other than random noise (Strahler et al. 1986). The challenges of limited spectral information and high spatial detail can lead to a number of features in an image having similar gray level signatures and a great deal of confusion between class types (Fauvel and Chanussot 2007). In turn, a per-pixel classifier such as the maximum likelihood classifier has difficulty distinguishing between a medium gray field and water in a panchromatic image.

Panchromatic image classification can be improved by considering the relationship between neighborhood pixels, as in texture analysis and object-based image analysis (Alhaddad et al. 2009, Myint and Lam 2005, Caridade et al. 2008).

2.3 Statistical Texture Indicators

Image texture is one of the most important visual indicators in distinguishing between homogeneous and heterogeneous regions in a scene. The human interpreter uses shape, texture, size, pattern, shadow, arrangement and context of elements in an aerial photo to distinguish between objects in the image (Campbell 2008). According to Tuceryan and Jain (1998), texture is easy to discern in an image but difficult to define, and there is no one generally accepted definition. One way to define texture is to consider it as the spatial variation of the intensity values in a region of an image (Tuceryan and Jain 1998). This regional variation in intensity values implies that the evaluation of texture is a neighborhood process and that a single pixel does not create texture on its own. Texture is also a quality of an image scene that corresponds to a pattern that is part of the structure of the image. In a natural scene, an area of farmland and a forested area comprise two separate visual patterns in separable regions. These regions may also contain secondary patterns with characteristics such as brightness, shape, and size. A field may have a planting pattern, and a forest may be comprised of deciduous and coniferous trees, giving the area a distinctive sub-pattern that has its own brightness, shape, and size (Srinivasan and Shobha 2008). Texture as a property of an object or regional feature in an image can be described as fine, smooth, coarse, etc. Tone is the range of shades of gray in an image.

According to Haralick (1979), tone and texture are interdependent concepts, in that both are always present in an image to varying degrees. Haralick (1979) explains this interrelationship as patches in an image that have either little variation in tonal primitives (tone) or great variation in tonal primitives (texture). The work of Haralick et al. (1973) was the foundation for most of the later research relating to image texture analysis. Their work provided a computational method to determine textural characteristics in an image scene and discussed several widely used statistics for image texture recognition: contrast, correlation, angular second moment, inverse difference moment and entropy. Contrast measures the amount of local variation in an image. Correlation measures the linear dependency of gray levels in the image. Angular second moment measures local homogeneity. Inverse difference moment also measures local homogeneity but relates inversely to contrast. Entropy measures randomness of values. Image analysis may be performed using these measures either alone or in combination. There are three main approaches to texture analysis: statistical, spectral and structural. Statistical methods are based on local statistical parameters such as the co-occurrence matrix and variability within moving windows. Spectral methods include analysis using the Fourier transform, and structural methods emphasize the shape of image primitives (Srinivasan and Shobha 2008). This study utilized statistical methods, including the co-occurrence matrix, the occurrence measures and moving windows. By evaluating the spatial distribution of gray values using statistical methods, a set of statistics can be derived from the distributions of neighboring features throughout the image. There are first order and second order texture statistics.


First order statistics such as mean, standard deviation, and variance analyze pixel brightness values without analyzing the relationships between the pixels. Second order statistics, on the other hand, analyze the relationships between two pixels; these measures include contrast, dissimilarity, homogeneity, entropy, and angular second moment (Srinivasan and Shobha 2008). First order and second order statistics are used in this study as a method to improve the classification accuracy of panchromatic aerial photos. The analysis of texture is a technique that has been used to increase classification accuracy in both gray level image analysis and multispectral analysis. Haralick et al. (1973) conducted the first major study of texture as an imagery analysis tool. They demonstrated the utility of the Gray Level Co-occurrence Matrix (GLCM) as an analysis tool for panchromatic aerial photographs and multispectral imagery, even though the computer processing constraints of the time hindered their study. The classification accuracy in their study was 82% for the panchromatic aerial imagery. Caridade et al. (2008) used the GLCM and a variety of moving window sizes to achieve an overall classification accuracy of 83.4% on black and white aerial photos with four land cover classes. The GLCM uses statistics such as dissimilarity, angular second moment, homogeneity, contrast, and entropy to statistically determine the frequency of pixel pairs of gray levels in the image. Caridade et al. (2008) also discuss the variation of land cover type accuracies throughout an image. Their study shows that certain land cover types such as water may achieve accuracy levels of 100% while others such as bare ground are much lower at 76.5%. Cots-Folch et al. (2007) used the GLCM to train a neural network classifier, but the highest accuracy obtained was only 74%. Their study stated that better training data and ancillary data sources could be used to improve the results.
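To make the co-occurrence measures concrete, the following sketch computes several of the Haralick statistics for a single-band image using the scikit-image library. The file name and the single distance/angle pair are illustrative assumptions, not the parameters used by any of the studies cited here or by the ENVI workflow in this thesis.

```python
# Minimal sketch: Haralick-style GLCM measures on an 8-bit gray level image.
import numpy as np
from skimage.io import imread
from skimage.feature import graycomatrix, graycoprops

image = imread("pan_photo.tif")  # hypothetical 8-bit single-band image

# Co-occurrence of gray level pairs for horizontal neighbors (distance 1,
# angle 0), normalized so entries are pixel-pair probabilities.
glcm = graycomatrix(image, distances=[1], angles=[0],
                    levels=256, symmetric=True, normed=True)

# Second order measures discussed above (ASM = angular second moment).
for prop in ("contrast", "correlation", "ASM", "homogeneity"):
    print(prop, graycoprops(glcm, prop)[0, 0])

# Entropy is not provided by graycoprops, so compute it from the matrix.
p = glcm[:, :, 0, 0]
print("entropy", -np.sum(p[p > 0] * np.log2(p[p > 0])))
```

Moving-window texture images, like those used later in this research, apply the same idea by computing a measure inside a small window centered on each pixel and writing the result to a new band.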


Maillard (2003) compared the GLCM to semi-variogram and Fourier spectra methods and found that the GLCM works better in areas where textures are easily distinguished, while the semi-variogram is better in areas where texture is more similar. The Fourier method was less successful than either of the other two methods. Alhaddad et al. (2009) found that the GLCM and mathematical morphology produced results which were closer to visual interpretation than other texture analysis methods. One of the main utilities of texture analysis, as it applies to improving the classification of panchromatic imagery in particular, is that it increases the dimensionality of the imagery from one band to multiple bands. A new band is created for each texture function. This increased dimensionality can help alleviate some of the problems of class separability that arise when trying to classify historical aerial photos (Halounova 2009). Halounova used a combination of texture, filtering and object oriented classification to achieve overall accuracy levels between 89% and 92%. Their methodology of increasing the dimensionality of panchromatic imagery to achieve more separability between land use/land cover classes was an important influence on this thesis research. In areas of heterogeneous objects, the texture information in neighborhood pixels is a consideration. Common classification algorithms that rely on spectral information at the pixel level do not consider spatial information. This spatial information can become very important when trying to discern land cover types such as urban areas (Myint and Lam 2005). Two types of analysis can assist the classification process: region-based analysis and window based analysis.


Region-based analysis involves using image segmentation, and window based analysis can be used in pre- or post-classification to filter noise from the results (Gong et al. 1992). The importance of the spatial aspect of texture analysis is illustrated in many studies (Haralick et al. 1973, Gong et al. 1992, Hudak and Wessman 1998, Myint and Lam 2005, Erener and Duzgun 2009, Pacifici et al. 2009). This study used region-based analysis during object-based image analysis and window based analysis through the GLCM.

2.4 Image Enhancements and Filtering

Texture analysis in combination with image pre-processing such as principal component analysis has been explored by Awwad (2003). His study, which utilized a 1941 gray level photo, used texture analysis windows of different sizes and then combined the results to create an image with sixteen layers. Principal components analysis (PCA) was used to reduce the dimensionality of the resulting image. He combined several digital processing techniques, but overall accuracy was only 58%. Much of the literature on using digital image processing techniques for classifying gray level aerial photos does not make use of multiple texture window sizes in combination. Even though examples are rare in the literature and the accuracy reported by Awwad (2003) was low, the technique has promise. Halounova (2005, 2009) also combined several texture window sizes but used filtering and object oriented classification rather than PCA to achieve classification accuracies over 90%. Image enhancements such as filtering and texture add multiple channels to the one band panchromatic image and allow it to be processed in a similar fashion to a multiple band image. There is room for more research using this type of methodology with different parameters and different pre- or post-processing steps such as convolution filtering, edge detection and smoothing windows.
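As a rough illustration of the stack-then-reduce idea described above, the sketch below builds texture layers at several window sizes from one panchromatic band, stacks them into a multi-band image, and reduces the stack with PCA. The variance measure, the window sizes, and the random stand-in image are illustrative assumptions, not the parameters of Awwad (2003) or Halounova (2005, 2009).

```python
# Hedged sketch: multi-window texture stacking followed by PCA reduction.
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(img, size):
    # First order texture: variance inside a size x size moving window.
    mean = uniform_filter(img, size)
    return uniform_filter(img * img, size) - mean * mean

gray = np.random.randint(0, 256, (512, 512)).astype(np.float64)  # stand-in image

# One texture band per window size, stacked with the original band.
bands = [gray] + [local_variance(gray, s) for s in (3, 5, 7, 11)]
pixels = np.stack(bands, axis=-1).reshape(-1, len(bands))

# PCA via the covariance matrix; keep the two strongest components so most
# of the stack's information survives in far fewer layers.
pixels = pixels - pixels.mean(axis=0)
_, eigvecs = np.linalg.eigh(np.cov(pixels, rowvar=False))
pca_image = (pixels @ eigvecs[:, ::-1][:, :2]).reshape(512, 512, 2)
```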


Edge detection is another important consideration when trying to separate a scene into distinct objects. A natural scene such as an aerial photo does not necessarily have a clear relationship between an object and a background. Anderson and Cobb (2004) provided a new unsupervised hybrid classification algorithm based on edge detection and thresholding for pixel classification. Nearest edge thresholding outperformed both the maximum likelihood and ISODATA clustering classification schemes. Their study illustrated the importance of edge detection between features in gray level aerial photos. Li et al. (2008) also conducted research that concentrated on the importance of edge detection and shape characteristics. The process was automated using ArcGIS Model Builder, and results were compared to manual digitizing, with the model correctly identifying 70% of the manual classifications. Hu et al. (2008) used grayscale thresholding for image segmentation and emphasized the importance of transition regions between objects in a scene and the ability to segment objects in an image. Transition regions between objects can be problematic when classifying complex scenes, as there can be multiple areas in the image with different gray scales between objects, causing classification errors and a salt and pepper effect. Texture filters in combination with neural network classifiers are another methodology that has shown some success in land use/land cover classification of gray level aerial photos. Ashish (2002) used several artificial neural network (ANN) classifiers based on histograms, texture and spatial parameters with some success on 1993 gray level aerial photos. Textural parameters yielded the highest overall accuracy at 92%. His study further showed the importance of texture parameters for classification of gray level aerial photos.


Another study, conducted by Pacifici et al. (2009), used a neural network classifier and a simplification procedure with some success on the panchromatic bands of WorldView-1 satellite imagery. After the simplification procedure, called network pruning, was applied, texture was optimized and input features were reduced, producing classification accuracies above 90% in terms of the Kappa coefficient. Their study provided another example of how texture parameters can improve the classification accuracy of different types of classifiers using high resolution panchromatic imagery.

2.5 Image Segmentation and Object-based Image Analysis

Considering the high spatial resolution of gray level aerial photos and the lack of spectral information, object-based image analysis is another technique that has been successful in classifying high spatial resolution imagery. Object-based image analysis (OBIA) is a method of image analysis that uses objects in a scene rather than individual pixels to derive information from the imagery. OBIA is a two-part process consisting of image segmentation and then image classification. The image is first divided into homogeneous and adjacent regions, which take into account texture, region context, shape and spectral information during the segmentation phase. Image segmentation reduces the complexity of the image and produces regions which can in turn be considered meaningful to the image interpreter.
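A minimal sketch of the segmentation half of OBIA on a single-band image is shown below, using the Felzenszwalb graph-based segmenter from scikit-image. The file name, parameter values, and the per-object mean gray level attribute are illustrative assumptions, not the algorithm used in any study cited here.

```python
# Hedged sketch: segmenting an image into objects, then a per-object attribute.
import numpy as np
from skimage.io import imread
from skimage.segmentation import felzenszwalb

pan = imread("historic_photo.tif")  # hypothetical gray level aerial photo

# Group pixels into homogeneous adjacent regions; larger scale and min_size
# values produce fewer, coarser image objects.
segments = felzenszwalb(pan, scale=100, sigma=0.8, min_size=50)
print("image objects:", segments.max() + 1)

# Each object can then be classified as a unit, e.g. by its mean gray level.
object_means = np.array([pan[segments == i].mean()
                         for i in range(segments.max() + 1)])
```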


OBIA was compared to pixel-based classification in a study by Pillai and Wesberg (2005) using gray level aerial imagery from 1965 and 1995. Their study illustrated how scale dependency can affect classification results depending on the objects studied. Scale dependency of individual landscape elements can also affect the usefulness of texture parameters, as illustrated in Resler et al. (2004). Change at the scale of individual trees was not statistically significant between pixel-based classification and object-based classification. Object-based classification was more accurate when comparing patches of trees in high spatial resolution panchromatic imagery. Their study illustrates the importance of determining land use categories and object scale when classifying imagery. Elmqvist et al. (2008) performed OBIA on the panchromatic band of an Ikonos image and found that spectral information provided the best segmentation results. Classification accuracies were fairly low for their study but outperformed pixel-based classification. Laliberte et al. (2004) used a combination of low-pass filtering and object-based image analysis on gray level aerial photos, successfully integrating gray level aerial photos and satellite imagery in a change detection study. Middleton et al. (2008) successfully used feature extraction and a support vector machine (SVM) supervised classifier to extract features from a 1947 aerial image in a change detection study. One of the main conclusions of their study was that classification accuracy of the panchromatic image depended on image quality. Historic panchromatic imagery is not always of good quality due to the age or deterioration of the film, so a successful methodology for classifying this type of imagery needs to work across various levels of image quality. The literature regarding classification of gray level aerial photos concentrates for the most part on replacing manual digitizing with digital image processing techniques. There is a gap in the literature in regard to using digital image processing to help facilitate digitizing. By combining digital image analysis techniques such as texture and object-based image analysis with GIS vector capabilities, digitizing of land cover classification zones can be enhanced and in some cases possibly eliminated.


CHAPTER 3: CONCEPTUAL FRAMEWORK AND METHODOLOGY

3.1 Description of the Study Area

The study area for this project is near Ogden, Utah (Figure 1). The area is in north central Utah (Figure 2) and consists of a variety of land cover types including agricultural land, impervious surfaces, grassland, forest and water. The Ogden study area does not provide an example of dense urban land cover, so a secondary area of interest was chosen in Salt Lake City, Utah (Figure 3). The Salt Lake City study area includes a park and a variety of residential and commercial land cover. By using two study areas with a variety of textures and objects in the scene, this research can show the usefulness of digital image processing across two completely different areas and images. The classification results concentrate on the Ogden imagery, as this imagery has better defined and larger areas of land class types. The Salt Lake City image is used mainly to see how the same techniques perform in an urban area. Urban areas have their own unique classification challenges, which are increased when trying to classify panchromatic imagery. Another reason the Ogden image was the main focus of this research is that this imagery was originally flown for FSA for agricultural purposes. It is also likely that much of the historical imagery in the vault at APFO will be used to further study historical agricultural change processes.

3.2 Description of Data

The image of Ogden, Utah from 1958 was obtained from the Aerial Photography Field Office's internal imagery storage network. The Ogden study area was clipped from digital orthophoto quarter quadrangle (DOQQ) 4111256ne from 1958 (Figure 4) and covers approximately 0.5 square miles.


Figure 1 - Aerial photo of Ogden study area

Figure 2 - Overview of study areas in relationship to the state of Utah


Figure 3 - Aerial photo of Salt Lake City study area

The image was scanned from black and white film at APFO at a standard 25 microns, which produces about 1016 dots per inch (DPI). The imagery was originally flown at 40,000 feet, producing a pixel resolution of 1 meter, and the bit depth of the image is 8 bits. This imagery was also orthorectified at APFO using the Socet Set 4x software suite and was rectified to the Universal Transverse Mercator (UTM) coordinate system zone 12, North American Datum of 1983 (NAD 83). The imagery is in GeoTIFF format, which can be used in a variety of imagery analysis and GIS programs. The image of Salt Lake City, Utah from 1977 (Figure 5) was obtained from the Utah State Automated Geographic Reference Center interactive imagery website: http://gis.utah.gov/images/sgidraster/SLCo_1977_DOQ.html.


The Salt Lake City study area was clipped from Mosaicked Digital Orthophoto Quadrangle (MDOQ) q1219_1977 and was scanned and orthorectified at APFO using the same parameters and methods as the 1958 Ogden imagery. Q1219_1977 is a mosaic that was created from original DOQQs using Socet Set 4x and interactive seaming. The image resolution is 1 meter and the bit depth is 8 bits.

Figure 4 - Ogden DOQQ study area


Figure 5 - Salt Lake City MDOQ study area

3.3 Methodology

3.3.1 Conceptual Overview

The research for this study involved a number of steps. The preliminary image processing included creating a subset of the study area from both the 1958 imagery and the 1977 imagery. A subset was used to cut down on processing and digitizing time. Once the study areas were created, the classification scheme was determined, and finally heads-up digitizing was performed on both images in order to obtain the digitized baseline information for comparison to automated classification and to use as ground truth data to test the accuracy of the digital image processing techniques.

After the preliminary processing was completed, a number of digital image processing techniques were performed on the imagery (see Figure 6). The original imagery was classified using supervised and unsupervised classifiers to form the classification baseline information. Then four main digital image processing techniques were used to try to improve the classification: convolution filtering, texture analysis, principal components analysis, and object-based image classification. Texture analysis was used to create layer-stacked images, which increased the dimensionality of the original one band image to improve classification results. Principal components analysis was used to decrease the dimensionality of the multiple layer texture images, and in one case the first principal component image derived from the multi-layer texture image was layer stacked with the original one band image. The final digital image processing component in the research was image post-processing to refine the most promising results for GIS analysis. After image post-processing, an accuracy assessment was completed to compare the results of each classification with the digitized baseline information obtained by visual interpretation (heads-up digitizing).

3.3.2 Software Utilized

Three software programs were used in this project, as no single software suite available to me provided all the tools needed for this research. The imagery analysis programs used were ERDAS Imagine version 11.0, ENVI 4.8 and ENVI EX 4.8. The GIS software used was ArcMap 10.0. ERDAS Imagine has a good set of texture analysis and filtering tools. ENVI EX and ENVI have the benefit of integration with the GIS software, and ENVI EX provided a wizard-based feature extraction toolset for object-based image classification.


Figure 6 - Main workflow processes


The main interface used to provide the baseline land use/land cover zones to aid or facilitate the manual digitizing process is ArcMap 10, as this software has good vector tools and the ability to integrate ENVI image analysis tools into ArcMap Model Builder.

3.3.3 Preliminary Processes

The study area was clipped from the original DOQQs using the ERDAS Imagine subset tool. The area covers approximately 0.5 square miles in both project areas, to facilitate digitizing and image processing. Much of the image processing, including the use of convolution filters, texture analysis and classification methods, required trial and error to find the best settings and analysis methods for the imagery. The best results were analyzed further using post processing, vector conversion and editing. Heads-up digitizing was performed on the Ogden and Salt Lake City imagery. This provided the digitized baseline information as ground truth to be used later in the classification accuracy assessment. Heads-up digitizing was performed using ESRI's ArcMap 10.0 software. A geodatabase was created for both the Ogden imagery and the Salt Lake City imagery. One person performed the visual interpretation of the imagery for the sake of consistency. The interpreter has had eight years of work experience using photo interpretation to create a variety of map types for the Defense Mapping Agency (now the National Geospatial-Intelligence Agency). Completion times were recorded so that a comparison could be made between manual digitizing and digital image processing to determine the efficiency of digital image processing.


The determination of land use classes was an important consideration, as it had a great deal of impact on the final results of image classification, especially for panchromatic imagery, since so many land use/land cover types have similar digital number (DN) values. Classification schemes in previous studies using black and white aerial imagery have used relatively few categories (Kadmon and Harari-Kremer 1999, Laliberte et al. 2004, Okeke and Karnieli 2006, Pringle et al. 2009). This study includes three levels of classification detail for the study areas. The approach looked at the classification of the imagery in a bottom-up manner, going from a high level of detail in representing the land cover types existing in the imagery to grouping these types into larger categories. This strategy was used to determine how useful detailed digital analysis of the imagery was compared to visual interpretation. The first level of classification of the Ogden imagery was based on eight land use/land cover classes: water, forest, grassland, dark fields, medium fields, light fields, bare earth and impervious surface (Table 1). At this level it was too difficult to represent the cropland as one class, as there is too much variation between fallow fields and fields that are growing or wet. There was also confusion between the most representative digital number values for dark, medium, and light fields, as there are pattern variations in the respective fields.

Table 1 - First level classification Ogden land use/land cover classes

Class Name - Description
Water - Lakes, reservoirs, rivers
Forest - Areas of trees with a canopy cover greater than 50%
Grassland - Areas dominated by grasses and herbaceous plants with little or no tree or shrub cover
Dark Fields - Agricultural cropland area characterized by dark gray tone (DN ~ 0-122)
Medium Fields - Agricultural cropland area characterized by medium gray tone (DN ~ 100-188)
Light Fields - Agricultural cropland area characterized by light gray tone (DN ~ 151-200)
Bare Earth - Areas of earth, sand, and rock with little to no vegetation
Impervious Surface - Buildings, roads, parking lots


The second level of classification combined the eight classes into three larger groups: cropland, vegetation, and other. Finally, the third level of classification consisted of cropland and non-cropland. The results of these classifications and their impact on classification accuracy were obtained by combining the results of the initial classifications rather than running new supervised and unsupervised classifications to reflect the combined groupings; a small recoding sketch of this grouping follows Table 2. The classification system used on the Salt Lake City image also used a bottom-up approach, starting out with a more detailed classification and then moving to more general groupings. The first level of classification consisted of five land types: commercial, transportation, trees, grass, and residential (Table 2). The second level of classification was reduced to built up areas, vegetation, and transportation. The third level of classification consisted of built up areas and non-built up areas. The Salt Lake City image has entirely different characteristics from the Ogden image, as the Salt Lake City image is comprised of a mixed type urban area without any agriculture, bare earth, forest, or large bodies of water. The added classification difficulty in the Salt Lake City image was that the commercial and residential areas are made up of a mixture of manmade and natural materials. These areas consisted of thousands of small buildings and may be surrounded by either grass or concrete, all of which provide a very complex pattern of shapes and surfaces which were tonally very similar. There were many tonal similarities in the Ogden imagery as well, but land cover types such as dark fields, light fields, and water are fairly homogenous blocks, unlike the patchwork of the urban areas.


Table 2 - First level classification Salt Lake City land use/land cover classes

Class Name - Description
Commercial - Built up area consisting of industrial and commercial complexes
Transportation - Transportation network including major streets and highways
Residential - Mixed area that includes single family homes, apartments, trees, and grass
Grass - Areas dominated by grasses and herbaceous plants (yards, fields)
Trees - Woody vegetation < 20 ft tall
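The sketch below illustrates the bottom-up grouping for the Ogden classes as a simple lookup-based recode of a level 1 classified raster. The integer class codes, and the exact assignment of forest, grassland, water, bare earth and impervious surface to the vegetation and other groups, are assumptions for illustration; the thesis performed the equivalent combination within the GIS.

```python
# Hedged sketch: recoding level 1 classes to the level 2 and level 3 groupings.
import numpy as np

LEVEL2 = {0: "other",       # water
          1: "vegetation",  # forest
          2: "vegetation",  # grassland
          3: "cropland",    # dark fields
          4: "cropland",    # medium fields
          5: "cropland",    # light fields
          6: "other",       # bare earth
          7: "other"}       # impervious surface

classified = np.random.randint(0, 8, (512, 512))       # stand-in level 1 raster
level2 = np.vectorize(LEVEL2.get)(classified)          # cropland/vegetation/other
level3 = np.where(level2 == "cropland", "cropland", "non-cropland")
```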

3.3.4 Unsupervised Classification

Unsupervised classification was performed on the original subsets of the Ogden and Salt Lake City images to provide the unsupervised classification baseline information for comparison to digital classifications with image enhancements. This initial classification was completed using the ENVI 4.8 tools for ArcGIS and the ISODATA clustering algorithm. This clustering algorithm essentially divides the image into naturally occurring groups of pixels; similar pixels are grouped together. Three classification sets were used to process the imagery: 10, 25, and 100 spectral classes. After the imagery was classified, these groups were interactively assigned an information class by visually comparing the classified image and/or reference data. Since many of the spectral classes have similar tonal values and statistics, it was necessary to assign some of these mixed classes to either the most numerous type or the type with the most concentrated areas of pixels. There was room for interpretation, and a certain amount of subjectivity is involved in assigning these classes. The interpreter needs to be familiar with the study area, and when some classes were divided between seemingly equal areas, it was difficult to determine the best class to assign the pixels to. In some cases a spectral class was divided among three or four information classes. At this stage there was not a method to split these classes into their respective groups using the ENVI or ArcGIS software.


It is possible to use masking and a technique called cluster busting, but this methodology was not used in this research, as it requires a significant amount of extra processing. The unsupervised classification process did provide some useful general information about the imagery. It was very difficult to assign classes to the detailed level land classification system used for both the Ogden and the Salt Lake City images. After aggregating classes and assigning them a land use/land cover type from the classification scheme, about five classes could be distinguished in the Ogden image and three in the Salt Lake City image. A useful tool to visualize how the clusters in an image are derived is a dendrogram. Dendrograms were created using the ArcGIS software for the same numbers of classes and iterations as the unsupervised classifications (Figures 7, 8, 9). A dendrogram is a graphic diagram in the form of a tree that is used to analyze clusters in a signature file (ESRI 2011). The dendrograms are used to show the clustering process from individual classes to one large cluster. The dendrogram tool takes an input signature file created in ArcMap and creates the diagram based on hierarchical clustering. The classes are clusters of pixels, and the graph illustrates the distances between merged classes. The dendrogram helps to illustrate how the 10, 25, and 100 classes are distributed using the ISODATA classifier. Many of the classes overlap and are very close together numerically, which is why unsupervised classification of panchromatic imagery often gives the user unsatisfactory results. The dendrograms also illustrate the relatively small changes in class distances between having 10, 25, and 100 classes. Dendrograms of the Salt Lake City imagery were very similar, except for slight differences in the distances between pairs of combined classes (Figure 10).


The ISODATA classifier only returned 67 classes instead of 100 for the Salt Lake City image, and 93 out of 100 for the Ogden image.
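The sketch below shows how a comparable dendrogram can be produced from class signatures with SciPy's hierarchical clustering. The class mean values are invented for illustration; they are not the signatures behind Figures 7 through 10.

```python
# Hedged sketch: dendrogram of spectral class signatures.
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram

# Hypothetical mean DN values for ten spectral classes from an ISODATA run.
class_means = np.array([[12], [35], [61], [88], [104],
                        [121], [143], [170], [198], [231]], dtype=float)

# Merge the numerically closest classes first; short merge distances indicate
# overlapping, hard-to-separate spectral classes.
tree = linkage(class_means, method="average")
dendrogram(tree, labels=[f"class {i}" for i in range(10)])
plt.ylabel("distance between merged classes")
plt.show()
```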

Figure 7 - Ogden dendrogram of ISODATA clustering 10 classes


Figure 8 - Ogden dendrogram of ISODATA clustering 25 classes


Figure 9 - Ogden dendrogram of ISODATA clustering 100 classes


Figure 10 - Distances between classes from Salt Lake City dendrograms (panels: 10 classes, 25 classes, 100 classes)

A K-Means unsupervised classifier was also used to classify an Ogden texture image incorporating the mean, variance and homogeneity bands. This classifier provided a more satisfactory result on the texture images than the ISODATA classifier did. The K-Means classifier in the ENVI software uses a set number of classes provided by the analyst, and classes are determined after the classifier iterates through the image and the optimal separability is reached based on the distance to the mean (ENVI 2011). The ISODATA classifier had difficulties with the texture image and returned a completely gray image unless the classes were increased to well over 25. Considering how time consuming it was to assign classes to the result, the K-Means classifier was used, with 10 classes and 25 classes on the texture image.
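A minimal sketch of this kind of unsupervised clustering on a layer-stacked texture image is shown below, using scikit-learn's k-means. ISODATA additionally splits and merges clusters between iterations, which this sketch does not reproduce, and the random stand-in bands are assumptions.

```python
# Hedged sketch: k-means clustering of a mean/variance/homogeneity stack.
import numpy as np
from sklearn.cluster import KMeans

texture_stack = np.random.rand(512, 512, 3)   # stand-in three-band texture image
pixels = texture_stack.reshape(-1, 3)

# Ten spectral classes, as in one of the runs described above; each cluster
# must still be assigned an information class by the analyst afterwards.
km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(pixels)
class_image = km.labels_.reshape(512, 512)
```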

3.3.5 Supervised Classification

Supervised classification was performed on the original image subsets to create the supervised classification baseline information. Later, another supervised classification was performed on images which had been digitally processed or enhanced (filtering or texture analysis). Results of the latter supervised classification were compared to the supervised classification baseline information to determine if these digital image processing enhancements improved classification. Supervised classification was performed using ENVI and ArcGIS 10 software. Supervised classification, unlike unsupervised classification, involves the user creating training samples from land use/land cover classes that are determined to be present in the imagery. The training sets, called regions of interest (ROIs), were created using ENVI software. This training data was used throughout the supervised classifications performed on the original imagery, texture images, PCA images, and the filtered images. The final training sets for both study areas were determined by trial and error. A training set was developed which had about twice as many samples, but this set did not significantly improve classification results for either image. These larger sets did, however, increase processing time, so in the interest of efficiency the smaller training sets were used throughout (Figures 11 and 12). Training sets are inherently subjective and require the analyst to be able to distinguish land use/land cover types.


Figure 11 - Training sample distribution for the Ogden image

Figure 12 - Training sample distribution for the Salt Lake City image


Several supervised classifiers were used to evaluate the imagery using ENVI software. The minimum distance classifier, the maximum likelihood classifier, the neural net classifier, and the SVM classifier were examined. Each classifier provides distinct advantages and disadvantages. The minimum distance to means classifier determines the mean of each pre-defined class and then assigns pixels to the class with the closest mean in Euclidean distance. One of the advantages of this algorithm is that it classifies all pixels and processes very quickly. The maximum likelihood classifier assumes that each class is normally distributed and is based on the highest probability that a pixel will be assigned to a particular class. When classes have a multimodal distribution, this classifier will not provide optimal results. An advantage of this method is that the classifier considers the mean and covariance of the samples. The neural net classifier provided by ENVI software uses back propagation to determine the class assignment of pixels. An advantage of the neural net classifier is that it does not make assumptions about the distribution of the data. The Support Vector Machine (SVM) classifier available in the ENVI software works with any number of bands and has good accuracy when automatically separating pixels into classes. This classifier also maximizes the boundary between classes, which may be useful for distinguishing land use/land cover types with similar characteristics. Another advantage of this classifier is that it works well on imagery that has a lot of noise (ENVI 2011, Jensen 2005).
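Of these, the minimum distance classifier is simple enough to sketch directly: each pixel is assigned to the training class with the nearest mean in Euclidean distance. The training means and stand-in scene below are invented for illustration, not the ROI statistics used in this research.

```python
# Hedged sketch: minimum distance to means classification.
import numpy as np

def minimum_distance_classify(image, class_means):
    # image: rows x cols x bands; class_means: n_classes x bands.
    pixels = image.reshape(-1, image.shape[-1]).astype(np.float64)
    # Euclidean distance from every pixel to every class mean.
    dists = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return dists.argmin(axis=1).reshape(image.shape[:2])

# Hypothetical single-band training means, e.g. water, fields, impervious.
means = np.array([[15.0], [120.0], [205.0]])
scene = np.random.randint(0, 256, (256, 256, 1))
labels = minimum_distance_classify(scene, means)
```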


3.3.6 Image Enhancement and Texture Analysis

Digital image processing techniques were explored to determine if classification results could be improved. Texture analysis, convolution filtering, and contrast stretching enhance some of the spatial characteristics of the imagery. For example, contrast stretching brings out more differences between light and dark areas of the imagery, and convolution filters can enhance edges. Low pass filters can smooth out areas of noise in an image, such as the variations found throughout the field areas in the Ogden imagery, while high pass filters make the image appear more crisp or sharp (Jensen 2005). Convolution filtering, contrast stretching and texture filtering were used in a variety of combinations to enhance the study areas and try to improve classification. A two standard deviation contrast stretch was applied to both study areas to enhance the contrast and sharpness of the imagery, as both original images lacked definition in the light and dark areas (Figure 13). The Ogden study area had a DN range of 0-235 and the Salt Lake City study area had a DN range of 0-187. All subsequent filtering and texture analysis was performed on the stretched images.

Figure 13 Unstretched images compared to contrast stretched images


Convolution filtering was performed on the study areas using ENVI software. High pass filtering was used to help sharpen the imagery using a variety of kernel sizes: 3x3, 5x5, 7x7, and 11x11. Low pass filtering was applied to the imagery to smooth out noise in the field areas. Again, 3x3, 5x5, 7x7, and 11x11 kernels were examined. As the kernel gets larger with low pass filtering, the detail becomes more generalized or blurred, as this type of filtering preserves the low frequency parts of the image. A median filter was also examined using the previously mentioned kernel sizes. This filter has a smoothing effect on the image, but the edges remain somewhat crisper than with the low pass filter. ENVI also provides several edge enhancing filters that were used to process the original study images. The filters used in this study were Laplacian, Roberts, and Sobel. The Laplacian filter has an editable window size, whereas the Roberts and Sobel filters do not have editable kernels or window sizes. Edge filtered images were created using the Laplacian filter with window sizes of 3x3, 5x5, 7x7 and 11x11. The Laplacian filter was also used in combination with the Gaussian low pass filter to try to reduce some of the noise that results when creating the Laplacian filtered images.
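The sketch below approximates these filter families with SciPy equivalents (illustrative stand-ins for the kernel behavior, not ENVI's exact filters; function names are hypothetical):

    import numpy as np
    from scipy import ndimage

    def low_pass(image, size=3):
        # mean (low pass) filter: smooths noise; larger kernels blur more
        return ndimage.uniform_filter(image.astype(float), size=size)

    def high_pass(image, size=3):
        # high pass as original minus low pass, emphasizing edges
        return image.astype(float) - low_pass(image, size)

    def median(image, size=3):
        # median filter: smooths while keeping edges crisper than the mean
        return ndimage.median_filter(image, size=size)

    def laplacian_of_gaussian(image, sigma=1.0):
        # Laplacian edge enhancement preceded by Gaussian smoothing,
        # analogous to combining the Laplacian and Gaussian low pass filters
        return ndimage.gaussian_laplace(image.astype(float), sigma=sigma)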
Texture images were created using ENVI software and are based on the GLCM, which includes the following texture characteristics: mean, variance, homogeneity, contrast, dissimilarity, entropy, second moment, and correlation. Another set of texture images was created using the occurrence measures, which consist of data range, mean, variance, entropy, and skewness. Each set of texture images was created using a 3x3, 5x5, 7x7 and 11x11 processing window. The processing window measures the number of times each gray level occurs in that particular part of the image (ENVI 2011). As the processing window becomes larger, image detail is lost. The texture images created using the GLCM are eight-band images, and the texture occurrence images are five-band images; thus the dimensionality of the imagery is significantly increased by the use of texture. These two texture images were also layer stacked with the original imagery to create nine-band and six-band images. Additional nine-band and six-band images were also created from these two texture images layer stacked with a filtered original image. The resulting images were then classified using unsupervised and supervised classifiers. The accuracy of these classifications was then compared to the classification baseline information using an error matrix. Principal components analysis was used to reduce the number of bands on several composite images. In this way the dimensionality of the imagery is reduced but most of the information in the imagery is maintained. PCA was performed on a multi-layer image consisting of images created from the variance, mean, and homogeneity texture operators, plus the original unprocessed image. The result was a two-layer image which incorporates information from the original image and the texture layers. ENVI software also provides tools to perform mathematical morphology filtering, which is a non-linear process based on shape. Morphology filtering was performed on both the original imagery and the 5x5 occurrence texture images. Supervised and unsupervised classification was then performed to determine the accuracy as compared to the classification baseline information.
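As a hedged illustration of the texture stacking and PCA band reduction described in this section (Python stand-ins for the ENVI workflow; the occurrence measures chosen, window size, and function names are assumptions):

    import numpy as np
    from scipy import ndimage
    from sklearn.decomposition import PCA

    def texture_stack(image, size=3):
        """Occurrence-style texture layers over a size x size moving window:
        local mean, variance, and data range, stacked with the original band."""
        img = image.astype(float)
        mean = ndimage.uniform_filter(img, size=size)
        var = ndimage.uniform_filter(img ** 2, size=size) - mean ** 2
        rng = (ndimage.maximum_filter(img, size=size)
               - ndimage.minimum_filter(img, size=size))
        return np.dstack([img, mean, var, rng])

    def pca_reduce(stack, n_components=2):
        """Collapse a multi-layer stack to its first principal components,
        keeping most of the variance in fewer bands."""
        rows, cols, bands = stack.shape
        scores = PCA(n_components=n_components).fit_transform(
            stack.reshape(-1, bands))
        return scores.reshape(rows, cols, n_components)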
3.3.7. Object-based Image Analysis
Another digital image processing technique explored in this research was object-based image analysis. Object-based image analysis is based on regions or groups of pixels in an image rather than single pixels. Feature extraction was performed using ENVI EX, which provides object-based tools that utilize spatial, spectral, and textural features. The object-based analysis provided by the ENVI software uses an edge-based segmentation algorithm and requires only the scale level as an input parameter. The scale levels range from 0-100, where a high scale level reduces the number of segments that are defined, and a low scale level increases the number of segments that are defined. There should be a balance in determining the scale level by trying to choose a scale that delineates the image object boundaries as well as possible. This level is likely to be different depending on the characteristics of the imagery being analyzed. ENVI provides an interactive preview window to help determine an appropriate scale level for an image. The preview window allows the user to see what effect changing the scale level of the segmentation has on the objects of interest in the image scene before the segmentation runs. This helps to avoid creating numerous unsuccessful segmentation images. After the initial segmentation has been performed, image segment merging can be done. ENVI uses the Lambda-Schedule algorithm, which iteratively merges segments by using a combination of spectral and spatial information. This step is especially helpful when an image has been over segmented, as it enables the aggregation of small segments that may occur from image object variation (ENVI 2011). After segmentation the next step is to find objects and classify the imagery. Objects were chosen interactively from the segmented image and the image was then classified. ENVI EX offers either a K-means classifier or a SVM classifier. Classification and post processing were performed using both available OBIA classifiers. The final step before classification in the ENVI EX feature extraction workflow is the refine results window. In this window there are options to export vectors and smooth the results, similar to using a majority filter on a classified image.


The process for using the feature extraction tools in ENVI EX is designed to make OBIA user friendly. ENVI 4.8 also offers an OBIA classification method called size-constrained region merging (SCRM). This tool is an extension that can be added to ENVI. The tool partitions an image into reasonably homogenous polygons based on a minimum size threshold. The output of the tool is a vector file and an image file. The vector file can be used directly as an initial source to assist visual interpretation, and the image can be further classified using either unsupervised or supervised classification. One of the limitations of this extension is that there is a size limitation of 2MB for the image (Castilla and Hay 2007). All of the layer stacked imagery exceeded the size limitation for using this tool. SCRM was used on the original imagery and the one band dissimilarity, mean, homogeneity, and variance texture images. The second moment, entropy, and contrast bands were not used, as there appears to be a lot of correlation between them and the bands that were selected. The correlation band does not have enough usable information in it to segment it into objects. The output image was then classified using the SVM classifier.
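ENVI's edge-based algorithm and the SCRM extension are proprietary. As a rough open-source analogue (a sketch only, not the method used in this study), graph-based segmentation plays the same role, with a scale parameter that similarly trades segment count against segment size:

    import numpy as np
    from skimage.segmentation import felzenszwalb
    from skimage.measure import regionprops

    def segment(image, scale=100, min_size=25):
        """Graph-based segmentation; a larger scale yields fewer, larger
        segments, loosely mirroring ENVI's 0-100 scale level (an analogy,
        not an equivalence)."""
        # +1 so no segment is labeled 0 (regionprops ignores label 0)
        return felzenszwalb(image, scale=scale, sigma=0.8,
                            min_size=min_size) + 1

    def segment_means(image, segments):
        # mean gray level per segment, a simple per-object attribute
        return {r.label: float(image[segments == r.label].mean())
                for r in regionprops(segments)}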
3.3.8. Post Processing and Automation
The classified images created from the previously mentioned digital processing techniques and classifiers contained varying quantities of island pixels and salt and pepper noise. There are numerous methodologies to reduce these types of areas in a classified image. Majority and minority filtering, clump, sieve, and combine classes are some of the commonly available tools provided in GIS and image analysis software. These processes reduce the complexity of the classification and allow a more cohesive result for further analysis. Post classification processing may also introduce error into the final imagery by smoothing and combining the wrong classes together. It is also not practical to remove noise pixel by pixel, as there may be thousands of areas to examine. The next step in this research was to produce a vector polygon layer that can assist in visual interpretation of the imagery. In order to simplify the procedure of processing the classified rasters and converting them to a vector layer that facilitates visual interpretation, a model was developed using ArcGIS ModelBuilder (Figure 14). This model allows the user to input a classified image, apply a smoothing kernel, aggregate island pixels to a specified tolerance, convert the raster to a vector layer, and smooth and simplify the resulting polygons. For consistency, a majority filter using a 3x3 window and aggregation using a minimum threshold of 25 were used on all the classified images examined. The model parameters for smoothing and simplifying polygons were left open so that adjustments can be made for different images.

Figure 14 Post Processing ArcGIS Model
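The raster half of the model can be approximated in Python as follows; this is a sketch of the majority filtering and island aggregation steps only (the actual model used ArcGIS tools), with the 3x3 window and threshold of 25 taken from the text:

    import numpy as np
    from scipy import ndimage

    def majority_filter(classified, size=3):
        """3x3 modal (majority) filter to remove salt and pepper pixels.
        Assumes class codes are small non-negative integers."""
        def mode(values):
            return np.bincount(values.astype(int)).argmax()
        return ndimage.generic_filter(classified, mode, size=size)

    def absorb_small_regions(classified, min_pixels=25):
        """Aggregate connected regions below a minimum size into the
        majority class of their border pixels, analogous to the model's
        aggregation step."""
        out = classified.copy()
        for cls in np.unique(classified):
            labels, n = ndimage.label(classified == cls)
            for region in range(1, n + 1):
                mask = labels == region
                if mask.sum() >= min_pixels:
                    continue
                border = ndimage.binary_dilation(mask) & ~mask
                if border.any():
                    out[mask] = np.bincount(out[border].astype(int)).argmax()
        return out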


One of the challenges of using vector files that have been converted from raster files is that the polygons have a stepped appearance that follows pixel boundaries. This characteristic appearance is much different from a vector file created through heads up digitizing. A human digitizer classifies an image into recognizable objects using shape, context, texture, shadows, etc. to help determine the boundaries of objects. It would be very difficult if not impossible for a human digitizer to create land use/land cover boundaries at the pixel level. This is one of the main differences between automated classification and classification performed by visual interpretation. The polygon smoothing and aggregation steps used in the model help to reduce some of the stepped appearance created by the raster to vector conversion process (Figure 15). After polygons underwent smoothing and simplification, the result appeared much closer to results obtained through visual interpretation. This process was also an advantage if polygons needed to be reshaped, as there are fewer vertices for each polygon after completing these operations.

Figure 15 Polygon raster to vector, smoothing, and smooth and simplify

Once the vector layer had been processed through the model, it was edited using a custom toolbar in ArcGIS 10 software.
The custom toolbar includes a combination of out of the box tools (the Selection Tool and Cut Polygon Tool) and several custom tools created using C#.net and ArcObjects. The purpose of the custom toolbar is to provide functions to remove small islands by merging them with neighboring polygons. It was implemented as an Add-in, which was easily added to the ArcGIS 10 user interface. The toolbar consists of four custom tools: select by area, merge with smallest neighbor, merge with largest neighbor, and merge with selected polygon. These tools are very similar to raster majority and minority filtering except that the user has more control over them. The tools were then used to further refine the classification using visual interpretation. The automated classifications in essence become the starting point for the manual digitizing effort for the study areas.
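A sketch of the merge with largest neighbor logic, written here with GeoPandas rather than ArcObjects (the real tool was a C#.net Add-in; min_area and the function name are hypothetical):

    import geopandas as gpd
    from shapely.ops import unary_union

    def merge_small_islands(gdf, min_area):
        """Merge polygons smaller than min_area into their largest
        touching neighbor, mimicking the toolbar's behavior."""
        gdf = gdf.copy()
        small = list(gdf[gdf.geometry.area < min_area].index)
        for idx in small:
            if idx not in gdf.index:
                continue  # may already have been merged away
            geom = gdf.loc[idx, "geometry"]
            neighbors = gdf[gdf.geometry.touches(geom) & (gdf.index != idx)]
            if neighbors.empty:
                continue
            target = neighbors.geometry.area.idxmax()
            gdf.loc[target, "geometry"] = unary_union(
                [gdf.loc[target, "geometry"], geom])
            gdf = gdf.drop(idx)
        return gdf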
3.3.9. Accuracy Assessment
One of the most serious limitations of historical imagery is ground-truthing. The imagery is between 33 and 52 years old, and it is likely that many of the objects in the imagery have changed or no longer exist today. Ground-truthing was limited to visual interpretation and image accuracy. The baseline information derived from heads up digitizing was used as ground truth to evaluate the accuracy of the classification of both the original images and the images where digital image processing had been used (i.e. filtering, texture, PCA and segmentation). An evaluation tool called a confusion matrix (or error matrix) was used to compare the classification baseline information and the classifications after image processing enhancement so that there is a comparison of accuracy results. To save time and labor, only the classifications deemed best were evaluated.
The confusion matrix can help to visually represent classification error by use of a table, and it is used to help validate the results of the image classification compared to the ground truth. A stratified random sample of points was used as the sampling strategy for the accuracy assessment. The samples for each image were created using Hawth's sampling tools for ArcMap. The values for the points were derived from the digitized baseline information. Each class consisted of forty sample points except for extremely sparse areas such as impervious surface on the Ogden image and grass on the Salt Lake City image. These sparse areas were underrepresented by a simple random sampling strategy and as such did not give an accurate assessment. Using the stratified sampling strategy allows each land cover type to have a statistically significant number of points. The Ogden image high level classification scheme used eight classes. Forty sample points were chosen for each of the seven most predominant classes and thirty for the sparse class, totaling 310 sample points. The Salt Lake City high level classification scheme used five classes. Again, forty sample points were chosen for each of the four most predominant classes and thirty for the sparse class, totaling 190 sample points. The sample point numbers for each class in the study areas are statistically significant, and for the sake of time and effort a relatively small number of points was chosen for each area based on the number of land use/land cover classes determined for each area. The Extract Values to Points tool was used to get values from the classified image and the ground truth. These values were then combined in one column (e.g. 1-1, 1-3, etc.) to obtain unique value pairs, and then the summarize tool in ArcMap was used to obtain the count. The values were then entered into an Excel spreadsheet which was set up to calculate percentages of overall accuracy, producer's accuracy, errors of omission, user's accuracy, errors of commission, single class accuracy, and the Kappa coefficient. Refer to Appendix 1 for a sample of the error matrices used in this research.
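The spreadsheet arithmetic reduces to a few array operations. A minimal sketch (this mirrors, but is not, the Excel workbook used; rows are taken as reference classes and columns as classified classes):

    import numpy as np

    def error_matrix(reference, classified, n_classes):
        """Confusion matrix plus overall, producer's, and user's accuracy
        and the Kappa coefficient."""
        m = np.zeros((n_classes, n_classes), dtype=int)
        for ref, cls in zip(reference, classified):
            m[ref, cls] += 1
        total = m.sum()
        overall = np.trace(m) / total
        producers = np.diag(m) / m.sum(axis=1)  # 1 - omission error
        users = np.diag(m) / m.sum(axis=0)      # 1 - commission error
        expected = (m.sum(axis=1) * m.sum(axis=0)).sum() / total ** 2
        kappa = (overall - expected) / (1 - expected)
        return m, overall, producers, users, kappa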


CHAPTER 4: ANALYSIS RESULTS AND DISCUSSION
4.1. Manual Digitizing
Heads up digitizing of land cover classes on any type of imagery, whether multispectral or panchromatic, allows the user more control over the results of the classification. The results of this method of classification in general do not require further editing or post processing. On the other hand, the subjectivity of the digitizer has an effect on the results of the classification. It is unlikely that a digitizer would be able to classify an image exactly the same way every time. Digitizing took place in two sessions, with the Ogden imagery taking approximately five hours to complete and the Salt Lake City image approximately three hours. The Salt Lake City image has 331 polygons compared to the Ogden image, which has 172 polygons. The Ogden image took much longer to digitize even though it has approximately half the number of polygons; its polygons and land cover configurations were more complicated when considering the integration of grassland, forest, and water areas on the image. The features on the Salt Lake City image are laid out in a grid pattern separated by wide streets, so even though there were almost twice as many polygons to digitize, the process went more quickly. An important aspect of this research was to show that digital image processing of historical panchromatic imagery could enhance and facilitate visual interpretation of the imagery on a variety of terrains and features. The visual interpretation of the imagery required a zoom level of between 1:1,500 and 1:3,000 on the Salt Lake City image and 1:1,000 and 1:4,000 on the Ogden image. These zoom levels were determined by the digitizer by how well they could see the details in the imagery while still being able to have some reference to the context of the objects being examined.
In the experience of the digitizer, a more consistent result is also achieved if there is not a large variance in the viewing scale of the objects in the scene. If an area is digitized at 1:3,000 and another area at 1:24,000, then the details being observed will not be consistent throughout the study area. Digital image processing, on the other hand, classifies by pixel without involving scale issues. This is a major difference in the methodology of classification. Digitizing at varying scales is both an advantage and a disadvantage compared to digital classification. When the scale was zoomed in at the pixel level, it was impossible to discern what the objects in the imagery were. A large variance in scale can lead to inconsistency, but a small variance in digitizing scale can help the digitizer to consider a feature's relationship to surrounding objects when determining what the object is, unlike most per pixel digital classifications. By using a small variance in digitizing scale for land use/land cover classification of panchromatic imagery, both detail and consistency can be maintained while the expert knowledge of relationships and contexts of features is utilized. This project used relatively small areas of interest. After examining the land use/land cover classes from the beginning of the project to its conclusion, there were areas of the initial digitizing which on further analysis could have been refined or changed, especially in diverse areas containing many intricate changes in the landscape. There was a tendency to generalize areas where the land use/land cover types are fragmented. This tendency is most notable in the southern half of the Ogden image, where the forested areas are broken up by water and grasslands. The initial digitizing was not changed to reflect new perceptions of the land class areas on the imagery.
Some of these inconsistencies have an effect on the final accuracy of the digital classifications, as it was apparent that at some points the digital classification was more correct than the visual interpretation. This is a limitation of the research. One of the major differences found in this research between the manual digitizing classification and the digital image processing classification was the level of detail achieved in the classifications. In the Ogden image the total number of polygons digitized was 172 (Figure 16) and the total number of polygons digitized for the Salt Lake City image was 331 (Figure 17). The digital classifications, in comparison, yielded several thousand polygons before post processing. After post processing most digital image classifications still exceeded the digitized baseline information, but results averaged about 500-1000 polygons. It was a difficult task to digitize very detailed areas on the imagery. This study has shown that by utilizing digital image processing techniques to help facilitate visual interpretation of land use/land cover classes, the analyst can take advantage of the detail and repeatability that digital processes provide while improving the classification accuracy using a GIS in post processing the results. Results using visual interpretation and heads up digitizing may provide more initial accuracy, but digital image processing lends some added consistency to the process.
4.2. Unsupervised Classification
Supervised and unsupervised classification results varied depending on the image, the classification method, pre-processing, and post-processing. Panchromatic imagery presents many challenges, as previously mentioned in this study. The heterogeneity of the study area also has an effect on how successful classification is.
Figure 16 Classification using visual interpretation of the Ogden image

Figure 17 Classification using visual interpretation of the Salt Lake City image


This study concentrated on supervised classification, as this methodology gave better classification results and was far less time consuming once the training classes were obtained. Overall classification accuracy for unsupervised classification was low, ranging from 25-40% on both study areas for the detailed classification. The more generalized classification schemes improved the results by 8-50% (Table 3). The largest improvement was from level 1 to level 3 using 10 spectral classes, the ISODATA classifier, and the Halounova image. Running the unsupervised classification with more classes did not generally improve accuracy, except in the Salt Lake City image with the level 2 land use/land cover classification scheme. Unsupervised classification with ten spectral classes provided the best overall accuracy on both the Salt Lake City and Ogden images. One of the major problems in assigning information classes to spectral classes was that there was so much overlap between classes such as forest and dark fields, and water and medium fields. There was no easy way to separate these areas on the raster image. These unsupervised classifications appear very similar to each other visually (Figure 18). The 100 class ISODATA was more difficult to assign classes to, as many of the areas were very small and appeared at times to be evenly divided between two or three opposing classes such as water, grassland, and medium fields.

Table 3 ISODATA overall accuracy results for Ogden and Salt Lake City study areas

Land use/land cover classification scheme | 10 classes | 25 classes | 100 classes
Ogden level 1 | 39% | 40% | 39%
Ogden level 2 | 48% | 50% | 47%
Ogden level 3 | 55% | 56% | 49%
Salt Lake City level 1 | 39% | 38% | 38%
Salt Lake City level 2 | 51% | 51% | 53%
Salt Lake City level 3 | 67% | 64% | 65%


Although unsupervised classification showed low accuracy in both study areas, the results showed some important trends in the data. In the Ogden image it was very difficult to extract more than five classes, which was an indication that land cover types such as water, medium fields, and grassland are very similar. Panchromatic imagery would require more pre- and post-processing to achieve a more accurate classification using eight land cover types. As the classes were aggregated into larger parent classes, the classification accuracy increased accordingly. Unsupervised classification, even on a small study area such as this, was more time consuming than supervised classification and provided somewhat unsatisfactory results. The Salt Lake City image proved difficult in a different way in that the mixed urban area consisted of commercial, residential, and transportation areas which appear very distinct using visual interpretation but present difficulties for digital classifiers. Urban areas are uniquely difficult to classify on multispectral imagery, as there is such a mixture of impervious surfaces.

Figure 18 Ogden image ISODATA classifications (10, 25, and 100 spectral classes)
Black and white high spatial resolution imagery complicates this situation, as there was an extreme overlap between classes: features such as buildings and mixed surfaces like parking lots and vegetation exist in both residential and commercial areas, making it difficult to distinguish these areas. None of the ISODATA classifications of the Salt Lake City imagery were able to distinguish between all five detail level land cover types. Trees, transportation, and commercial were the only three land cover types that could be classified from the 10, 25, and 100 spectral class ISODATA classifications (Figure 19). Many areas of overlap exist between the commercial and transportation classes in all three unsupervised classifications. The transportation network in this image is a very distinct linear feature when classifying the imagery through visual interpretation, but there are many tonal variations in the pavement, which causes a great deal of confusion for most traditional unsupervised classifiers. Grass and residential land cover types were unable to be distinguished from commercial, transportation, and trees, as there was considerable tonal overlap between these areas.

Figure 19 Salt Lake City image ISODATA classifications (10, 25, and 100 spectral classes)

A 10 and 25 spectral class K-Means unsupervised classification was performed on the Ogden imagery using a layer stacked image consisting of the original image and the following texture characteristics: mean, variance, and homogeneity. Surprisingly, the use of texture did not improve the unsupervised classification using the level 1 land use/land cover types. Overall accuracy was 25% for 10 classes and 34% for 25 classes. This is most likely due to the fact that there was little to no distinction between the field areas, as most of them exhibit a smooth surface. The field areas and the water areas were confused as well. Aggregating the classification into the more generalized classes increased accuracy significantly in the unsupervised classification.
This was particularly apparent in the texture image. Accuracy increased to 54% for the level 2 classification (3 land use/land cover types) and to 71% for the level 3 classification (2 land use/land cover types). The Halounova image, which consisted of texture and filtered layers, did not provide improvement for the Ogden image level 1 classification scheme using unsupervised classification, but did slightly improve the Salt Lake City level 1 overall accuracy. Due to the poor accuracy results using texture and unsupervised classification, no further analysis was performed in either study area.
4.3 Supervised Classification
Supervised classification of panchromatic imagery again presents many challenges. The SVM classifier was used to perform the supervised classification, as it has the ability to process single band imagery and it provided better results. The supervised classifiers available in ENVI are limited when using single band data, as many options such as maximum likelihood, spectral angle divergence, and neural net all require more than one band of data to classify the image.
The classification baseline for the original Ogden image had a poor overall accuracy of 39%. The training data statistics showed how challenging it is to distinguish a detailed classification on an unprocessed panchromatic image.
The training areas displayed either bimodal or multimodal histograms, which in itself is a challenge for classifiers such as maximum likelihood, where the premise is that the data should have a normal distribution (Jensen 2005, Campbell 2008). Another challenge in classifying the Ogden image was that certain land cover types such as forest and impervious surface have a large standard deviation. If visual interpretation is used to classify the imagery, we see that forested areas display a lot of texture and that there is a lot of variation in the tonal properties of this land cover. Impervious surface has the same problem in that some of the roads are very light and others are a medium gray. The min and max values across the training set for the individual classes also overlap. Several different training sets were examined, but this problem occurred in all sets examined. The land cover types that had the most overlap with other classes were forest, with a min of 0 and a max of 162, and impervious surface, with a min of 74 and a max of 223 (Table 4). There are no other spectral characteristics to draw on to help distinguish these types of subtleties in gray level imagery. The Salt Lake City image presented even more challenges, in part due to the characteristics of the image and the detailed land cover classification scheme. The land cover types were very detailed and textured. Overlap between classes is impossible to avoid in this type of area using a gray level image.
Table 4 Training sample statistics from original Ogden image

Land Cover Type | Min | Max | Mean | StDev | Points
Water | 129 | 177 | 151.6 | 13.1 | 1611
Forest | 0 | 162 | 70.3 | 33.7 | 3246
Grassland | 81 | 197 | 129.9 | 13.8 | 1944
Dark Field | 53 | 102 | 78.2 | 12 | 2706
Medium Field | 94 | 150 | 124.1 | 13.5 | 5056
Light Field | 148 | 194 | 179 | 6.5 | 1918
Impervious Surface | 74 | 223 | 170.7 | 28.3 | 732
Bare Earth | 180 | 227 | 197.8 | 7.5 | 1393

The training data statistics again help to illustrate the overlap which occurs between the commercial, residential, and transportation classes throughout this image (Table 5). The histograms were either bimodal or multimodal. Although the histogram for transportation approached a normal distribution, there were still many peaks and valleys, indicating variations in gray levels in the image for this land cover type. Supervised classification results showed that it was very difficult to extract more than 8 classes on the Ogden image and 5 classes on the Salt Lake City image. One of the limitations of using panchromatic imagery for land use/land cover classification is that the DN values which make up the signature for many land use/land cover types contain a significant amount of confusion. Real world features may be difficult to identify without taking into account their spatial context (Hung and Wu 2005). Land use/land cover types may need to be generalized. For example, detail like corn or wheat fields may not be characterized using panchromatic imagery, but dark fields and light fields, or cropland, may be possible.
The increased accuracy achieved when aggregating land use/land cover types into the level 2 and level 3 classification schemes supports this conclusion. Training samples tested with a greater number of pixels increased the confusion between classes such as water and medium fields, and forest and dark and medium fields. These larger samples had a broader min and max range for all classes. This research showed that although it is possible to use supervised classification on gray level imagery, it does require a significant amount of post processing of both the classification result and the vector file. The initial supervised classification is very noisy (Figure 20) compared to some of the results obtained using texture and object-based analysis. All classifiers had difficulty distinguishing between transportation areas and grass and residential areas. The most confusion occurred between residential and commercial areas, with almost no distinguishable residential areas correctly classified. Both the Minimum Distance and SVM classifiers were able to distinguish trees throughout the image better than other land cover types. One of the reasons this occurred was that the trees training sample had a mean which was much farther away from those of the other land cover types.

Table 5 Training sample statistics from original Salt Lake City image

Land Cover Type | Min | Max | Mean | StDev | Points
Trees | 0 | 133 | 38.8 | 18.4 | 3246
Grass | 41 | 156 | 92.9 | 26.3 | 907
Transportation | 60 | 169 | 108.5 | 14.8 | 9286
Residential | 6 | 169 | 112 | 31.6 | 10738
Commercial | 35 | 179 | 130.4 | 21.6 | 14221


Figure 20 Minimum distance and support vector machine classification of the Salt Lake City image

4.4. Image Enhancement and Texture Analysis
Many combinations of image filtering, texture operators, principal components analysis, and convolution filters were used to try to improve on the classification baseline information. Convolution filtering alone was only moderately successful. Low pass filtering was somewhat more successful than high pass filtering, as the high pass filter produced an excessive amount of noise and reduced the number of distinguishable land use/land cover types in the resulting classifications. Window sizes used for filtering ranged from 3x3 to 11x11, incremented in odd numbers, for both the high pass and the low pass filters. It appears from examining the images produced from the high pass filtering that as the window size increases, the contrast between edges of objects becomes more prominent.
The low pass filtering causes the image to become smoother as the window size increases, although the blurring along edges reduced classification accuracy and caused most of the cropland classes to become confused when using an ISODATA classifier. Results were better using supervised classification. A minimum distance classifier was used on the one band filtered images. The high pass filter results were too noisy to be useful, especially with the 3x3 filter, where individual features such as forest and grassland are almost indistinguishable (Figure 21). The low pass filter caused confusion between classes such as medium fields, grassland, and water, and dark fields and forest (Figure 22). A combination median filter and Gaussian high pass filter was used to create a two layer image using 3x3 and 11x11 window sizes, respectively. The result was similar to the low pass filter but was noisier in the field and forest areas. These combination images did not appear to improve classification results, so no further analysis was conducted on these images.

Figure 21 Minimum distance classification of the Ogden image with high pass filter (3x3 and 11x11 windows)


Figure 22 Minimum distance classification of the Ogden image with low pass filter (3x3 and 11x11 windows)

Halounova (2009) combined filtering, texture, and object oriented classification to classify panchromatic aerial photos. Similar combinations were examined in this study using both pixel based and object-based classification for the Ogden study area. A multi band image was created using a median filter, a Gaussian high pass filter, the mean texture measure with 11x11 and 21x21 filter sizes, the variance texture measure with 11x11 and 21x21 filter sizes, and the dissimilarity texture measure with 11x11 and 21x21 filter sizes. This image is similar to the most successful classification used in the Halounova (2009) study, with the only differences being the variance texture measure as opposed to the standard deviation, and that the Gaussian filter did not have an option for 9 standard deviations in the ENVI software. The 11x11 texture window was used to preserve the smallest objects in the image. In the Ogden image most of the individual trees fit this window.


Although there were a couple of buildings that were smaller, it was determined that the majority of small objects were better represented by the 11x11 window. The larger window sizes are used mainly to filter noise from the image. The pixel-based supervised classifications using the level 1 classification scheme were poor for the minimum distance and neural net classifiers at 35%. The level 2 and level 3 classification schemes increased substantially, ranging from 50%-55% for level 2 and 79% for level 3. An ISODATA unsupervised classification was also run on this image using 10 spectral classes. Class names were assigned based on the majority level 1 land use/land cover type contained in each spectral class. Due to significant confusion between level 1 land use/land cover types, the level 1 classification scheme was unable to be represented by the 10 spectral class ISODATA classification. Class 1 was labeled as medium field because this was the majority field type contained in this spectral class. The three field types (medium, light, and dark) used in the level 1 classification were clustered together in class 1 due to the effects of filtering and texture on this image. The other spectral classes were assigned as follows: classes 2-3 were labeled as grassland, and classes 4-10 were labeled as forest (Figure 23). This helps to explain why the level 1 overall accuracy was so low at 32% and increased substantially to 82% for the level 3 classification scheme. The level 3 classification scheme using ISODATA unsupervised classification was also comparable to supervised classification overall accuracy, which was 79% for both the neural net and minimum distance classifiers for this image. A Halounova based image was also created for the Salt Lake City study area. The resulting classifications were slightly more successful for ISODATA using 10 spectral classes at 42%. The Halounova image classified with the SVM classifier had the best overall accuracy for supervised classification of the Salt Lake City study area at 43%.
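The majority rule used above to label spectral classes can be sketched as follows (illustrative NumPy only; class codes are assumed to be small consecutive integers starting at zero):

    import numpy as np

    def assign_spectral_classes(spectral, reference, n_spectral, n_info):
        """Label each unsupervised spectral class with the information
        class in the majority among its pixels in the reference layer."""
        lookup = np.zeros(n_spectral, dtype=int)
        for s in range(n_spectral):
            info = reference[spectral == s]
            lookup[s] = np.bincount(info, minlength=n_info).argmax()
        return lookup[spectral]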


Figure 23 ISODATA 10 spectral classes, Halounova image

One of the shortcomings of combining the texture and filter images was that it created edge artifacts where the size of the imagery changes with the larger processing windows. The neural net and minimum distance classifiers handled these areas better than the maximum likelihood and ISODATA classifiers. Object-based classification was slightly less accurate than the pixel based classification. It was difficult to segment the image and obtain a good representation of classes. Overall accuracy of object-based classification for level 1 was 35%, level 2 was 52%, and level 3 was 77%. One of the biggest problems with the object-based classification was that there was significant confusion between water, medium fields, and dark fields.


The results may be due to the study area characteristics or the software, as Halounova (2009) used eCognition software, which appears to have a more robust segmentation algorithm than ENVI. The most promising image enhancements were texture operators and principal components analysis. The occurrence texture images using moving window sizes of 3x3, 5x5, 7x7 and 11x11 were the most successful texture images. They improved on the supervised classification baseline information for the Ogden image by 11%-13% in overall accuracy. The 3x3 window had the highest accuracy at 52% for the detailed classification. Overall accuracy improved considerably as the land cover scheme was generalized. The 3x3 occurrence measure image increased in overall accuracy to 65% at level 2 and 77% at level 3. The GLCM images with eight layers were somewhat difficult to work with, in that several of the resulting layers, such as entropy, second moment, dissimilarity, and correlation, use floating point values that range from 0-1. The 3x3 window produced an image with too much noise and presented problems for some classifiers, including neural net and maximum likelihood, where the result was a few speckled areas or a completely gray image. The GLCM images were easier to work with after the individual layers were split and saved as 8 bit unsigned integer TIFF images. In this case the DN ranges were all within 0-255. As previously mentioned, there is still a good deal of overlap between texture characteristics such as homogeneity and second moment. Through trial and error, combinations that included the mean, variance, and contrast bands achieved some of the more successful classification results, ranging from 50% to 98% overall accuracy.


Other image enhancements evaluated using the detailed level classification scheme were an edge enhancement 1-band image (39% overall accuracy), the 3x3 co-occurrence layers stacked with the original image as a 9-band image (50% overall accuracy), a Laplacian filter 1-band image (34% overall accuracy), a closed morphology filter 1-band image (43% overall accuracy), and a PCA 2-band image, the original image plus the first principal component of the mean, contrast, and variance 3-band image (50% overall accuracy). Generally, processes that smoothed the image rather than sharpened it were more successful. Texture analysis was successful in separating highly textured areas such as forest from cropland, but had difficulty in separating medium and light fields from water areas, as both are similar in tonal range and have a relatively smooth texture.
4.5 Object-based Image Analysis
Object-based image analysis offered some very promising results using the SCRM region based approach and the ENVI EX object-based tools. Unfortunately, the SCRM based classification could not be verified in the same manner as the other classifications. The software produced an image which was unable to be projected in ArcMap; although visually the images looked very good for the Salt Lake City and Ogden imagery (Figure 24), the extract values to points tool was unable to be used. The Ogden image produced a shapefile output that could be lined up with the original imagery, but the shapefile option would not work for the Salt Lake City image. In order to determine the accuracy of the SCRM it would be necessary to assign a land use/land cover classification to each polygon. It did not seem practical to do this, since there were over 1000 polygons for the Ogden study area. Converting the polygons to a raster and then classifying the regions proved unsuccessful, as the SCRM output was based on the number of regions and did not contain any spectral data.
This tool produced visually very good results, but it has limitations and does not have consistent performance. If these limiting factors could be improved, this tool might be more successful than the ENVI EX object-based tools. Object-based classification using ENVI EX feature extraction proved to be the most successful classification of the original unprocessed Salt Lake City image, with an overall accuracy of 53%. This is still relatively low, but for the detailed classification it is a 14% improvement over the best pixel based classification results. As the land cover classification scheme was generalized, the overall accuracy improved to 64% for level 2 and 74% for level 3. The object-based classification had the advantage of using spatial and textural relationships to break the imagery up into the respective classes. One of the difficulties of OBIA is that there are generally no set rules for determining the segmentation of the image. The segmenting scale and merging scale were determined from the preview function and seeing if the objects in the scene were well represented.

Figure 24 SCRM object-based segmentation images (Ogden and Salt Lake City)


A low segmentation scale seemed more successful: the image was slightly over segmented to keep the field boundaries, although this caused some over segmentation in the forest areas. The merging step after the initial segmentation helped to eliminate some of the extra segments caused by keeping the field boundaries. For the Ogden project area a scale of 45 and a merging scale of 75 were used. Two types of classification are available in the ENVI software to classify the imagery, rule based and example based. This study used example based classification, as it is the simplest and most straightforward approach. The Ogden image produced a fairly successful object-based classification on a PCA texture image with an overall accuracy of 50%. The generalized classifications yielded an overall accuracy of 61% for level 2 and 77% for level 3. The object-based classification on the Ogden image did a good job of distinguishing large areas in the imagery, but there was difficulty distinguishing between the medium and light field classes. This had the effect of lowering the overall accuracy of the level 1 classification scheme. The object-based classification for both images had less noise than other classification methods. Since the object-based classifications were the most successful and produced results with less noise than the per pixel supervised and unsupervised classifications, they were used to test the post-processing system explained in the next section. A per pixel classification based on the methods used by Halounova (2009) was used for comparison.
4.6 Post Processing and Automation
Post processing the image classification rasters was an important step in order to use these results to facilitate visual interpretation.
Supervised classification produced better results in general than unsupervised classification, but there is also a practicality aspect to the usefulness of digital image processing to aid an image analyst. It became apparent after much trial and error using different classifiers and processing methods that there was not one single method or process that was ideal for classifying panchromatic imagery. A good base classification makes the job of visual interpretation easier, but it is not entirely necessary if a good system is put into place to utilize the digital image classification. The post processing model and the polygon editing toolbar provide a way to give the user a base to start from even if it is less than ideal. This flexibility in the system was an important consideration due to the variability that the user may encounter between land use/land cover types for different projects and variability in general image quality. A post-processing system was designed to improve and clean up the final land use/land cover results obtained from the classified images. The system consists of three steps. Step 1 is to run the post-processing ArcGIS model (Figure 14), which converts the image into a polygon layer. Step 2 is to interactively edit the resulting polygon layer using the custom vector editing toolbar. Finally, step 3 is to perform an accuracy assessment to determine the success of the results. Heads up digitizing is often performed using GIS software with the output being a land cover classification vector layer. A vector layer provides a more flexible medium for visual interpretation, as each member of a class is represented as a feature rather than a non-contiguous area. Vertices and attributes are easily edited in vector format. The custom polygon editing tool was used to delete any remaining noise and to reduce small island pixels throughout the classification.
through the post processing system mentioned above took between 3 and 5 additional hours for each image which is similar in time to digitizing. Depending on the project an overall quick cleanup could be performed in about an hour and would likely add substantial accuracy to the classification. Most of the system involved using the merge with largest neighbor tool and the merge with selected polygon tool. The merge with selected polygon tool was used most often, as occasionally what appears to be the largest neighbor was difficult to discern. The object-based classification of the study areas provided a good base to work with as there were fewer areas of noise and island pixels. The post processing system was very successful at improving the overall accuracy of the classification for both images (Figures 25 and 26). For the Ogden image using the level 1 scheme, overall accuracy was increased from 56% to 85% with a Kappa index of 83% indicating a substantial agreement between the classification and the reference data. The level 2 schemes overall accuracy was increased from 80% to 93% with a Kappa Index of 89%, and the level 3 scheme produced an overall accuracy of 98% improved from 94% with a Kappa index of 97%. The Salt Lake City image also yielded substantial accuracy improvements from 53% to 72% (65% Kappa index), from 63% to 76% (65% Kappa index), and from 71% to 82% accuracy (63% Kappa index) respectively in decreasing level of land cover classification scheme. A per pixel classification based on Halounova (2009) methodology using texture and filtering was also used to determine whether the post processing system could improve a classification with an initially lower overall accuracy. Before using the post processing system overall accuracy of this image was 35% (Figure 27). The post processing system

67

The post processing system analysis of the Halounova (2009) based image took five hours, compared to three hours for the object-based analysis. The objective of this study was not to perfect the classification but to improve the results effectively and efficiently in terms of time consumption. After post processing, the overall accuracy for the level 1 classification scheme was improved from 35% to 75% (72% Kappa index). Level 2 was improved from 50% to 87% and level 3 from 79% to 98%. The improvements in level 2 and level 3 are similar to the gains seen for object-based classification. This is likely due to the good separation between cropland and all other classes on this image. This study has shown that the post processing system can substantially improve even a poor initial classification.

Figure 25 Ogden object-based classification image and post processing system vectors


Figure 26 Salt Lake City object-based classification image and post processing system vectors

Figure 27 Ogden pixel based classification and post processing system vectors


4.7 Classification Accuracy and Results
This research showed that classification accuracy for panchromatic imagery can vary depending on the image and the classification scheme. One of the biggest factors that affected classification accuracy was the number of land cover categories. Some projects require a high level of classification accuracy. This research has also shown that by using digital image processing techniques and a system of post processing on the results, it is possible to achieve a high level of accuracy (over 80%) for a project requiring detailed land use/land cover classification. The Kappa index, used to determine how much of the accuracy may be due to chance, was quite variable throughout this study, ranging from 25% to upwards of 97%. The higher the number, the more agreement there was between the reference data (digitized baseline information) and the classification image. It is notable that the Kappa index was extremely high after the post-processing system was applied. This high Kappa value shows that the agreement between the ground truth classification and the post-processing results increased, and that the positive results had less to do with chance alone. Overall accuracy does provide a benchmark to determine in general how the classifications compared to each other. Please refer to Appendix 1 for a sample of the error matrices used for this research. Classification results between supervised and unsupervised classification were very similar on the original panchromatic imagery for the level 1 classification scheme. Overall accuracy for unsupervised classification ranged from 39%-40% on the Ogden image, while the supervised classification baseline information overall accuracy was 39%. The overall accuracy of the Salt Lake City image unsupervised classification ranged from 38% to 39%, and the supervised classification baseline was 38%.
Both study areas had very low overall accuracy. There was not one classifier, either supervised or unsupervised, which stood out as being significantly better than another for the classification of the original panchromatic image in either study area. Please refer to Tables 6 and 7 for a comparison of classification results for the Ogden and Salt Lake City study areas. From a user's perspective, supervised classification provides more usable and easier to work with results, given that identified classes are returned. This in turn offers a better base that can be run through the post processing system proposed in this research. Supervised classification takes into account the classes that the user has determined beforehand and does not require further identification of classes. Identifying classes and labeling them is much more time consuming in the unsupervised classifications (Tables 6 and 7). Unsupervised classification results were useful for seeing how classes were grouped in the imagery and were a good illustration of why the more generalized classifications had higher overall accuracy. After assigning labels to classes from the unsupervised classification, due to confusion and the combining of classes, there were only 4-7 level 1 land use/land cover classes that could be categorized in the Ogden study area, depending on the number of spectral classes chosen and whether texture/filtering was used. The object-based classifications, on the other hand, had the most promising results in this study for use in the post processing system, although the system works to improve even an unsatisfactory classification base. When specific land use/land cover types were examined, the unsupervised classification baseline was very good at classifying the dark, medium, and light fields but very poor at classifying water, forest, grassland, impervious surface, and bare earth. Supervised classification worked well on dark fields, water, and light fields.
In both unsupervised and supervised classifications the dark fields user's accuracy was consistently the highest, ranging from 60%-92.5%. User's accuracy for impervious surface and bare earth was consistently low for both methodologies, ranging from 0%-35%. See Table 8 for a comparison of the user's accuracy for individual land use/land cover results for the Ogden study area.

Table 6 Ogden image overall accuracy and level 1 completion time

Classification method | Time (level 1) | Level 1 | Level 2 | Level 3
ISODATA 10 spectral classes, original imagery | 17 m | 39% | 48% | 55%
ISODATA 25 spectral classes, original imagery | 22 m | 40% | 50% | 56%
ISODATA 100 spectral classes, original imagery | 57 m | 39% | 47% | 49%
ISODATA 10 spectral classes, Halounova image | 15 m | 32% | 51% | 82%
Unsupervised K-Means 10 spectral classes, original + texture (mean, variance, homogeneity) | 12 m | 25% | 54% | 71%
SVM, original imagery | 6 m | 39% | 53% | 60%
Minimum Distance, Halounova 9 layer | 1 m | 35% | 50% | 79%
Neural Net, Halounova 9 layer | 30 m | 35% | 55% | 79%
SVM, 3x3 texture occurrence measures | 8 m | 52% | 65% | 77%
SVM, 5x5 texture occurrence measures | 8 m | 51% | 65% | 78%
SVM, 7x7 texture occurrence measures | 8 m | 50% | 63% | 77%
SVM, 11x11 texture occurrence measures | 8 m | 46% | 61% | 80%
SVM, 3x3 co-occurrence measures and original | 8 m | 50% | 62% | 76%
SVM, PCA: original, 3x3 texture (mean, homogeneity) | 7 m | 35% | 53% | 62%
SVM, PCA: original, 3x3 texture (mean, contrast, variance) | 7 m | 50% | 61% | 75%
SVM, 5x5 edge enhance | 6 m | 39% | 50% | 60%
SVM, Laplacian filter add back 80% original | 7 m | 34% | 46% | 54%
SVM, closed morphology filter | 7 m | 43% | 53% | 62%
SVM, object-based: PCA, original, 3x3 texture (mean, contrast, and variance) | 15 m | 56% | 80% | 94%
Neural Net, Halounova 9 layer, post processing system | 5 h | 75% | 87% | 98%
SVM, 11x11 texture occurrence measures, post processing system | 1 h 27 m | 61% | 72% | 92%
SVM, object-based, post processing system: PCA, original, texture (mean, contrast, and variance) | 3 h | 85% | 93% | 98%

*Please note: times were only recorded for level 1 classification results, as level 2 and level 3 results were obtained by aggregating the land use/land cover types.


Adding texture layers offered fairly significant improvement in the overall accuracy using the level 1 classification scheme for the Ogden study area. Only slight improvement was seen in the Salt Lake City image. Overall accuracy for the Ogden image using the occurrence texture measures ranged from 46%-52%, and the Salt Lake City image ranged from 38%-42%. The size of the texture window for the Ogden image did not significantly impact the overall accuracy of the image until the window size reached 11x11. At this point overall accuracy started to drop more quickly than between the smaller windows (52%-46%). The overall range of accuracy is still fairly small at 6%. The texture window size for the level 2 and level 3 classification schemes had even less impact on overall accuracy, with a range of only 3%-4% difference. The texture window size had the opposite effect on the Salt Lake City image. Overall accuracy increased very gradually as the texture window got larger (38%-42%). This helps to show the differences in the two study areas. The Ogden study area is characterized by more homogenous features such as forest and cropland, whereas the Salt Lake City study area has diverse land use/land cover classes such as commercial and residential, which are more heterogeneous in content, consisting of a variety of natural and manmade materials in widely varying shapes and sizes. The use of texture also had an effect on the various land use/land cover classes in both study areas. Texture increased the user's accuracy of features such as forest, grassland, dark fields, and impervious surface, while water, bare earth, and medium fields decreased. Light fields had mixed results, either decreasing or increasing depending on the texture window size. The Salt Lake City land use/land cover classes were also affected by texture.

Table 7 Salt Lake City image overall accuracy and level 1 completion time

Classification method | Time (level 1) | Level 1 | Level 2 | Level 3
ISODATA 10 spectral classes, original imagery | 10 m | 39% | 51% | 67%
ISODATA 25 spectral classes, original imagery | 22 m | 38% | 51% | 64%
ISODATA 100 spectral classes, original imagery | 46 m | 38% | 53% | 65%
ISODATA 10 spectral classes, Halounova image | 8 m | 42% | 60% | 67%
Minimum Distance Classifier, original imagery | 1 m | 38% | 54% | 63%
SVM, original imagery | 5 m | 38% | 45% | 58%
Maximum Likelihood, 5x5 texture occurrence, saturation stretch | 1 m | 42% | 66% | 63%
Minimum Distance Classifier, Halounova image | 1 m | 42% | 60% | 65%
Neural Net, Halounova image | 40 m | 39% | 48% | 54%
Neural Net, 5x5 texture occurrence | 14 m | 37% | 46% | 57%
SVM, 3x3 texture occurrence measures | 15 m | 38% | 45% | 55%
SVM, 5x5 texture occurrence measures | 15 m | 41% | 48% | 56%
SVM, 11x11 occurrence measures | 16 m | 42% | 52% | 58%
SVM, 5x5 texture occurrence measures, saturation stretch | 16 m | 41% | 66% | 75%
SVM, Halounova image | 30 m | 43% | 52% | 58%
SVM, object-based, original imagery | 10 m | 53% | 64% | 71%
SVM, object-based, post processing system, original imagery | 3 h | 72% | 76% | 82%

*Please note: times were only recorded for level 1 classification results, as level 2 and level 3 results were obtained by aggregating the land use/land cover types.

The transportation class user's accuracy increased or decreased depending on the texture window size. The individual land use/land cover classes in both study areas were affected by texture window size according to how homogeneous or heterogeneous the features were. Classes that were more homogeneous, like water, medium fields, and bare earth, decreased in accuracy as window sizes increased. Classes that were more heterogeneous, like forest, grassland, and impervious surface, increased in accuracy as window sizes increased up to the 11x11 window, where accuracy started to decline again. This demonstrated that no single texture processing window was able to effectively characterize all the textures in either study area (Tables 8 and 9).
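The occurrence (first-order) texture measures compared above are simple statistics of the gray levels inside a moving window, so the window size directly controls how much neighborhood context each pixel receives. The following is a minimal sketch of two such measures (mean and variance) for a single window position, again in C# for consistency with Appendix 2; it is an illustration of the idea, not the ENVI implementation used in this study.

using System;

class OccurrenceTexture
{
    // Compute first-order (occurrence) mean and variance in an n x n window
    // centered on (row, col); edge pixels are ignored here for brevity.
    static void MeanVariance(byte[,] image, int row, int col, int n,
                             out double mean, out double variance)
    {
        int half = n / 2;
        double sum = 0, sumSq = 0;
        int count = n * n;
        for (int r = row - half; r <= row + half; r++)
            for (int c = col - half; c <= col + half; c++)
            {
                sum += image[r, c];
                sumSq += (double)image[r, c] * image[r, c];
            }
        mean = sum / count;
        variance = sumSq / count - mean * mean;
    }

    static void Main()
    {
        // Synthetic gray level image standing in for a photo subset.
        var img = new byte[100, 100];
        var rng = new Random(1);
        for (int r = 0; r < 100; r++)
            for (int c = 0; c < 100; c++)
                img[r, c] = (byte)rng.Next(256);

        // A larger window smooths the texture estimate but blurs class
        // boundaries, consistent with the window-size trade-off noted above.
        MeanVariance(img, 50, 50, 11, out double m, out double v);
        Console.WriteLine($"11x11 window: mean={m:F1}, variance={v:F1}");
    }
}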


Object-based classification had the highest overall accuracy for both study areas. For the level 1 classification scheme the Ogden study area had an overall accuracy of 56%, an improvement of 4%-31% over the pixel-based classifications. The level 2 and level 3 classification schemes showed an even greater improvement of 15%-33% and 12%-45%, respectively. The level 3 classification scheme was the only one to attain over 90% accuracy without post processing, which indicated that object-based classification was good at distinguishing cropland from the other land use/land cover classes. This is due to the spectral homogeneity of dark and medium fields and the shape homogeneity of the fields. User's accuracy for individual land use/land cover classes in the level 1 classification scheme varied for object-based classification. Object-based classification showed significant improvement over pixel-based classification for water, dark fields, impervious surface, and bare earth, from 10% to 72.5%. Forest, grassland, and medium fields varied depending on the classifier and texture properties. Light fields was the only feature that had a decrease in accuracy, ranging from 27.5% to 75% depending on whether texture or supervised or unsupervised classification was initially used. This decrease in accuracy was most likely due to confusion between light fields and medium fields, caused by some spectral similarities and the close proximity of medium and light fields in the Ogden study area.

Object-based classification was also generally more successful than pixel-based classification for the Salt Lake City study area. The overall accuracy for the level 1 classification scheme was 53%, a 10%-16% improvement. The level 2 and level 3 classification schemes also showed improvement of 4%-17% over most of the pixel-based classifiers, except the 5x5 texture processing window using the occurrence measures, which was more accurate by 2%-4%.


Table 8 User's accuracies for individual land use/land cover types - Ogden study area

Classification method                                      Water  Forest  Grassland  Dark    Medium  Light   Imperv.  Bare
                                                                                     fields  fields  fields  surface  earth
ISODATA 10 spectral classes - original imagery               0%   27.5%    17.5%     92.5%   62.5%   67.5%     0%      35%
ISODATA 25 spectral classes - original imagery              15%    30%      7.5%     87.5%   62.5%    90%      0%     17.5%
ISODATA 100 spectral classes - original imagery              0%    30%       5%       85%    77.5%   92.5%     0%      15%
ISODATA 10 spectral classes - Halounova image                0%    90%      30%      72.5%   52.5%     0%      0%       0%
Unsupervised K-Means 10 spectral classes - original,
  texture (mean, variance, homogeneity)                      0%   27.5%    17.5%     92.5%   62.5%   67.5%     0%      35%
SVM - original imagery                                     62.5%  32.5%      0%      77.5%    45%     65%      0%     22.5%
Minimum Distance - Halounova image                          13%    38%      38%       60%     25%     63%     47%       0%
Neural Net - Halounova image                                7.5%  62.5%    17.5%      60%    37.5%   42.5%   53.3%     7.5%
SVM - 3x3 texture occurrence measures                      52.5%   75%     47.5%     82.5%    35%    62.5%   36.7%    17.5%
SVM - 5x5 texture occurrence measures                       45%   77.5%     55%       80%    22.5%   72.5%   46.7%     7.5%
SVM - 7x7 texture occurrence measures                       35%   77.5%    52.5%     82.5%   17.5%   67.5%   56.7%     10%
SVM - 11x11 texture occurrence measures                    32.5%  67.5%     35%       80%     20%     65%    63.3%     7.5%
SVM - 3x3 co-occurrence measures and original              47.5%  72.5%    32.5%      80%    32.5%    70%     40%      20%
SVM - PCA, original, 3x3 texture (mean, homogeneity)        2.5%  67.5%    12.5%      65%    57.5%   52.5%     0%     17.5%
SVM - PCA, original, 3x3 texture (mean, contrast,
  variance)                                                62.5%  77.5%    32.5%     82.5%    30%    47.5%   43.3%     20%
SVM - 5x5 edge enhance                                      80%   37.5%      0%      67.5%   52.5%   47.5%     0%     17.5%
SVM - Laplacian filter add back 80% original               22.5%  22.5%      0%      67.5%    55%    67.5%    6.7%     25%
SVM - closed morphology filter                              70%   37.5%     20%      82.5%   47.5%   62.5%     0%      15%
SVM Object-based - PCA, original, 3x3 texture (mean,
  contrast, and variance)                                   80%    70%     12.5%     92.5%   57.5%    15%    73.3%     50%
Neural Net Halounova image - post processing system        72.5%  72.5%     45%      97.5%   97.5%    85%     90%     47.5%
SVM 11x11 texture occurrence measures - post
  processing system                                         45%    70%     57.5%      85%    67.5%   92.5%   43.3%     20%
SVM Object-based post processing system - PCA,
  original, 3x3 texture (mean, contrast, variance)          90%    85%      55%       95%     95%     85%    96.7%     80%


Table 9 User's accuracies for individual land use/land cover types - Salt Lake City study area

Classification method                                      Trees  Grass  Transportation  Residential  Commercial
ISODATA 10 spectral classes - original imagery              55%     0%        60%             0%          70%
ISODATA 25 spectral classes - original imagery              55%     0%        50%             0%          75%
ISODATA 100 spectral classes - original imagery            57.5%    0%        45%             0%          80%
ISODATA 10 spectral classes - Halounova image               80%     0%        65%            10%          45%
Minimum Distance Classifier - original imagery             42.5%  46.7%       20%           17.5%        67.5%
SVM - original imagery                                     37.5%    0%        50%            20%         72.5%
Maximum Likelihood - 5x5 texture occurrence,
  saturation stretch                                        30%   53.3%      42.5%          37.5%         50%
Minimum Distance Classifier - Halounova image              22.5%   50%        60%            25%         52.5%
Neural Net - Halounova image                                20%     0%        55%            80%         32.5%
Neural Net - 5x5 texture occurrence                        27.5%    0%        50%           47.5%        52.5%
SVM - 3x3 texture occurrence measures                      27.5%    0%       47.5%           40%          65%
SVM - 5x5 texture occurrence measures                      27.5%   3.3%      42.5%          72.7%         60%
SVM - 11x11 texture occurrence measures                     20%    10%       57.5%           60%         52.5%
SVM - 5x5 texture occurrence measures,
  saturation stretch                                       22.5%  36.7%      57.5%          32.5%        52.5%
SVM - Halounova image                                       25%     0%       52.5%           70%         57.5%
SVM Object-based - 5x5 occurrence measures                 37.5%   60%        50%            50%          70%
SVM Object-based Post Processing System -
  original imagery                                          40%   66.7%      82.5%          72.5%        97.5%

The Salt Lake City study area showed slightly less improvement than the Ogden study area, which was likely due to the variation of objects on the ground in the commercial and residential land use/land cover classes. See Table 10 for a comparison of overall accuracy ranges for the level 1 classification scheme and the range of improvement gained at levels 2 and 3 for each classification group.

User's accuracies for individual land use/land cover types using object-based classification in the Salt Lake City study area were mixed and in general did not show as much improvement as in the Ogden study area. Grass was the only feature that showed significant improvement, between 14% and 60%, for object-based classification.


All other land use/land cover classes had mixed results depending on which classification strategy they were compared to. For example, the commercial class showed an increase in accuracy when compared to texture and filtering but a decrease when compared to the unsupervised classification group of images. The transportation, residential, and commercial land use/land cover classes showed a similar trend. The commercial, residential, and transportation areas are very heterogeneous spectrally, and even though spatially they are fairly homogeneous, the spectral variations seemed to affect the accuracy of these classes. Object-based classification appeared to handle the more homogeneous classes, like trees and grass, better than the other land use/land cover types.

This study has shown that object-based classification is a very promising technique for high resolution panchromatic aerial imagery. The methodology was more successful in the Ogden study area, which is characterized by more distinct regions of land use/land cover types, than in the urban area that characterized the Salt Lake City study area.

The post processing system provided the most accurate results for both study areas. Overall accuracy for level 1 classification ranged from 61% to 85%, with improvement of between 8% and 37% for levels 2 and 3.

Table 10 Overall accuracy ranges for classification groups

Classification group                      Level 1          Level 2          Level 3          Range of overall
                                          accuracy range   accuracy range   accuracy range   improvement
Unsupervised classifications              25-42%           47-60%           49-82%           5-57%
Baseline classifications                  38-39%           45-54%           58-63%           6-25%
Texture/Filtering/PCA classifications     34-52%           45-66%           54-80%           2-46%
Object-based classifications              53-56%           64-80%           71-94%           8-41%
Post processing system classifications    61-85%           72-93%           82-98%           8-37%


The post processing system applied to the object-based classification of the Ogden study area had the highest overall accuracy for the level 1 classification scheme at 85%. All land use/land cover categories for both study areas showed significant improvement; notable areas of improvement included water, bare earth, impervious surface, transportation, and grass.

Effectiveness and efficiency of the post processing system were two of the primary goals of this research. This research has shown that the system effectively improves classification accuracy. The post processing system is an efficient method based on user interaction, but it did add significant time to the image classification. Image-processing software can automatically classify, post-process, and produce a usable land use/land cover layer in minutes. In comparison, the post processing system in this study added several hours to that process in order to gain a more accurate land use/land cover layer for use in a GIS. Since the system is based on user interaction, the time can vary depending on how much detail or what accuracy level a project requires. If a generalized classification using three or fewer classes and accuracy between 70% and 80% is appropriate for a project, then the post processing system can be completed in two to three hours. This is a modest time savings compared to manual digitizing, although the resulting classifications contained far more detail than the manually digitized results. The Ogden image took approximately five hours to digitize and the Salt Lake City image about three hours. Performing automated classification using image processing software was the most efficient method of producing a land use/land cover classification layer in this study. It took about 1 minute using a minimum distance classifier and about 30 minutes using a neural net classifier, but the accuracy of the resulting classification was often low.


The post processing system added both consistency and accuracy to the classifications, but it did add several hours to the process. The object-based classifications provided a better starting base yet still added about three to four hours to reach a fairly accurate final land use/land cover classification. In contrast, a poor starting base with an overall accuracy of 35% took about five hours to achieve an improved overall accuracy of 75%.


CHAPTER 5: CONCLUSION

5.1 Limitations of the Research

The purpose of this research was to facilitate visual interpretation of panchromatic historical imagery by using digital image processing. Imagery is variable in both quality and scene characteristics, so what is successful on one dataset may not be successful on others. Another limitation of the methods in this research was the sheer number of digital image processing techniques that could be applied to the imagery. Thousands of combinations and settings were available to process the imagery, but due to time limitations relatively few were feasible to study. Availability of algorithms and software capabilities is also a limitation, as commercial software provides relatively few choices compared to the open source programs available from a variety of sources. Another limitation was the knowledge of advanced algorithms and programming needed to develop new classification algorithms that might benefit single band image classification. It was determined that using readily available software and the capabilities it offers was more useful if the research is to be applied in a practical manner to the study of the Farm Service Agency's historical aerial photos. The overall research goals were achieved in that it was shown that, by combining digital image processing with a system of post processing that allows user interaction, image classification accuracy can be improved by at least 20%.

5.2 Potential Future Developments

The current vector tools created for this project could be expanded to include more editing and reporting capabilities. It would be beneficial to include an undo button, a field editing option, and real-time statistical reporting.


Including more of the advanced ENVI functionality in the ArcGIS interface, through the use of IDL and Python programming, could also expand the options for supervised classification, including the SVM, SCRM object-based, neural net, and self-organizing map classifiers. The process of classification is more streamlined for the user if only one software package is required to perform all the steps.

Through trial and error, many filters and combinations of texture images were assessed for their usefulness in classifying panchromatic imagery. There are almost endless combinations of texture measures, filtering, PCA, and contrast stretching, and a relatively small number of these combinations were assessed for this research. This aspect of the study lends itself to further research, as there may be other combinations that could be more successful.

Further research is also possible in the area of classification algorithms designed specifically for panchromatic imagery. Most classifiers are optimally designed to take advantage of multi-spectral bands. As computer processing power continues to advance, it may be possible to develop new algorithms better able to distinguish distinct land cover classes with limited spectral information. More integration between texture measures, object-based segmentation, and feature extraction also needs to be explored. These digital techniques show obvious advantages and improvements in classifying panchromatic imagery, but results could still be improved, especially in urban areas.

One of the findings of this research was that certain classification techniques are more successful than others on particular land use/land cover types. This suggests that it may be possible to achieve high classification accuracy using a hybrid approach in which each land use/land cover type is classified separately using a technique that yields a high accuracy for that category.


For example, in the Ogden study area a 10 spectral class ISODATA unsupervised classification yielded 92.5% accuracy for dark fields but low accuracy for water (0%) and impervious surface (0%). Through masking and a process of elimination, water and impervious surface could then be classified using a more successful technique such as object-based classification, where their accuracies were higher (80% and 50%, respectively). Each land use/land cover type in the image could then be successively classified using a high accuracy technique (see the sketch at the end of this section). This would likely be a multi-pass classification, but the possibility of less post processing and higher accuracy gives it a lot of potential for future study.

As historical imagery becomes more readily available to the public through technology such as web-based image services, classification tools for use in these services would have wide appeal to government and the remote sensing community. The ability to make use of historical data to study long-term land use trends is one of the most important aspects of this research. Black and white aerial photography is an underutilized resource at present, but developing more tools to access the information contained in the imagery will broaden its appeal to the remote sensing community.
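A minimal sketch of the masking step behind the hybrid idea above is shown below. The class codes and the choice of source maps are hypothetical; the point is only that each class is taken from whichever classification handled it best, leaving the remaining pixels to be classified in a later pass.

using System;

class HybridClassifier
{
    const int Unassigned = 0, Water = 1, DarkFields = 4, Impervious = 7; // assumed codes

    // Merge two classified rasters: take dark fields from the ISODATA map,
    // then fill water and impervious surface from the object-based map.
    static int[,] CombineByClass(int[,] isodataMap, int[,] objectBasedMap)
    {
        int rows = isodataMap.GetLength(0), cols = isodataMap.GetLength(1);
        var result = new int[rows, cols];
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
            {
                if (isodataMap[r, c] == DarkFields)
                    result[r, c] = DarkFields;           // ISODATA handled this class well
                else if (objectBasedMap[r, c] == Water || objectBasedMap[r, c] == Impervious)
                    result[r, c] = objectBasedMap[r, c]; // object-based was stronger here
                else
                    result[r, c] = Unassigned;           // classify remaining pixels in a later pass
            }
        return result;
    }

    static void Main()
    {
        var a = new int[2, 2] { { DarkFields, 0 }, { 0, 0 } };
        var b = new int[2, 2] { { 0, Water }, { Impervious, 0 } };
        var merged = CombineByClass(a, b);
        Console.WriteLine(string.Join(" ", merged[0, 0], merged[0, 1], merged[1, 0], merged[1, 1]));
    }
}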


APPENDIX 1: ERROR MATRIX TABLES

(All counts are pixels. In the Ogden tables the classes are abbreviated: Wat = Water, For = Forest, Gra = Grassland, DF = Dark Fields, MF = Medium Fields, LF = Light Fields, Imp = Impervious Surface, BE = Bare Earth.)

Ogden unsupervised classification of original unprocessed image

Classification data: Unsupervised Ogden data, ISODATA 10 spectral classes
Reference data         Wat   For   Gra    DF    MF    LF   Imp    BE   Row total  Producer's acc.  Errors of omission
Water                    0     0     0     0     0     0     0     0       0           0%              100%
Forest                   3    11     1     0     0     0     1     1      17          64.7%            35.3%
Grassland                3     7     7     2     3     0    10     2      34          20.6%            79.4%
Dark Fields              0    12     8    37     9     0     2     6      74          50%              50%
Medium Fields           20     8    20     0    25     2     8     8      91          27.5%            72.5%
Light Fields            12     2     2     1     3    27     6     9      62          43.5%            56.5%
Impervious Surface       0     0     0     0     0     0     0     0       0           0%              100%
Bare Earth               2     0     2     0     0    11     3    14      32          43.8%            56.3%
Column total            40    40    40    40    40    40    30    40   (correct: 121)
User's accuracy          0%  27.5% 17.5% 92.5% 62.5% 67.5%   0%   35%
Errors of commission   100%  72.5% 82.5%  7.5% 37.5% 32.5% 100%   65%
Overall accuracy: 39%   Kappa index: 30%

Classification data: Unsupervised Ogden data, ISODATA 25 spectral classes
Reference data         Wat   For   Gra    DF    MF    LF   Imp    BE   Row total  Producer's acc.  Errors of omission
Water                    6     0     3     0     2     0     2     3      16          37.5%            62.5%
Forest                   3    12     1     0     0     0     1     1      18          66.6%            33.4%
Grassland                1     5     3     2     2     0     8     1      22          13.6%            86.4%
Dark Fields              0     9     8    35     6     0     0     5      63          55.5%            44.4%
Medium Fields           18    11    21     1    25     4    10     9      99          25.2%            74.7%
Light Fields            11     2     3     1     3    36     6    14      76          47.3%            52.6%
Impervious Surface       0     0     0     0     0     0     0     0       0           0%              100%
Bare Earth               1     1     1     1     2     0     3     7      16          43.7%            56.3%
Column total            40    40    40    40    40    40    30    40   (correct: 124)
User's accuracy         15%    30%  7.5% 87.5% 62.5%   90%   0%  17.5%
Errors of commission    85%    70% 92.5% 12.5% 37.5%   10% 100%  82.5%
Overall accuracy: 40%   Kappa index: 31%

Classification data: Unsupervised Ogden data, ISODATA 100 spectral classes
Reference data         Wat   For   Gra    DF    MF    LF   Imp    BE   Row total  Producer's acc.  Errors of omission
Water                    0     0     0     0     0     0     0     1       1           0%              100%
Forest                   3    12     1     1     0     0     1     1      19          63.2%            36.8%
Grassland                0     1     2     1     0     0     1     0       5          40%              60%
Dark Fields              0    10     8    34     6     0     0     5      63          54%              46%
Medium Fields           24    15    25     3    31     3    19    10     130          23.8%            76.2%
Light Fields            12     2     3     1     3    37     6    17      81          45.7%            54.3%
Impervious Surface       0     0     0     0     0     0     0     0       0           0%              100%
Bare Earth               1     0     1     0     0     0     3     6      11          54.5%            45.5%
Column total            40    40    40    40    40    40    30    40   (correct: 122)
User's accuracy          0%    30%    5%   85% 77.5% 92.5%   0%    15%
Errors of commission   100%    70%   95%   15% 22.5%  7.5% 100%    85%
Overall accuracy: 39%   Kappa index: 30%

Classification data: Unsupervised Ogden data, ISODATA 10 spectral classes, Halounova image
Reference data         Wat   For   Gra    DF    MF    LF   Imp    BE   Row total  Producer's acc.  Errors of omission
Water                    0     0     0     0     0     0     0     0       0           0%              100%
Forest                  15    36    24     4    10     6    20    26     141          25.5%            74.5%
Grassland               12     4    12     7     9     3    10    13      70          17.1%            82.9%
Dark Fields              0     0     0    29     0     0     0     0      29         100%               0%
Medium Fields           13     0     4     0    21    31     0     1      70          30%              70%
Light Fields             0     0     0     0     0     0     0     0       0           0%              100%
Impervious Surface       0     0     0     0     0     0     0     0       0           0%              100%
Bare Earth               0     0     0     0     0     0     0     0       0           0%              100%
Column total            40    40    40    40    40    40    30    40   (correct: 98)
User's accuracy          0%    90%   30% 72.5% 52.5%    0%   0%     0%
Errors of commission   100%    10%   70% 27.5% 47.5%  100% 100%   100%
Overall accuracy: 32%   Kappa index: 21%
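Every matrix in this appendix reports the same summary statistics, so it is worth recording how they are derived. The sketch below computes overall accuracy, the kappa index, and the per-class accuracies from a confusion matrix, following the layout used in these tables (producer's accuracy on the row totals and user's accuracy on the column totals); the example values are taken from the level 2 object-based post processing matrix later in this appendix, and the code is a generic illustration rather than any particular software package's implementation.

using System;

class ErrorMatrixStats
{
    static void Main()
    {
        // Example: a 3-class level 2 matrix (rows and columns in the same
        // class order: Vegetation, Cropland, Other).
        int[,] m = { { 69, 2, 8 }, { 2, 118, 1 }, { 9, 0, 101 } };
        int k = m.GetLength(0);

        int n = 0, diag = 0;
        var rowTot = new int[k];
        var colTot = new int[k];
        for (int i = 0; i < k; i++)
            for (int j = 0; j < k; j++)
            {
                n += m[i, j];
                rowTot[i] += m[i, j];
                colTot[j] += m[i, j];
                if (i == j) diag += m[i, j];
            }

        double overall = (double)diag / n;
        // Kappa corrects overall agreement for the agreement expected by chance.
        double chance = 0;
        for (int i = 0; i < k; i++) chance += (double)rowTot[i] * colTot[i];
        double kappa = (n * (double)diag - chance) / (n * (double)n - chance);

        Console.WriteLine($"Overall accuracy: {overall:P0}, Kappa: {kappa:P0}");
        for (int i = 0; i < k; i++)
            Console.WriteLine($"Class {i}: producer's = {(double)m[i, i] / rowTot[i]:P1}, " +
                              $"user's = {(double)m[i, i] / colTot[i]:P1}");
    }
}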

Ogden level 1 classification scheme


Classification data: Supervised Ogden, original unprocessed image, SVM
Reference data         Wat   For   Gra    DF    MF    LF   Imp    BE   Row total  Producer's acc.  Errors of omission
Water                   25     5    10     0    16    14    11    12      93          26.9%            73.1%
Forest                   3    13     3     1     1     0     1     1      23          56.5%            43.5%
Grassland                0     0     0     0     0     0     0     0       0           0%              100%
Dark Fields              0     8     4    31     2     0     0     5      50          62%              38%
Medium Fields            5    13    19     7    18     0    13     4      79          22.8%            77.2%
Light Fields             6     1     2     1     3    26     2     9      50          52%              48%
Impervious Surface       0     0     0     0     0     0     0     0       0           0%              100%
Bare Earth               1     0     2     0     0     0     3     9      15          60%              40%
Column total            40    40    40    40    40    40    30    40   (correct: 122)
User's accuracy       62.5% 32.5%    0% 77.5%   45%   65%    0%  22.5%
Errors of commission  37.5% 67.5%  100% 22.5%   55%   35%  100%  77.5%
Overall accuracy: 39%   Kappa index: 30%


Classification data: Supervised, 3x3 occurrence texture measures, SVM
Reference data         Wat   For   Gra    DF    MF    LF   Imp    BE   Row total  Producer's acc.  Errors of omission
Water                   21     0     1     0    11    11     1     0      45          46.7%            53.3%
Forest                   4    30     9     3     2     0     2     6      56          53.6%            46.4%
Grassland                3     5    19     1     8     2    10     3      51          37.3%            62.7%
Dark Fields              0     0     1    33     1     0     0     1      36          91.7%             8.3%
Medium Fields            3     1     4     2    14     0     4     0      28          50%              50%
Light Fields             6     0     2     0     3    25     0     8      44          56.8%            43.2%
Impervious Surface       3     4     4     1     1     2    11    15      41          26.8%            73.2%
Bare Earth               0     0     0     0     0     0     2     7       9          77.8%            22.2%
Column total            40    40    40    40    40    40    30    40   (correct: 160)
User's accuracy       52.5%   75% 47.5% 82.5%   35% 62.5% 36.7%  17.5%
Errors of commission  47.5%   25% 52.5% 17.5%   65% 37.5% 63.3%  82.5%
Overall accuracy: 52%   Kappa index: 45%

Classification data: Supervised, 5x5 occurrence texture measures, SVM
Reference data         Wat   For   Gra    DF    MF    LF   Imp    BE   Row total  Producer's acc.  Errors of omission
Water                   18     0     0     0    10     8     1     0      37          48.6%            51.4%
Forest                   4    31     6     3     2     0     2     4      52          59.6%            40.4%
Grassland                7     4    22     1    15     1     9     4      63          34.9%            65.1%
Dark Fields              1     0     2    32     0     0     0     1      36          88.9%            11.1%
Medium Fields            1     0     1     2     9     0     4     0      17          52.9%            47.1%
Light Fields             5     0     3     0     2    29     0     5      44          65.9%            34.1%
Impervious Surface       4     5     6     2     2     2    14    23      58          24.1%            75.9%
Bare Earth               0     0     0     0     0     0     0     3       3         100%               0%
Column total            40    40    40    40    40    40    30    40   (correct: 158)
User's accuracy         45% 77.5%   55%   80% 22.5% 72.5% 46.7%   7.5%
Errors of commission    55% 22.5%   45%   20% 77.5% 27.5% 53.3%  92.5%
Overall accuracy: 51%   Kappa index: 44%

Classification data: Supervised Minimum Distance Classifier, Halounova 9 layer - original, texture, filters
Reference data         Wat   For   Gra    DF    MF    LF   Imp    BE   Row total  Producer's acc.  Errors of omission
Water                    5     0     0     0    10     5     0     0      20          25%              75%
Forest                   7    15    12     2     5     4     6     9      60          25%              75%
Grassland               12     4    15    10     9     2    10    14      76          19.7%            80.3%
Dark Fields              0     0     1    24     0     0     0     0      25          96%               4%
Medium Fields            1     0     1     2    10     0     0     0      14          71.4%            28.6%
Light Fields             3     0     0     0     0    25     0     1      29          86.2%            13.8%
Impervious Surface       8    21    11     2     5     2    14    16      79          17.7%            82.3%
Bare Earth               4     0     0     0     1     2     0     0       7           0%              100%
Column total            40    40    40    40    40    40    30    40   (correct: 108)
User's accuracy         13%    38%   38%   60%   25%   63%   47%    0%
Errors of commission    87%    62%   62%   40%   75%   37%   53%  100%
Overall accuracy: 35%   Kappa index: 26%


Classification data: Supervised Neural Net Classifier, Halounova 9 layer - original, texture, filters
Reference data         Wat   For   Gra    DF    MF    LF   Imp    BE   Row total  Producer's acc.  Errors of omission
Water                    3     0     0     0     4    13     0     1      21          14.3%            85.7%
Forest                   3    25    12     4     3     0     5     5      57          43.9%            56.1%
Grassland               14     2     7     6     7     2     9     6      53          13.2%            86.8%
Dark Fields              0     0     1    24     0     0     0     0      25          96%               4%
Medium Fields            4     0     1     5    15     1     0     0      26          57.7%            42.3%
Light Fields             2     0     0     0     1    17     0     0      20          85%              15%
Impervious Surface      13    13    17     1     9     4    16    25      98          16.3%            83.7%
Bare Earth               1     0     2     0     1     3     0     3      10          30%              70%
Column total            40    40    40    40    40    40    30    40   (correct: 110)
User's accuracy        7.5% 62.5% 17.5%   60% 37.5% 42.5% 53.3%   7.5%
Errors of commission  92.5% 37.5% 82.5%   40% 62.5% 57.5% 46.7%  92.5%
Overall accuracy: 35%   Kappa index: 27%

Classification data: Supervised Neural Net Classifier, Halounova 9 layer - original, texture, filters, post processing system
Reference data         Wat   For   Gra    DF    MF    LF   Imp    BE   Row total  Producer's acc.  Errors of omission
Water                   29     0     0     0     0     0     0     0      29         100%               0%
Forest                   3    29     9     1     0     0     2     3      47          61.7%            38.3%
Grassland                1     2    18     0     0     0     1     2      24          75%              25%
Dark Fields              0     0     0    39     0     0     0     0      39         100%               0%
Medium Fields            0     0     0     0    39     2     0     0      41          95.1%             4.9%
Light Fields             0     0     0     0     0    34     0     0      34         100%               0%
Impervious Surface       7     9    12     0     1     4    27    16      76          35.5%            64.5%
Bare Earth               0     0     1     0     0     0     0    19      20          95%               5%
Column total            40    40    40    40    40    40    30    40   (correct: 234)
User's accuracy       72.5% 72.5%   45% 97.5% 97.5%   85%   90%  47.5%
Errors of commission  27.5% 27.5%   55%  2.5%  2.5%   15%   10%  52.5%
Overall accuracy: 75%   Kappa index: 72%

Classification data: Supervised Ogden, PCA, original, 3x3 texture (mean, contrast, variance), SVM Object-based classification
Reference data         Wat   For   Gra    DF    MF    LF   Imp    BE   Row total  Producer's acc.  Errors of omission
Water                   32     0     0     0     0     0     0     4      36          88.9%            11.1%
Forest                   3    28    15     2     2     0     1     6      57          49.1%            50.9%
Grassland                0     1     5     0     1     0     4     0      11          45.5%            54.5%
Dark Fields              0     0     0    37     1     0     1     0      39          94.9%             5.1%
Medium Fields            0     1     0     0    23    34     0     0      58          39.7%            60.3%
Light Fields             0     0     1     1     2     6     0     0      10          60%              40%
Impervious Surface       1     8    11     0     4     0    22    10      56          39.3%            60.7%
Bare Earth               4     2     8     0     7     0     2    20      43          46.5%            53.5%
Column total            40    40    40    40    40    40    30    40   (correct: 173)
User's accuracy         80%    70% 12.5% 92.5% 57.5%   15% 73.3%    50%
Errors of commission    20%    30% 87.5%  7.5% 42.5%   85% 26.7%    50%
Overall accuracy: 56%   Kappa index: 50%


Classification data: Supervised Ogden, PCA, original, 3x3 texture (mean, contrast, variance), SVM Object-based Post Processing System
Reference data         Wat   For   Gra    DF    MF    LF   Imp    BE   Row total  Producer's acc.  Errors of omission
Water                   36     0     0     0     0     0     0     4      40          90%              10%
Forest                   3    34    11     1     1     0     0     4      54          63%              37%
Grassland                1     2    22     0     0     0     1     0      26          84.6%            15.4%
Dark Fields              0     0     0    38     1     0     0     0      39          97.4%             2.6%
Medium Fields            0     2     0     0    38     6     0     0      46          82.6%            17.4%
Light Fields             0     0     0     1     0    34     0     0      35          97.1%             2.9%
Impervious Surface       0     0     0     0     0     0    29     0      29         100%               0%
Bare Earth               0     2     7     0     0     0     0    32      41          78%              22%
Column total            40    40    40    40    40    40    30    40   (correct: 263)
User's accuracy         90%    85%   55%   95%   95%   85% 96.7%    80%
Errors of commission    10%    15%   45%    5%    5%   15%  3.3%    20%
Overall accuracy: 85%   Kappa index: 83%

Ogden level 2 classification scheme


Classification data: Unsupervised Ogden data, ISODATA 10 spectral classes
Reference data   Vegetation  Cropland  Other   Row total  Producer's acc.  Errors of omission
Vegetation           26          5       20        51         51%              49%
Cropland             52        104       71       227         45.8%            54.2%
Other                 2         11       19        32         59.4%            40.6%
Column total         80        120      110     (correct: 149)
User's accuracy     32.5%      86.7%    17.3%
Errors of comm.     67.5%      13.3%    82.7%
Overall accuracy: 48%   Kappa index: 19%

Classification data: Unsupervised Ogden data, ISODATA 25 spectral classes
Reference data   Vegetation  Cropland  Other   Row total  Producer's acc.  Errors of omission
Vegetation           21          4       15        40         52.5%            47.5%
Cropland             54        111       73       238         46.6%            53.4%
Other                 5          5       22        32         68.8%            31.3%
Column total         80        120      110     (correct: 154)
User's accuracy     26.3%      92.5%     20%
Errors of comm.     73.7%       7.5%     80%
Overall accuracy: 50%   Kappa index: 20%

Classification data: Unsupervised Ogden data, ISODATA 100 spectral classes
Reference data   Vegetation  Cropland  Other   Row total  Producer's acc.  Errors of omission
Vegetation           16          2        6        24         66.7%            33.3%
Cropland             63        118       93       274         43.1%            56.9%
Other                 1          0       11        12         91.7%             8.3%
Column total         80        120      110     (correct: 145)
User's accuracy      20%       98.3%     10%
Errors of comm.      80%        1.7%     90%
Overall accuracy: 47%   Kappa index: 15%


Classification data: Unsupervised Ogden data, ISODATA 10 spectral classes, Halounova image
Reference data   Vegetation  Cropland  Other   Row total  Producer's acc.  Errors of omission
Vegetation           76         39       96       211         36%              64%
Cropland              4         81       14        99         81.8%            18.2%
Other                 0          0        0         0          0%             100%
Column total         80        120      110     (correct: 157)
User's accuracy      95%       67.5%      0%
Errors of comm.       5%       32.5%    100%
Overall accuracy: 51%   Kappa index: 30%

Classification data: Supervised, 3x3 occurrence texture measures, SVM
Reference data   Vegetation  Cropland  Other   Row total  Producer's acc.  Errors of omission
Vegetation           63         16       28       107         58.9%            41.1%
Cropland              8         78       22       108         72.2%            27.8%
Other                 9         26       60        95         63.2%            36.8%
Column total         80        120      110     (correct: 201)
User's accuracy     78.8%       65%     54.5%
Errors of comm.     21.2%       35%     45.5%
Overall accuracy: 65%   Kappa index: 47%

Classification data: Supervised Ogden, PCA, original, 3x3 texture (mean, contrast, variance), SVM Object-based post processing system
Reference data   Vegetation  Cropland  Other   Row total  Producer's acc.  Errors of omission
Vegetation           69          2        8        79         87.3%            12.7%
Cropland              2        118        1       121         97.5%             2.5%
Other                 9          0      101       110         91.8%             8.2%
Column total         80        120      110     (correct: 288)
User's accuracy     86.3%      98.3%    91.8%
Errors of comm.     13.8%       1.7%     8.2%
Overall accuracy: 93%   Kappa index: 89%

Classification data: Supervised Ogden, PCA, original, 3x3 texture (mean, contrast, variance), SVM Object-based classification
Reference data   Vegetation  Cropland  Other   Row total  Producer's acc.  Errors of omission
Vegetation           49          5       14        68         72.1%            27.9%
Cropland              2        104        1       107         97.2%             2.8%
Other                29         11       95       135         70.4%            29.6%
Column total         80        120      110     (correct: 248)
User's accuracy     61.3%      86.7%    86.4%
Errors of comm.     38.8%      13.3%    13.6%
Overall accuracy: 80%   Kappa index: 69%

Classification data: Supervised Ogden, original unprocessed image, SVM
Reference data   Vegetation  Cropland  Other   Row total  Producer's acc.  Errors of omission
Vegetation           16          2        5        23         69.6%            30.4%
Cropland             47         88       44       179         49.2%            50.8%
Other                17         30       61       108         56.5%            43.5%
Column total         80        120      110     (correct: 165)
User's accuracy      20%       73.3%    55.5%
Errors of comm.      80%       26.7%    44.5%
Overall accuracy: 53%   Kappa index: 26%


Classification data: Supervised, Halounova 9 layer - original, texture, filters
Reference data   Vegetation  Cropland  Other   Row total  Producer's acc.  Errors of omission
Vegetation           46         32       58       136         33.8%            66.2%
Cropland              2         61        5        68         89.7%            10.3%
Other                32         27       47       106         44.3%            55.7%
Column total         80        120      110     (correct: 154)
User's accuracy     57.5%      50.8%    42.7%
Errors of comm.     42.5%      49.2%    57.3%
Overall accuracy: 50%   Kappa index: 26%

Classification data: Supervised Neural Net Classifier, Halounova 9 layer - original, texture, filters
Reference data   Vegetation  Cropland  Other   Row total  Producer's acc.  Errors of omission
Vegetation           46         22       42       110         41.8%            58.2%
Cropland              2         63        6        71         88.7%            11.3%
Other                32         35       62       129         48.1%            51.9%
Column total         80        120      110     (correct: 171)
User's accuracy     57.5%      52.5%    56.4%
Errors of comm.     42.5%      47.5%    43.6%
Overall accuracy: 55%   Kappa index: 33%

Classification data: Unsupervised ISODATA Classifier, Halounova 9 layer - original, texture, filters
Reference data   Vegetation  Cropland  Other   Row total  Producer's acc.  Errors of omission
Vegetation           60         20       61       141         42.6%            57.4%
Cropland             11         93       28       132         70.5%            29.5%
Other                 9          7       21        37         56.8%            43.2%
Column total         80        120      110     (correct: 174)
User's accuracy      75%       77.5%    19.1%
Errors of comm.      25%       22.5%    80.9%
Overall accuracy: 56%   Kappa index: 35%

Classification data: Supervised Neural Net Classifier, Halounova 9 layer - original, texture, filters, Post Processing System
Reference data   Vegetation  Cropland  Other   Row total  Producer's acc.  Errors of omission
Vegetation           58          1       12        71         81.7%            18.3%
Cropland              0        114        0       114        100%               0%
Other                22          5       98       125         78.4%            21.6%
Column total         80        120      110     (correct: 270)
User's accuracy     72.5%       95%     89.1%
Errors of comm.     27.5%        5%     10.9%
Overall accuracy: 87%   Kappa index: 80%


Ogden level 3 classification scheme


Classification data: Unsupervised Ogden data, ISODATA 10 spectral classes
Reference data   Cropland  Non-Cropland   Row total  Producer's acc.  Errors of omission
Cropland            104        123           227        45.8%            54.2%
Non-Cropland         16         67            83        80.7%            19.3%
Column total        120        190         (correct: 171)
User's accuracy    86.7%      35.3%
Errors of comm.    13.3%      64.7%
Overall accuracy: 55%   Kappa index: 19%

Classification data: Unsupervised Ogden data, ISODATA 25 spectral classes
Reference data   Cropland  Non-Cropland   Row total  Producer's acc.  Errors of omission
Cropland            111        127           238        46.6%            53.4%
Non-Cropland          9         63            72        87.5%            12.5%
Column total        120        190         (correct: 174)
User's accuracy    92.5%      33.2%
Errors of comm.     7.5%      66.8%
Overall accuracy: 56%   Kappa index: 22%

Classification data: Unsupervised Ogden data, ISODATA 100 spectral classes
Reference data   Cropland  Non-Cropland   Row total  Producer's acc.  Errors of omission
Cropland            118        156           274        43.1%            56.9%
Non-Cropland          2         34            36        94.4%             5.6%
Column total        120        190         (correct: 152)
User's accuracy    98.3%      17.9%
Errors of comm.     1.7%      82.1%
Overall accuracy: 49%   Kappa index: 13%

Classification data: Unsupervised Ogden data, ISODATA 10 spectral classes, Halounova image
Reference data   Cropland  Non-Cropland   Row total  Producer's acc.  Errors of omission
Cropland             81         18            99        81.8%            18.2%
Non-Cropland         39        172           211        81.5%            18.5%
Column total        120        190         (correct: 253)
User's accuracy    67.5%      90.5%
Errors of comm.    32.5%       9.5%
Overall accuracy: 82%   Kappa index: 60%

Classification data: Supervised Ogden, PCA, original, 3x3 texture (mean, contrast, variance), SVM Object-based post processing system
Reference data   Cropland  Non-Cropland   Row total  Producer's acc.  Errors of omission
Cropland            118          3           121        97.5%             2.5%
Non-Cropland          2        187           189        98.9%             1.1%
Column total        120        190         (correct: 305)
User's accuracy    98.3%      98.4%
Errors of comm.     1.7%       1.6%
Overall accuracy: 98%   Kappa index: 97%

Classification data: Supervised Ogden, PCA, original, 3x3 texture (mean, contrast, variance), SVM Object-based classification
Reference data   Cropland  Non-Cropland   Row total  Producer's acc.  Errors of omission
Cropland            104          3           107        97.2%             2.8%
Non-Cropland         16        187           203        92.1%             7.9%
Column total        120        190         (correct: 291)
User's accuracy    86.7%      98.4%
Errors of comm.    13.3%       1.6%
Overall accuracy: 94%   Kappa index: 87%


Classification data: Supervised Ogden, original unprocessed image, SVM
Reference data   Cropland  Non-Cropland   Row total  Producer's acc.  Errors of omission
Cropland             88         91           179        49.2%            50.8%
Non-Cropland         32         99           131        75.6%            24.4%
Column total        120        190         (correct: 187)
User's accuracy    73.3%      52.1%
Errors of comm.    26.7%      47.9%
Overall accuracy: 60%   Kappa index: 23%

Classification data: Supervised, 3x3 occurrence texture measures, SVM
Reference data   Cropland  Non-Cropland   Row total  Producer's acc.  Errors of omission
Cropland             78         30           108        72.2%            27.8%
Non-Cropland         42        160           202        79.2%            20.8%
Column total        120        190         (correct: 238)
User's accuracy     65%       84.2%
Errors of comm.     35%       15.8%
Overall accuracy: 77%   Kappa index: 50%

Classification data: Supervised, Halounova 9 layer - original, texture, filters
Reference data   Cropland  Non-Cropland   Row total  Producer's acc.  Errors of omission
Cropland             61          7            68        89.7%            10.3%
Non-Cropland         59        183           242        75.6%            24.4%
Column total        120        190         (correct: 244)
User's accuracy    50.8%      96.3%
Errors of comm.    49.2%       3.7%
Overall accuracy: 79%   Kappa index: 51%

Classification data: Supervised Neural Net Classifier, Halounova 9 layer - original, texture, filters
Reference data   Cropland  Non-Cropland   Row total  Producer's acc.  Errors of omission
Cropland             63          8            71        88.7%            11.3%
Non-Cropland         57        182           239        76.2%            23.8%
Column total        120        190         (correct: 245)
User's accuracy    52.5%      95.8%
Errors of comm.    47.5%       4.2%
Overall accuracy: 79%   Kappa index: 52%

Classification data: Unsupervised ISODATA Classifier, Halounova 9 layer - original, texture, filters
Reference data   Cropland  Non-Cropland   Row total  Producer's acc.  Errors of omission
Cropland             93         39           132        70.5%            29.5%
Non-Cropland         27        151           178        84.8%            15.2%
Column total        120        190         (correct: 244)
User's accuracy    77.5%      79.5%
Errors of comm.    22.5%      20.5%
Overall accuracy: 79%   Kappa index: 56%

Classification data: Supervised Neural Net Classifier, Halounova 9 layer - original, texture, filters, Post Processing System
Reference data   Cropland  Non-Cropland   Row total  Producer's acc.  Errors of omission
Cropland            114          0           114       100%               0%
Non-Cropland          6        190           196        96.9%             3.1%
Column total        120        190         (correct: 304)
User's accuracy     95%       100%
Errors of comm.      5%         0%
Overall accuracy: 98%   Kappa index: 96%


Salt Lake City unsupervised classification


Classification data: Unsupervised Salt Lake City, original imagery, ISODATA 10 spectral classes
Reference data    Trees  Grass  Transportation  Residential  Commercial   Row total  Producer's acc.  Errors of omission
Trees               22     12         4              9            2           49         44.9%            55.1%
Grass                0      0         0              0            0            0          0%             100%
Transportation      12     14        24             20           10           80         30%              70%
Residential          0      0         0              0            0            0          0%             100%
Commercial           6      4        12             11           28           61         45.9%            54.1%
Column total        40     30        40             40           40         (correct: 74)
User's accuracy     55%     0%       60%             0%          70%
Errors of comm.     45%   100%       40%           100%          30%
Overall accuracy: 39%   Kappa index: 23%

Classification data: Unsupervised Salt Lake City, original imagery, ISODATA 25 spectral classes
Reference data    Trees  Grass  Transportation  Residential  Commercial   Row total  Producer's acc.  Errors of omission
Trees               22     12         4              9            2           49         44.9%            55.1%
Grass                0      0         0              0            0            0          0%             100%
Transportation       9     11        20             18            8           66         30.3%            69.7%
Residential          0      0         0              0            0            0          0%             100%
Commercial           9      7        16             13           30           75         48.4%            51.6%
Column total        40     30        40             40           40         (correct: 72)
User's accuracy     55%     0%       50%             0%          75%
Errors of comm.     45%   100%       50%           100%          25%
Overall accuracy: 38%   Kappa index: 30%

Classification data: Unsupervised Salt Lake City, original imagery, ISODATA 100 spectral classes
Reference data    Trees  Grass  Transportation  Residential  Commercial   Row total  Producer's acc.  Errors of omission
Trees               23     13         5             11            2           54         42.6%            57.4%
Grass                0      0         0              0            0            0          0%             100%
Transportation       8      9        18             14            6           55         32.7%            67.3%
Residential          0      0         0              0            0            0          0%             100%
Commercial           9      8        17             15           32           81         39.5%            60.5%
Column total        40     30        40             40           40         (correct: 73)
User's accuracy    57.5%    0%       45%             0%          80%
Errors of comm.    42.5%  100%       55%           100%          20%
Overall accuracy: 38%   Kappa index: 22%


Salt Lake City level 1 classification scheme


Classification data: Salt Lake City, original imagery, supervised Minimum Distance Classifier
Reference data    Trees  Grass  Transportation  Residential  Commercial   Row total  Producer's acc.  Errors of omission
Trees               17      3         2              4            0           26         65.4%            34.6%
Grass               12     14         9             12            6           53         26.4%            73.6%
Transportation       1      4         8              7            2           22         36.4%            63.6%
Residential          4      6        11              7            5           33         21.2%            78.8%
Commercial           6      3        10             10           27           56         48.2%            51.8%
Column total        40     30        40             40           40         (correct: 73)
User's accuracy    42.5%  46.7%      20%           17.5%        67.5%
Errors of comm.    57.5%  53.3%      80%           82.5%        32.5%
Overall accuracy: 38%   Kappa index: 23%

Classification data: Salt Lake City, original imagery, supervised Support Vector Machine classifier
Reference data    Trees  Grass  Transportation  Residential  Commercial   Row total  Producer's acc.  Errors of omission
Trees               15      0         1              2            0           18         83.3%            16.7%
Grass                0      0         0              0            0            0          0%             100%
Transportation      11     13        20             19            9           72         27.8%            72.2%
Residential          8     13         5              8            2           36         22.2%            77.8%
Commercial           6      4        14             11           29           64         45.3%            54.7%
Column total        40     30        40             40           40         (correct: 72)
User's accuracy    37.5%    0%       50%            20%         72.5%
Errors of comm.    62.5%  100%       50%            80%         27.5%
Overall accuracy: 38%   Kappa index: 21%

Classification data: Salt Lake City, 5x5 texture occurrence, supervised Support Vector Machine classifier
Reference data    Trees  Grass  Transportation  Residential  Commercial   Row total  Producer's acc.  Errors of omission
Trees               11      0         2              1            0           14         78.6%            21.4%
Grass                0      1         0              0            0            1        100%               0%
Transportation       4      8        17              8            8           45         37.8%            62.2%
Residential         24     19         9             24            8           84         28.6%            71.4%
Commercial           1      2        12              0           24           39         61.5%            38.5%
Column total        40     30        40             33           40         (correct: 77)
User's accuracy    27.5%   3.3%     42.5%          72.7%         60%
Errors of comm.    72.5%  96.7%     57.5%          27.3%         40%
Overall accuracy: 41%   Kappa index: 27%


Classification data: Salt Lake City, 5x5 texture occurrence, saturation stretch, supervised Maximum Likelihood
Reference data    Trees  Grass  Transportation  Residential  Commercial   Row total  Producer's acc.  Errors of omission
Trees               12      0         2              1            0           15         80%              20%
Grass                7     16         9              8            3           43         37.2%            62.8%
Transportation       4      7        17              9           13           50         34%              66%
Residential         16      6         1             15            4           42         35.7%            64.3%
Commercial           1      1        11              7           20           40         50%              50%
Column total        40     30        40             40           40         (correct: 80)
User's accuracy     30%   53.3%     42.5%          37.5%         50%
Errors of comm.     70%   46.7%     57.5%          62.5%         50%
Overall accuracy: 42%   Kappa index: 28%

Classification data: Salt Lake City, 11x11 texture occurrence, supervised Support Vector Machine classifier
Reference data    Trees  Grass  Transportation  Residential  Commercial   Row total  Producer's acc.  Errors of omission
Trees                8      1         3              1            0           13         61.5%            38.5%
Grass                2      3         1              1            0            7         42.9%            57.1%
Transportation       2      5        23             11            5           46         50%              50%
Residential         27     18         7             24           14           90         26.7%            73.3%
Commercial           1      3         6              3           21           34         61.8%            38.2%
Column total        40     30        40             40           40         (correct: 79)
User's accuracy     20%    10%      57.5%           60%         52.5%
Errors of comm.     80%    90%      42.5%           40%         47.5%
Overall accuracy: 42%   Kappa index: 26%

Classification data: Salt Lake City, Neural Net Classifier, Halounova image
Reference data    Trees  Grass  Transportation  Residential  Commercial   Row total  Producer's acc.  Errors of omission
Trees                8      2         3              6            1           20         40%              60%
Grass                0      0         0              0            0            0          0%             100%
Transportation       1      6        22              2           11           42         52.4%            47.6%
Residential         31     22         8             32           15          108         29.6%            70.4%
Commercial           0      0         7              0           13           20         65%              35%
Column total        40     30        40             40           40         (correct: 75)
User's accuracy     20%     0%       55%            80%         32.5%
Errors of comm.     80%   100%       45%            20%         67.5%
Overall accuracy: 39%   Kappa index: 23%


Classification data: Supervised Salt Lake City, Object-based Support Vector Machine classifier
Reference data    Trees  Grass  Transportation  Residential  Commercial   Row total  Producer's acc.  Errors of omission
Trees               15      2         1              1            0           19         78.9%            21.1%
Grass               11     18         6             15            2           52         34.6%            65.4%
Transportation       2      3        20              3            3           31         64.5%            35.5%
Residential         11      7         4             20            7           49         40.8%            59.2%
Commercial           1      0         9              1           28           39         71.8%            28.2%
Column total        40     30        40             40           40         (correct: 101)
User's accuracy    37.5%   60%       50%            50%          70%
Errors of comm.    62.5%   40%       50%            50%          30%
Overall accuracy: 53%   Kappa index: 42%

Classification data: Supervised Salt Lake City, Object-based Support Vector Machine Post Processing System
Reference data    Trees  Grass  Transportation  Residential  Commercial   Row total  Producer's acc.  Errors of omission
Trees               16      2         1              1            0           20         80%              20%
Grass                5     20         3              6            0           34         58.8%            41.2%
Transportation       2      4        33              3            1           43         76.7%            23.3%
Residential          6      0         0             29            0           35         82.9%            17.1%
Commercial          11      4         3              1           39           58         67.2%            32.8%
Column total        40     30        40             40           40         (correct: 137)
User's accuracy     40%   66.7%     82.5%          72.5%        97.5%
Errors of comm.     60%   33.3%     17.5%          27.5%         2.5%
Overall accuracy: 72%   Kappa index: 65%

Salt Lake City level 2 classification scheme


Classification data: Unsupervised Salt Lake City, original imagery, ISODATA 10 spectral classes
Reference data     Built up Area  Vegetation  Transportation   Row total  Producer's acc.  Errors of omission
Built up Area           39            10            12             61         63.9%            36.1%
Vegetation              11            34             4             49         69.4%            30.6%
Transportation          30            26            24             80         30%              70%
Column total            80            70            40           (correct: 97)
User's accuracy        48.7%         48.6%          60%
Errors of comm.        51.3%         51.4%          40%
Overall accuracy: 51%   Kappa index: 20%

Classification data: Unsupervised Salt Lake City, original imagery, ISODATA 25 spectral classes
Reference data     Built up Area  Vegetation  Transportation   Row total  Producer's acc.  Errors of omission
Built up Area           43            16            16             75         57.3%            42.7%
Vegetation              11            34             4             49         69.4%            30.6%
Transportation          26            20            20             66         30.3%            69.7%
Column total            80            70            40           (correct: 97)
User's accuracy        53.8%         48.6%          50%
Errors of comm.        46.2%         51.4%          50%
Overall accuracy: 51%   Kappa index: 20%


Classification data: Salt Lake City, original imagery, supervised Minimum Distance Classifier
Reference data     Built up Area  Vegetation  Transportation   Row total  Producer's acc.  Errors of omission
Built up Area           49            19            21             89         55.1%            44.9%
Vegetation              22            46            11             79         58.2%            41.8%
Transportation           9             5             8             22         36.4%            63.6%
Column total            80            70            40           (correct: 103)
User's accuracy        61.3%         65.7%          20%
Errors of comm.        38.7%         34.3%          80%
Overall accuracy: 54%   Kappa index: 25%

Classification data: Salt Lake City, original imagery, supervised Support Vector Machine classifier
Reference data     Built up Area  Vegetation  Transportation   Row total  Producer's acc.  Errors of omission
Built up Area           50            31            19            100         50%              50%
Vegetation               2            15             1             18         83.3%            16.7%
Transportation          28            24            20             72         27.8%            72.2%
Column total            80            70            40           (correct: 85)
User's accuracy        62.5%         21.4%          50%
Errors of comm.        37.5%         78.6%          50%
Overall accuracy: 45%   Kappa index: 8%

Classification data: Supervised Salt Lake City, Object-based Support Vector Machine classifier
Reference data     Built up Area  Vegetation  Transportation   Row total  Producer's acc.  Errors of omission
Built up Area           56            19            13             88         63.6%            36.4%
Vegetation              18            46             7             71         64.8%            35.2%
Transportation           6             5            20             31         64.5%            35.5%
Column total            80            70            40           (correct: 122)
User's accuracy         70%          65.7%          50%
Errors of comm.         30%          34.3%          50%
Overall accuracy: 64%   Kappa index: 41%

Classification data: Supervised Salt Lake City, Object-based, 5x5 occurrence texture, saturation stretch, Support Vector Machine classifier
Reference data     Built up Area  Vegetation  Transportation   Row total  Producer's acc.  Errors of omission
Built up Area           56            12            12             80         70%              30%
Vegetation              22            46             5             73         63%              37%
Transportation           2            12            23             37         62.2%            37.8%
Column total            80            70            40           (correct: 125)
User's accuracy         70%          65.7%         57.5%
Errors of comm.         30%          34.3%         42.5%
Overall accuracy: 66%   Kappa index: 44%


Classification data: Supervised Salt Lake City, Neural Net Classifier
Reference data     Built up Area  Vegetation  Transportation   Row total  Producer's acc.  Errors of omission
Built up Area           60            53            15            128         46.9%            53.1%
Vegetation               7            10             3             20         50%              50%
Transportation          13             7            22             42         52.4%            47.6%
Column total            80            70            40           (correct: 92)
User's accuracy         75%          14.3%          55%
Errors of comm.         25%          85.7%          45%
Overall accuracy: 48%   Kappa index: 23%

Classification data: Supervised Salt Lake City, Object-based, 11x11 occurrence texture, Support Vector Machine classifier
Reference data     Built up Area  Vegetation  Transportation   Row total  Producer's acc.  Errors of omission
Built up Area           62            49            13            124         50%              50%
Vegetation               2            14             4             20         70%              30%
Transportation          16             7            23             46         50%              50%
Column total            80            70            40           (correct: 99)
User's accuracy        77.5%          20%          57.5%
Errors of comm.        22.5%          80%          42.5%
Overall accuracy: 52%   Kappa index: 20%

Classification data: Supervised Salt Lake City, Object-based Support Vector Machine Post Processing System
Reference data     Built up Area  Vegetation  Transportation   Row total  Producer's acc.  Errors of omission
Built up Area           69            21             3             93         74.2%            25.8%
Vegetation               1            43             4             48         89.6%            10.4%
Transportation           4             6            33             43         76.7%            23.3%
Column total            74            70            40           (correct: 145)
User's accuracy        93.2%         61.4%         82.5%
Errors of comm.         6.8%         38.6%         17.5%
Overall accuracy: 76%   Kappa index: 65%

Salt Lake City level 3 classification scheme


Classification data: Unsupervised Salt Lake City, original imagery, ISODATA 10 spectral classes
Reference data       Built up Area  Non Built up Area   Row total  Producer's acc.  Errors of omission
Built up Area             39               22               61         63.9%            36.1%
Non Built up Area         41               88              129         68.2%            31.8%
Column total              80              110            (correct: 127)
User's accuracy          48.8%             80%
Errors of comm.          51.2%             20%
Overall accuracy: 67%   Kappa index: 30%


Classification data: Unsupervised Salt Lake City, original imagery, ISODATA 25 spectral classes
Reference data       Built up Area  Non Built up Area   Row total  Producer's acc.  Errors of omission
Built up Area             43               32               75         57.3%            42.7%
Non Built up Area         37               78              115         67.8%            32.2%
Column total              80              110            (correct: 121)
User's accuracy          53.8%            70.9%
Errors of comm.          46.2%            29.1%
Overall accuracy: 64%   Kappa index: 25%

Classification data: Salt Lake City, 5x5 texture occurrence, supervised Support Vector Machine classifier
Reference data       Built up Area  Non Built up Area   Row total  Producer's acc.  Errors of omission
Built up Area             63               67              130         48.5%            51.5%
Non Built up Area         17               43               60         71.7%            28.3%
Column total              80              110            (correct: 106)
User's accuracy          78.8%            39.1%
Errors of comm.          21.2%            60.9%
Overall accuracy: 56%   Kappa index: 16%

Classification data: Salt Lake City, 5x5 texture occurrence, saturation stretch, supervised Maximum Likelihood
Reference data       Built up Area  Non Built up Area   Row total  Producer's acc.  Errors of omission
Built up Area             46               36               82         56.1%            43.9%
Non Built up Area         34               74              108         68.5%            31.5%
Column total              80              110            (correct: 120)
User's accuracy          57.5%            67.3%
Errors of comm.          42.5%            32.7%
Overall accuracy: 63%   Kappa index: 25%

Classification data: Salt Lake City, 11x11 texture occurrence, supervised Support Vector Machine classifier
Reference data       Built up Area  Non Built up Area   Row total  Producer's acc.  Errors of omission
Built up Area             62               62              124         50%              50%
Non Built up Area         18               48               66         72.7%            27.3%
Column total              80              110            (correct: 110)
User's accuracy          77.5%            43.6%
Errors of comm.          22.5%            56.4%
Overall accuracy: 58%   Kappa index: 20%

Classification data: Salt Lake City, Neural Net Classifier
Reference data       Built up Area  Non Built up Area   Row total  Producer's acc.  Errors of omission
Built up Area             60               68              128         46.9%            53.1%
Non Built up Area         20               42               62         67.7%            32.3%
Column total              80              110            (correct: 102)
User's accuracy           75%             38.2%
Errors of comm.           25%             61.8%
Overall accuracy: 54%   Kappa index: 12%


Classification data: Supervised Salt Lake City, Object-based Support Vector Machine classifier
Reference data       Built up Area  Non Built up Area   Row total  Producer's acc.  Errors of omission
Built up Area             56               32               88         63.6%            36.4%
Non Built up Area         24               78              102         76.5%            23.5%
Column total              80              110            (correct: 134)
User's accuracy           70%             70.9%
Errors of comm.           30%             29.1%
Overall accuracy: 71%   Kappa index: 40%

Classification data: Supervised Salt Lake City, Object-based Support Vector Machine Post Processing System
Reference data       Built up Area  Non Built up Area   Row total  Producer's acc.  Errors of omission
Built up Area             69               24               93         74.2%            25.8%
Non Built up Area         11               86               97         88.7%            11.3%
Column total              80              110            (correct: 155)
User's accuracy          86.3%            78.2%
Errors of comm.          13.7%            21.8%
Overall accuracy: 82%   Kappa index: 63%


APPENDIX 2: VECTOR EDITING TOOLBAR C#.NET CODE

Merge Polygon Button


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using ESRI.ArcGIS.ArcMapUI;
using ESRI.ArcGIS.Carto;
using ESRI.ArcGIS.Editor;
using ESRI.ArcGIS.Geometry;
using ESRI.ArcGIS.Geodatabase;

namespace MergePolys
{
    class MergePolysMethods
    {
        // neighborType is either "smallest" or "largest" depending on the button clicked.
        static internal void Merge(string neighborType)
        {
            try
            {
                IActiveView activeView = ArcMap.Document.ActiveView;
                IEditor3 theEditor;
                IEditLayers editLayers;
                IFeatureSelection featSel;
                string message;

                // Check that the edit session is set up correctly for merging and set up featSel.
                if (SetupOK(out theEditor, out editLayers, out featSel, out message) != true)
                {
                    MessageBox.Show(message);
                    return;
                }

                ICursor cursor;
                IFeatureCursor featCursor;
                IFeature currentFeat;
                IFeature featToBeMergedWith;
                ITopologicalOperator topoOp;

                featSel.SelectionSet.Search(null, false, out cursor);
                featCursor = cursor as IFeatureCursor;

                currentFeat = featCursor.NextFeature();
                while (currentFeat != null)
                {
                    featToBeMergedWith = GetAdjacentFeature(currentFeat, neighborType);
                    if (featToBeMergedWith != null)
                    {
                        // Union the selected polygon into its neighbor, then delete it.
                        topoOp = featToBeMergedWith.ShapeCopy as ITopologicalOperator;
                        featToBeMergedWith.Shape = topoOp.Union(currentFeat.ShapeCopy);
                        featToBeMergedWith.Store();
                        currentFeat.Delete();
                    }
                    else
                    {
                        MessageBox.Show("No polygons are adjacent to: " + currentFeat.OID);
                    }
                    currentFeat = featCursor.NextFeature();
                }
                ArcMap.Document.ActiveView.Refresh();
            }
            catch (Exception ex)
            {
                MessageBox.Show("Error: " + ex.Message + Environment.NewLine + "Merge aborted!");
            }
        }

        static internal void Merge(IPoint clickPoint)
        {
            try
            {
                IActiveView activeView = ArcMap.Document.ActiveView;
                IEditor3 theEditor;
                IEditLayers editLayers;
                IFeatureSelection featSel;
                string message;

                // Check that the edit session is set up correctly for merging and set up featSel.
                if (SetupOK(out theEditor, out editLayers, out featSel, out message) != true)
                {
                    MessageBox.Show(message);
                    return;
                }

                if (featSel.SelectionSet.Count > Properties.Settings.Default.MaxSelBeforeWarn)
                {
                    string dialogMessage = "There are " +
                        Properties.Settings.Default.MaxSelBeforeWarn.ToString() +
                        " features selected to be merged. Are you sure you want to merge all those that touch the clicked polygon?";
                    const string caption = "Warning";
                    var result = MessageBox.Show(dialogMessage, caption, MessageBoxButtons.YesNo,
                                                 MessageBoxIcon.Exclamation);
                    if (result == DialogResult.No)
                    {
                        return;
                    }
                }

                ICursor cursor;
                IFeatureCursor featCursor;
                IFeature currentFeat;
                IFeature featToBeMergedWith;
                ITopologicalOperator topoOp;

                featSel.SelectionSet.Search(null, false, out cursor);
                featCursor = cursor as IFeatureCursor;

                currentFeat = featCursor.NextFeature();

                // Get the feature selected by the mouse click.
                clickPoint.SpatialReference = currentFeat.Shape.SpatialReference;
                ISpatialFilter clickSpatFilter = new SpatialFilter();
                clickSpatFilter.Geometry = clickPoint as IGeometry;
                clickSpatFilter.SpatialRel = ESRI.ArcGIS.Geodatabase.esriSpatialRelEnum.esriSpatialRelIntersects;
                IQueryFilter queryFilter = (IQueryFilter)clickSpatFilter;
                IFeatureCursor toBeMergedWithFeatCursor =
                    editLayers.CurrentLayer.FeatureClass.Search(queryFilter, false);

                int numMerged = 0;
                featToBeMergedWith = toBeMergedWithFeatCursor.NextFeature();
                while (currentFeat != null)
                {
                    if (FeaturesTouch(featToBeMergedWith, currentFeat))
                    {
                        topoOp = featToBeMergedWith.ShapeCopy as ITopologicalOperator;
                        featToBeMergedWith.Shape = topoOp.Union(currentFeat.ShapeCopy);
                        featToBeMergedWith.Store();
                        currentFeat.Delete();
                        numMerged++;
                    }
                    currentFeat = featCursor.NextFeature();
                }

                if (numMerged == 0)
                {
                    MessageBox.Show("Merge polygon did not touch selected features");
                }
                else
                {
                    ArcMap.Document.ActiveView.Refresh();
                }
            }
            catch (Exception ex)
            {
                MessageBox.Show("Error: " + ex.Message + Environment.NewLine + "Merge aborted!");
            }
        }

        static private IFeature GetAdjacentFeature(IFeature featToBeMerged, string neighborType)
        {
            ISpatialFilter spatFilter = new SpatialFilter();
            IFeatureClass theFeatClass = featToBeMerged.Class as IFeatureClass;
            spatFilter.Geometry = featToBeMerged.Shape;
            spatFilter.SpatialRel = esriSpatialRelEnum.esriSpatialRelIntersects;
            IFeatureCursor featCursor = theFeatClass.Search(spatFilter, false);
            IFeature tempFeat;
            IFeature featToBeMergedWith = null;
            Double threshold = 0;
            Double tempFeatArea = 0;

            tempFeat = featCursor.NextFeature();
            while (tempFeat != null)
            {
                if (tempFeat.OID != featToBeMerged.OID)
                {
                    // Track the smallest or largest adjacent polygon by area.
                    tempFeatArea = GetArea(tempFeat.Shape as IArea);
                    if (featToBeMergedWith == null)
                    {
                        featToBeMergedWith = tempFeat;
                        threshold = tempFeatArea;
                    }
                    else
                    {
                        switch (neighborType)
                        {
                            case "smallest":
                                if (tempFeatArea < threshold)
                                {
                                    featToBeMergedWith = tempFeat;
                                    threshold = tempFeatArea;
                                }
                                break;
                            case "largest":
                                if (tempFeatArea > threshold)
                                {
                                    featToBeMergedWith = tempFeat;
                                    threshold = tempFeatArea;
                                }
                                break;
                        }
                    }
                }
                tempFeat = featCursor.NextFeature();
            }
            return featToBeMergedWith;
        }

        static private bool FeaturesTouch(IFeature featA, IFeature featB)
        {
            IRelationalOperator relationalOperator = (IRelationalOperator)featA.Shape;
            return relationalOperator.Touches(featB.Shape);
        }

        static private Double GetArea(IArea thePolygon)
        {
            return thePolygon.Area;
        }

        static private bool SetupOK(out IEditor3 theEditor, out IEditLayers editLayers,
                                    out IFeatureSelection featSel, out string message)
        {
            try
            {
                theEditor = null;
                editLayers = null;
                featSel = null;
                theEditor = ArcMap.Application.FindExtensionByName("ESRI Object Editor") as IEditor3;
                if (theEditor.EditState != esriEditState.esriStateEditing)
                {
                    message = "Please start an editing session first!";
                    return false;
                }
                editLayers = theEditor as IEditLayers;
                if (editLayers.CurrentLayer.FeatureClass.ShapeType != esriGeometryType.esriGeometryPolygon)
                {
                    message = "Current edit layer must be a polygon layer.";
                    return false;
                }
                featSel = editLayers.CurrentLayer as IFeatureSelection;
                if (featSel.SelectionSet.Count == 0)
                {
                    message = "No features to be merged have been selected.";
                    return false;
                }
                message = "OK";
                return true;
            }
            catch
            {
                throw new Exception("Error checking edit session.");
            }
        }
    }
}
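A note on the design: the two toolbar buttons call Merge("smallest") or Merge("largest"), so each selected polygon is unioned into either its smallest or its largest adjacent neighbor by area, while the click-driven overload instead absorbs every selected polygon that touches the clicked feature. This is what allows the sliver polygons produced by the raster-to-vector conversion to be folded into their surrounding land cover polygons with very little user effort.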

Select by Area Button


using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using ESRI.ArcGIS.esriSystem;
using ESRI.ArcGIS.ArcMapUI;
using ESRI.ArcGIS.Carto;
using ESRI.ArcGIS.Editor;
using ESRI.ArcGIS.Geometry;
using ESRI.ArcGIS.Geodatabase;

namespace MergePolys
{
    public partial class SelectByArea : Form
    {
        public SelectByArea()
        {
            InitializeComponent();
        }

        private void btn_Cancel_Click(object sender, EventArgs e)
        {
            this.Dispose();
        }

        private void btn_Select_Click(object sender, EventArgs e)
        {
            IActiveView activeView = ArcMap.Document.ActiveView;
            IEditor3 theEditor;
            IEditLayers editLayers;
            IFeatureClass featCLS;
            try
            {
                theEditor = ArcMap.Application.FindExtensionByName("ESRI Object Editor") as IEditor3;
                if (theEditor.EditState != esriEditState.esriStateEditing)
                {
                    MessageBox.Show("Please start an editing session first!");
                    this.Dispose();
                }
                else
                {
                    editLayers = theEditor as IEditLayers;
                    featCLS = editLayers.CurrentLayer.FeatureClass;
                    if (featCLS.ShapeType != esriGeometryType.esriGeometryPolygon)
                    {
                        MessageBox.Show("Current edit layer must be a polygon layer.");
                    }
                    else
                    {
                        // Select every polygon smaller than the area typed into the form.
                        IFeatureSelection featureSelection = editLayers.CurrentLayer as IFeatureSelection;
                        IQueryFilter qf = new QueryFilterClass();
                        qf.WhereClause = "Shape_Area < " + this.txt_Area.Text;
                        activeView.PartialRefresh(esriViewDrawPhase.esriViewGeoSelection, null, null);
                        featureSelection.SelectFeatures(qf, esriSelectionResultEnum.esriSelectionResultNew, false);
                        activeView.PartialRefresh(esriViewDrawPhase.esriViewGeoSelection, null, null);
                        this.Close();
                    }
                }
            }
            catch (Exception ex)
            {
                MessageBox.Show("Error selecting by area - " + ex.Message);
                this.Dispose();
            }
        }
    }
}
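The select-by-area form is the natural companion to the merge buttons: selecting every polygon whose Shape_Area falls below a threshold isolates the slivers so they can be merged in one operation. Note that the WhereClause assumes the edit layer is stored in a geodatabase, which maintains the Shape_Area field automatically; for a shapefile the area would first have to be calculated into an attribute field.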


REFERENCES

Alhaddad, B., Roca, J., and Burns, M., 2009. Monitoring urban sprawl from historical aerial photographs and satellite imagery using texture analysis and mathematical morphology approaches. In: Territorial Cohesion of Europe & Integrative Planning: 49th European Congress of the Regional Science Association International, 25-29 August 2009, Lodz. Belgium: European Regional Science Association, 1-9.

Anderson, J.J. and Cobb, N.S., 2004. Tree cover discrimination in panchromatic aerial imagery of pinyon-juniper woodlands. Photogrammetric Engineering & Remote Sensing, 70 (9), 1063-1068.

Awwad, W.A., 2003. Land cover mapping: a comparison between manual digitizing and automated classification of black and white historical aerial photography. Thesis (MS). University of Florida.

Ashish, D., 2002. Land classification of aerial images using artificial neural networks. Thesis (MS). University of Georgia.

Browning, D.M., Archer, S.R., and Byrne, A.T., 2009. Field validation of 1930s aerial photography: What are we missing? Journal of Arid Environments, 73, 844-853.

Campbell, J.B., 2008. Introduction to remote sensing. 4th ed. New York: Guilford Press.

Caridade, C.M.R., Marcal, A.R.S., and Mendonca, T., 2008. The use of texture for image classification of black and white air photographs. International Journal of Remote Sensing, 29 (2), 593-607.

Carmel, Y. and Kadmon, R., 1998. Computerized classification of Mediterranean vegetation using panchromatic aerial photographs. Journal of Vegetation Science, 9, 445-454.

Castilla, G. and Hay, G., 2007. An automated delineation tool for assisted interpretation of digital imagery [online]. American Society for Photogrammetry and Remote Sensing 2007 Annual Conference, Tampa, Florida. Available from: http://www.asprs.org/a/publications/proceedings/tampa2007/0014.pdf [Accessed 17 September 2011].

Congalton, R. and Green, K., 1993. A practical look at the sources of confusion in error matrix generation. Photogrammetric Engineering & Remote Sensing, 59 (5), 641-644.

106

Cots-Folch, R., Aitkenhead, M. J., and Martinez-Casasnovas, J.A, 2007. Mapping land cover from detailed aerial photography data using textural and neural network analysis. International Journal of Remote Sensing, 28 (7), 1625-1642. Elmqvist, B., Ardo, J. and Olsson, L., 2008. Land use studies in drylands: an evaluation of object-oriented classification of very high resolution panchromatic imagery. International Journal of Remote Sensing, 29 (24), 7129-7140. ENVI, 2011. ENVI 4.8 Software Documentation Applying K-Means classification. Erener, A. and Duzgun, S. H., 2009. A methodology for land use change detection of high resolution pan images based on texture analysis. Italian Journal of Remote Sensing, 41 (2), 47-59. ESRI, 2011. ArcGIS 10 Desktop Help How Dendrogram works. [online] ArcGIS Resource Center. Available from: http://help.arcgis.com/en/arcgisdesktop/10.0/help/index.html#/How_Dendrogram _works/009z000000q6000000/ [Accessed 15 September 2011]. Fauvel, M. and Chanussot, J., 2007. A joint spatial and spectral SVMs classification of panchromatic images. Geoscience and Remote Sensing Symposium, 1497-1500. Gong, P., Marceau, D.J. and Howarth, P.J., 1992. A comparison of spatial feature extraction algorithms for land-use classification with SPOT HRV data. Remote Sensing of Environment, 40 (2), 137-151. Halounova, L., 2009. Object oriented land cover classification of panchromatic analog image data [online]. Available from: http://gis.vsb.cz/zsv/images/stories/publikace/halounovajourremsensing_jan2009. pdf [Accessed Nov 4 2011]. Halounova, L., 2005. The automatic classification of B&W aerial photos [online]. Available from: http://www.isprs.org/proceedings/XXXV/congress/comm3/papers/313.pdf [Accessed Nov 4 2011]. Haralick, R.M, Shanmugam, K. and Dinstein, I., 1973. Textural Features for Image Classification. IEEE Transactions on Systems, Man and Cybernetics, 3 (6), 610621. Haralick, R. 1979. Statistical and structural approaches to texture. Proceedings of the IEEE, 67 (5), 786-804. Hoffer, R.M., 1984. Remote Sensing to Measure the Distribution and Structure of Vegetation in The Role of Terrestrial Vegetation in the Global Carbon Cycle: Measurement by Remote Sensing. In: Woodwell, G. M., ed. Volume 23 of

107

Scientific Committee on Problems of the Environment Series. Sussex, John Wiley & Sons Ltd., 131-159. Hu, Q., Luo, S., Qiao, Y. and Qian, G., 2008. Supervised grayscale thresholding based on transition regions. Image and Vision Computing, 26, 1677-1684. Hudak, A.T. and Wessman, C.A., 1998. Textural analysis of historical aerial photography to characterize woody plant encroachment in South African savanna. Remote Sensing of Environment, 66, 317-330. Hung, M.-C., and Wu, Y.-H., 2005. Mapping and visualizing the Great Salt Lake landscape dynamics using multi-temporal satellite images, 1972-1976. International Journal of Remote Sensing, 26, 1815-1834. Jensen, J.R., 2005. Introductory digital image processing a remote sensing perspective 3rd ed. New Jersey: Pearson Prentice Hall. Kadmon, R. and Harari-Kremer, R., 1999. Studying long-term vegetation dynamics using digital processing of historical aerial photographs. Remote Sensing of Environment, 68, 164-176. Laliberte, A.S, Rango, A., Havstad, K.M., Paris, J.F., Reldon, B.F., McNeely, R. and Gonzalez, A.L., 2004. Object-oriented image analysis for mapping shrub encroachment from 1937 to 2003 in southern New Mexico. Remote Sensing of Environment, 93, 198-210. Li, Y., Onasch, C.M. and Guo, Y., 2008. GIS-based detection of grain boundaries. Journal of Structural Geology, 30, 431-443. Maillard, P., 2003. Comparing texture analysis methods through classification. Photogrammetric Engineering & Remote Sensing, 69 (4), 357-367. Mast, J.N, Veblen, T.T. and Hodgson, M.E., 1997. Tree invasion within a pine/grassland ecotone: an approach with historic aerial photography and GIS modeling. Forest Ecology and Management, 93, 181-194. Mathews, Louise, 2005. Aerial Photography Field Office (APFO): Historical imagery holdings for the United States Department of Agriculture (USDA) [online]. Farm Service Agency. Available from: http://www.fsa.usda.gov/Internet/FSA_File/vault_holdings2.pdf [Accessed 8 February 2011]. Middleton, M., Narhi, P., Sutinen, M.L. and Sutinen, R., 2008. Object-based change detection of historical aerial photographs reveals altitudinal forest expansion [online]. International Society for Photogrammetry and Remote Sensing Geographic Object-based Image Analysis Calgary Canada. Available from:

108

http://www.isprs.org/proceedings/XXXVIII/4C1/Sessions/Session4/6791_middleton_Proc_pap.pdf [Accessed 25 November 2010]. Myint, S. W. and Lam, N., 2005. A study of lacunarity-based texture analysis approaches to improve urban image classification. Computers, Environment and Urban Systems, 29, 501-523. Okeke, F. and Karnieli A., 2006. Methods for fuzzy classification and accuracy assessement of historical aerial photographs for vegetation change analysis. Part I: Algorithm development. International Journal of Remote Sensing, 27, 153176. Pacifici, F., Chini, M. and Emery, W.J., 2009. A neural network approach using multiscale textural metrics from very high-resolution panchromatic imagery for urban land-use classification. Remote Sensing of Environment, 113, 1276-1292. Pillai, R.B. and Weisberg, P.J., 2005. Object-oriented classification of repeat aerial photography for quantifying woodland expansion in central Nevada. 20th Biennial Workshop on Aerial Photography, Videography, and High Resolution Digital Imagery for Resource Assessment, 3-5 October2005, Weslaco, Texas. Pringle, R.M., Syfert, M., Webb, J.K., and Shine, R., 2009. Quantifying historical changes in habitat availability for endangered species: use of pixel- and objectbased remote sensing. Journal of Applied Ecology, 46, 544-553. Resler, L.M., Fonstad, M.A. and Butler, D.R., 2004. Mapping the alpine treeline ecotone with digital aerial photography and textural analysis. Geocarto International, 19 (1), 37-44. Short, D. and Short D., 1987. Studying long term community dynamics using image processing. In: Tenhunen, J.D., ed. Plant response to stress in Mediterranean climates. Berlin: Springer-Verlag, 165-171. Srinivasan, G.N. and Shobha, G., 2008. Statistical Texture Analysis. Proceedings of World Academy of Science, Engineering and Technology, 36, 1264-1269. Strahler, A.H., Woodcock, C.E. and Smith J.A., 1986. On the nature of models in remote sensing. Remote Sensing of Environment, 20, 121-139. Tuceryan, M. and Jain, A.K., 1998. Texture Analysis. In: Chen, C.H., Pau, L.F., and Wang, P.S.P, eds. The handbook of pattern recognition and computer vision 2nd edition. World Scientific Publishing Co., 207-248.

109

U.S. Army Corps of Engineers, 1995. Manual No. 1110-1-1802 on Engineering and Design: Geophysical exploration for engineering and environmental investigations, chapter 9: Remote Sensing [online]. Available from: http://140.194.76.129/publications/eng-manuals/em1110-1-1802/toc.htm [Accessed 1 January 2011].

