
International Journal of Emerging Trends & Technology in Computer Science (IJETTCS)

Web Site: www.ijettcs.org Email: editor@ijettcs.org, editorijettcs@gmail.com Volume 2, Issue 4, July-August 2013 ISSN 2278-6856

Similarity analysis approach to identify moving objects


Sweta
M. Tech. student ECE, UIET, MDU, Rohtak, Haryana

Abstract: Object detection is a major research area required in a number of applications, such as the detection of a plane or a missile in space. The presented work is performed on images as well as on videos to identify such objects. For images, a dynamic model is presented to perform the object detection: first, feature analysis separates the background from the foreground of the image, and then edge detection along with segmentation is applied to identify the object in the image. For videos, the same kind of work is implemented: a similarity measure approach first identifies the object frame in the video, this frame is converted to an image, and finally object detection is performed on the image.

Keywords: feature analysis, edge detection, segmentation, similarity measure approach, object detection.

1. INTRODUCTION
The common aspects of this research are image processing and its associated operations. The work includes the study of the complete architecture used to extract information from an image, to perform segmentation to extract image features, and to perform classification. Here a detailed description of all the steps of image analysis, and of the extraction of the object of interest from it, is given. Digital image processing refers to the processing of digital images by means of digital computers. The inputs of this process are images; its outputs can be images or extracted features describing different attributes of the images, including the recognition of individual objects. Figure 1 shows the basic steps involved in the processing of digital images.

Fig. 1: Basic steps involved in image processing

(a) Image Acquisition
Data collection is a major aspect of image processing, and image acquisition is required to collect the actual source image. The same kind of collection has been used in other studies of automatic scan-image segmentation, and various image databases are available world-wide along with their names, descriptions, and applications.

(b) Pre-processing
The principal objective of image enhancement is to process an image for a specific task so that the processed image is better suited to viewing than the original image. Image enhancement methods basically fall into two domains:
1) Spatial domain: as the name suggests, the methods in this approach directly manipulate the pixel values of an image.
2) Frequency domain: here a Fourier transform of the image is computed first, different operations are performed on it, and the result is obtained by taking the inverse Fourier transform.
Due to the simplicity of working directly on voxel values, the enhancement techniques implemented in ITK are in the spatial domain. Image pre-processing falls under image enhancement. Due to various limitations of image acquisition devices, the images they produce are prone to many errors. These limitations may be spatial or temporal, such as patient movement during acquisition, and their effects include noise (unwanted data), deformation, bad illumination, and blur in the acquired images. Image analysis therefore often requires pre-processing, in which different filters are applied to remove the noise while preserving the clinically important structures; this improves the performance of subsequent tasks. The images used for the segmentation process in our case are carefully chosen so that they are less prone to these errors. Certain filters are used to perform the image processing: they extract information as well as apply the required changes to the image. Before any operation is performed on an image, the foremost task is to convert it to a normalized image, and this conversion requires filtration. Filtration removes the different kinds of noise from the image and standardizes its physical features. The different methods used for pre-processing these images with ITK filters are briefly explained in the following paragraphs.

1.1 Region of Interest Image Filter (ROI)
This filter allows the user to choose a specific spatial area in an image. In image pre-processing it is primarily used to isolate an area (a volume in 3D) of an image for processing; MIP lets users choose different parts of the image by selecting different dimensions with the ROI filter.
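The paper's pre-processing is described in terms of ITK filters, while the implementation later in the paper uses MATLAB; a minimal MATLAB sketch of the ROI idea is given below, where the file name and the coordinates are illustrative assumptions, not values from the paper.

% Minimal sketch: isolating a region of interest by plain array indexing.
img = imread('scene.png');     % 'scene.png' is an assumed input file
if size(img, 3) == 3
    img = rgb2gray(img);       % keep a single intensity channel
end
roi = img(50:200, 120:320);    % hypothetical rows and columns of interest
imshow(roi);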

1.2 Smoothing Filters
Real image data has a level of uncertainty that manifests itself in the variability of the measures assigned to voxels. This uncertainty is usually interpreted as noise and considered undesirable in the image data. Smoothing filters are used for blurring and noise reduction. Blurring is used to eliminate small details from an image prior to large-object extraction and to bridge small gaps in lines or curves. Noise reduction can be done by blurring with a linear filter or with a non-linear filter.

1.2.1 Neighbourhood Filters
These filters use the concept of locality, operating on the neighbourhood of every image element in the input image. ITK uses iterators to visit the input pixels of an image; in particular, it uses neighbourhood iterators to visit the neighbourhood of each pixel element. The mean and median filters are neighbourhood filters that are generally used for smoothing an image.

Mean Image Filter: This is a spatial linear filter. The idea behind this smoothing filter is to replace the value of each input image element with the average of the grey levels in the neighbourhood defined by the filter mask or kernel. The ITK mean filter computes the value of each output pixel as the statistical mean of the neighbourhood (the original pixel included) of the corresponding input pixel. The size of the neighbourhood is set by specifying a radius. Figure 1.2(a) illustrates the local effect of the mean image filter in the 2D case with radius one: the original intensity value of the centre pixel, 10, is replaced by 52 after the filter is applied.

Fig. 1.2(a): Mean filter operation

Median Image Filter: This filter falls into the category of non-linear filters. Its response is based on ranking (ordering) the pixels contained in the image area encompassed by the filter and then replacing the value of the centre pixel with the value determined by that ranking. The median filter replaces the value of a pixel with the median of the grey levels in its neighbourhood (the original value of the pixel included). Median filters are quite popular for removing certain types of random noise, especially impulse or salt-and-pepper noise, which appears as white and black dots superimposed on the original image. The advantage of the median filter over the mean filter is that it provides excellent noise-reduction capability with considerably less blurring of the image. As the images used in our case do not contain impulse noise, the results of this filter are more effective than those of the mean filter. In the same example as above, the original centre pixel intensity value 10 is replaced with the statistical median of the neighbouring pixels, i.e. 50. Figure 1.2(b) shows the effect of this filter on a human head CT scan.

Fig. 1.2(b): Median filter operation

1.3 Histogram Equalization
In the context of image processing, the histogram is the operation that records the occurrence of each intensity value in an image: normally it is a graph plotting the intensity values on the x-axis against the number of occurrences of each value on the y-axis. Histogram equalization is a technique in which the dynamic range of the histogram of an input image is increased: it reassigns the intensity values of pixels in the input image so that the output image contains a uniform distribution of intensity values. This process improves the contrast of the input image.

1.4 Scaling Filter
In image processing, normalization is a simple enhancement technique that attempts to improve the contrast of an image (the difference in visual properties that makes an object distinguishable) by stretching the range of intensity values of the original image to span a desired range. This method is a linear process. Its applications include stretching poor-contrast images that have glare into a higher range so that they look better; for this reason normalization is also called contrast stretching. It differs from methods such as histogram equalization in that it only applies a linear scaling function to the image pixel values. ITK offers four different filters for casting the pixel values of an image; these filters only shift the range of the input image pixels so that the output image pixels are easier to use for different computation purposes.

1.4.1 Cast Image Filter
This is a simple image filter that acts pixel-wise on an input image, casting every pixel to the type of the output image. For example, an unsigned short input pixel type can be converted into an integer output pixel type using this filter. The general formula used by this filter is:

Output pixel = static_cast<OutputPixelType>(input pixel)
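As a concrete companion to the mean and median smoothing filters of Section 1.2.1, here is a minimal MATLAB sketch using Image Processing Toolbox calls (an assumption; the text above describes the equivalent ITK filters, and the input file name is illustrative).

% Minimal sketch: 3x3 mean and median smoothing of a grayscale image.
gray    = im2double(rgb2gray(imread('scene.png')));
meanImg = imfilter(gray, fspecial('average', [3 3]), 'replicate');  % 3x3 mean
medImg  = medfilt2(gray, [3 3]);                                    % 3x3 median
subplot(1,3,1); imshow(gray);    title('Original');
subplot(1,3,2); imshow(meanImg); title('Mean filtered');
subplot(1,3,3); imshow(medImg);  title('Median filtered');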
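Similarly, the histogram equalization of Section 1.3 and the linear contrast stretching of Section 1.4 can be sketched in two toolbox calls; again the file name is an assumption.

% Minimal sketch: equalization vs. linear contrast stretching.
gray    = im2double(rgb2gray(imread('scene.png')));
eqImg   = histeq(gray);                            % redistribute intensities toward a uniform histogram
stretch = imadjust(gray, stretchlim(gray), []);    % linear scaling of the occupied range
subplot(1,3,1); imshow(gray);    title('Original');
subplot(1,3,2); imshow(eqImg);   title('Equalized');
subplot(1,3,3); imshow(stretch); title('Contrast stretched');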

1.4.2 Rescale Intensity Image Filter
This filter maps the input range of pixel values to a user-specified output range of pixel values.

1.4.3 Shift Scale Image Filter
This filter gives the user more control over the parameters of the linear transformation applied during pixel conversion. The user provides the shift and scale values, from which the linear transformation is calculated. The transformation applied by this filter can be expressed as:

Output pixel = (input pixel + shift) * scale

1.4.4 Normalization Image Filter
This filter maps the values of the input pixels to output pixel values such that the grey levels of the output image have zero mean and unit variance. It uses a statistics image filter to compute the mean and variance of the input image and then applies a shift scale image filter to shift and scale the pixels. In our case this filter helps in clustering spatially close pixels with similar intensity values.

1.4.5 Re-sampling Image Filter
This filter plays an important role in the pre-processing of the input images. It is used in the first stage of pre-processing because most of the input images are not equi-spaced along the z-direction, and an equi-spaced input dataset is essential for applying different spatial filters. The filter resamples an existing image through some transform, interpolating via some image function; the choice of image function depends very much on the goal of the processing. The filter expects an image, a transform, and an interpolator as input. The space coordinates of the image are mapped through the transform to obtain the new image; the filter operates only on the space coordinates, not on the pixel/grid coordinates. The interpolator is required because mapping from one space to another often requires evaluating intensity values at non-grid positions.

1.5 Anisotropic Diffusion Image Filter
The drawback of image de-noising with linear filters is that they tend to blur the sharp boundaries in an image. In segmentation, however, it is important to retain the edges of an image in order to detect subtle aspects of the anatomical structures or shapes. To achieve this goal, the use of non-linear filters has been proposed. Most non-linear filters are built upon one of the following three strategies:
i. Heuristics: sets of rules on pixels designed specifically to achieve a particular result.
ii. Statistics: filters designed on robust statistics (e.g. the median) or using properties of stochastic processes.
iii. Partial differential equations (PDEs): the input is an initial value of a PDE, and the output comes from the solution of the PDE.

1.6 Image Segmentation
In image processing, segmentation falls into the category of extracting different attributes of an original image. Segmentation subdivides an image into its constituent regions or objects; the level to which this subdivision is carried out is problem-specific. The simplest of all segmentation methods is the threshold-based method, which uses either manually or automatically generated threshold values for segmentation: first the histogram of the image is computed, then a particular threshold (intensity) value is selected to segment the region.
However, with this method the intensity values often suffer from non-uniformly distributed contrast values inside the vessels, so for the segmentation of small vessel structures, global threshold-based methods are not useful. Segmentation itself comes in different types, given as under:
i. Edge detection
ii. Watershed segmentation
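The zero-mean, unit-variance mapping of the normalization image filter (Section 1.4.4) reduces to two lines of arithmetic; a minimal sketch, with an assumed input file:

% Minimal sketch: shift by the mean, then scale by the standard deviation.
gray    = im2double(rgb2gray(imread('scene.png')));
normImg = (gray - mean(gray(:))) / std(gray(:));
% mean(normImg(:)) is now approximately 0 and std(normImg(:)) approximately 1.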
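As a small illustration of the global threshold-based method of Section 1.6, the following sketch picks a threshold from the histogram with Otsu's method; toolbox availability and the file name are assumptions.

% Minimal sketch: histogram-based (Otsu) threshold segmentation.
gray = rgb2gray(imread('scene.png'));
t    = graythresh(gray);     % Otsu's method: threshold chosen from the histogram
mask = im2bw(gray, t);       % pixels above t become foreground, the rest background
imshow(mask);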

2. RELATED WORK
A lot of work has already been done in the area of image segmentation as well as object detection, and a number of algorithmic approaches have been proposed by different authors for image and video segmentation aimed at object detection. One such object segmentation approach is proposed by Johannes Schels [1]: a viewpoint-based analysis is performed to identify multiple objects in an image. The author detects an individual object as well as its parts by analysing the scene from different viewpoints, and finally combines the results from the different viewpoints to generate the overall segmented object. Another work in the same area was proposed by Ke Gao, who uses characteristic-mapping-based affine analysis along with sample expansion. The author presents characteristic mining to perform object detection and to obtain global and local stability models for the detection [2]. The author also proposes feature-based analysis to refine the object detection area. Another work on multi-class object detection was presented by LiMin Wang, who uses local features and context analysis. The complete image area is divided into smaller segments; the segmented image is then analysed over local patches, and based on this patch-based analysis the local appearance of the object is examined and the object area is detected accordingly [3]. Many authors have applied feature-based analysis to object detection, and similar work was performed by Yuli Gao, who defined a function selection approach for object detection. The author has defined a framework to identify multiple object

classes by performing image observation. The author presented a hybrid system based on feature analysis and diversity analysis over the image [4]. Object detection in videos by means of video-frame analysis has also been defined: a statistical measure is computed over the sequence of frames to identify the objects, and in this work a spatial-pyramid-based matching is also defined to identify objects in the plane [5]. For colour images and videos, a colour-thresholding mechanism has been adopted by different authors to perform colour-based segmentation. Mehdi Madani proposed a robust mechanism for object detection in real-time videos: a redistribution algorithm for an adaptive thresholding method applied to training data, whose robustness the author verifies under different lighting conditions [6]. Traffic analysis and vehicle object detection on roads was proposed by Taewan Kim using a learning-based analysis: the author built a system that performs dimension reduction based on a mutual-information analysis approach, with a Bayes classifier suggested for the learning [7]. Object detection and tracking in videos of luggage objects is suggested by Zhenke Yang; the segmentation approach includes blob creation, and a tracking algorithm is defined [8]. In the same direction, video-sequence analysis can be performed using background analysis, and Shashank Prasad has defined an effective approach for object recognition in real-time videos [9]. A local-feature-based analysis using an SVM approach has been defined to perform pixel-based classification, with a salient-point analysis proposed for object analysis and recognition in video frames [10]. A template-based match for object detection is proposed by Chengli Xie: pair-wise classification and instance detection over video, with a layered approach adopted for object classification and recognition [11].

3. THEORETICAL BACKGROUND
In the past, several background segmentation techniques have been used to identify the objects of interest in a scene. The object of interest can be defined as something that is different in a scene in comparison to previous scenes. This comparison is performed by comparing a scene containing an object to an ideal scene that contains only the background. The regions that show a considerable change between the current scene and the previous scenes are the areas of interest, as they may indicate the location of a new object. The term background segmentation refers to identifying the difference between an image and its background using any of the techniques mentioned in [4] and then thresholding the result to identify an object of interest.

3.1 Methods: Segmentation Techniques
Two techniques, frame differencing and Gaussian mixture, are generally used for background segmentation. Both have their respective advantages and shortcomings in an environment that involves a constantly moving background.

3.1.1 Frame Differencing: This is one of the most common techniques used in background segmentation. As the name itself suggests, frame differencing involves taking the difference between two frames and using this difference to detect the object. The approach we use is very similar to this and is a two-part process: first, the object is detected using frame differencing; then this detected object is compared with the ground truth to assess the reliability of the approach. Two consecutive frames are loaded from a given sequence of video frames and converted from colour to grayscale intensity. Otsu's method [12] is then used to determine the threshold value of the grayscale images: the threshold is chosen such that the pixel values on either side of it are established to be either background or foreground pixels [13]. Following Otsu's method, the two consecutive grayscaled images are differenced, and their absolute difference is used to identify the movement between frames. The noise introduced by differencing is removed by applying the threshold value to the images. The threshold value found in our case varies between 0.43 and 0.45. Pixels below the threshold are removed from the differenced frame, leaving behind the object of interest. As described in equation (1), the absolute difference between two frames must reach the threshold for the object to be detected:

|f1 - f2| >= T    (1)

Here f1 is the initial frame, f2 is the following frame, and T is the threshold value.

3.1.2 Gaussian Mixture: This is another technique commonly used for background segmentation. In their paper [15], Stauffer and Grimson suggest a probabilistic approach using a mixture of Gaussians for identifying the background and foreground objects. The probability of observing a given pixel value X_t at time t is given by [15]:

P(X_t) = Sum_{i=1..K} w_{i,t} * eta(X_t, mu_{i,t}, Sigma_{i,t})    (2)

where K is the number of Gaussians in the mixture (K varies depending on the memory allocated for the simulation), eta is a normalized Gaussian density, and w_{i,t}, mu_{i,t}, and Sigma_{i,t} represent the weight, mean, and covariance matrix of the i-th Gaussian at time t respectively. The weight indicates the influence of the i-th Gaussian at time t. In our case we choose K = 5 to maximize the distinction among pixel values. Since this is an iterative process, all the parameters are updated with the inclusion of every new pixel; before the update takes place, the new pixel is compared against each of the K existing Gaussians to see if it matches.
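A minimal MATLAB sketch of the frame-differencing step of Section 3.1.1 follows; the file name and frame indices are assumptions, and the Otsu call mirrors the text.

% Minimal sketch: difference two consecutive frames and threshold the result.
v  = VideoReader('input.avi');     % assumed AVI input, as used later in the paper
f1 = rgb2gray(read(v, 10));        % frame i (index is illustrative)
f2 = rgb2gray(read(v, 11));        % frame i+1
d  = imabsdiff(f1, f2);            % |f1 - f2| at every pixel, as in equation (1)
T  = graythresh(d);                % Otsu threshold; the paper reports values of 0.43-0.45
mask = im2double(d) >= T;          % keep only pixels whose change reaches the threshold
imshow(mask);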
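For the Gaussian mixture of Section 3.1.2, a hedged sketch is possible with the Computer Vision System Toolbox object vision.ForegroundDetector, which implements a Stauffer-Grimson-style mixture model (toolbox availability and the training-frame count are assumptions; K = 5 matches the text).

% Minimal sketch: per-pixel mixture-of-Gaussians background segmentation.
detector = vision.ForegroundDetector('NumGaussians', 5, 'NumTrainingFrames', 50);
v = VideoReader('input.avi');
while hasFrame(v)
    frame  = readFrame(v);
    fgMask = step(detector, frame);   % per-pixel background/foreground decision
    imshow(fgMask); drawnow;
end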


An algorithm can be judged on the properties of accuracy, speed, and memory requirements. The following table shows these properties for the respective algorithms.

Table 1: Comparison of techniques

From the table it is clear that frame differencing comes out better than the Gaussian mixture segmentation algorithm. Frame differencing is a very primitive technique that can be implemented very easily; in comparison, the Gaussian mixture approach requires several resources to be effective. Hence, frame differencing is preferred and used for the analysis performed in this study. This will help us identify an approach that would be most suitable to a given system's unique requirements.

3.2 Frame Difference vs Back Plate Difference: Frame differencing is likewise preferred over the back plate difference segmentation technique, based on the following observations:
(a) Back plate difference [15] was the fastest and produced the highest results in 4 out of 7 tests.
(b) Frame difference was the ONLY algorithm to correctly remove the complex background, but it could not correctly identify the foreground element.
The following images show the functioning of the frame difference and back plate difference segmentation algorithms.

Frame Difference:

Figure 3.1: Frame difference

Observations:
1. This algorithm correctly removes the complex background in a video frame.
2. It incorrectly removes the inside of an object.

Back Plate Difference:

Figure 3.2: Back plate difference

Observations:
1. It correctly identifies an object.
2. But it incorrectly keeps the complex background.

4. OUR RESEARCH DESIGN
Our project, Moving Object Detection, uses the MATLAB platform for the simulation. The step-by-step process is shown graphically in figure 4.1.

Figure 4.1: Block diagram for implementation of Moving Object Detection

The different phases involved in the implementation of this project are as follows:

4.1 Phase I: Pre-processing (Noise Removal)
Pre-processing performs some steps to improve the image quality. In this project, we have used the median filter to remove noise from the images.

4.2 Median Filter: The median filter is a classical noise-elimination filter. Noise is removed by calculating the median of all the elements in a box around each pixel and storing that value in the central element. Consider the example of a 3x3 matrix:

1 2 9
4 3 8
5 6 7

The median filter sorts the elements of the matrix and assigns the median value to the central pixel.

The sorted elements are 1, 2, 3, 4, 5, 6, 7, 8, 9, and the median 5 is assigned to the central element. A similar box scan is performed over the whole image, reducing the noise. The execution time is longer than that of the mean filter, since the algorithm involves sorting, but it removes the small pixel noise.

4.3 Algorithm for Noise Removal: A noise removal algorithm is developed using the median filter explained in Section 4.2. The algorithm for noise removal is as follows (a MATLAB sketch of this procedure is given below, after the methodology discussion):
1. Read the input image.
2. For (present position = initial position : final position):
(a) Scan all the surrounding elements.
(b) Use the bubble sort technique to sort all the values.
(c) Calculate their median.
3. Assign the median value to the present position of the input image.
Here the initial position is the 2nd row, 2nd column of the image resolution; the final position is the (n-1)th row, (n-1)th column; and the present position is the pixel position, varying from the initial to the final position. Values are taken from the 2nd row, 2nd column to the (n-1)th row, (n-1)th column because we need to scan all the surrounding elements of each pixel.

4.4 Phase II: Segmentation
Segmentation is the process of dividing a digital image into multiple regions; it exposes the objects and boundaries in an image. Each pixel in a region shares some characteristic such as colour or intensity. In this project, we have used frame difference to segment the image.

4.5 Frame Difference: Frame difference calculates the difference between two frames at every pixel position and stores the absolute difference. It is used for visualizing the moving objects in a sequence of frames and requires very little memory to perform the calculation. For example, if we take a sequence of frames, the present frame and the next frame are considered at every calculation, and then the frames are shifted (after the computation, the next frame becomes the present frame, and the frame that comes next in the sequence becomes the next frame). Figure 4.2 shows the frame difference between two frames.

4.6 Algorithm for Segmentation: To perform the segmentation operation, the frame difference algorithm is implemented, as it takes less processing time. The frame difference algorithm computes the separation of two sequential frames as follows:
1. Read the input images.
2. For (present position = initial position : final position):
(a) Calculate the difference between the pixel values at the current position of the two images.
(b) Take the absolute value.
(c) Store the difference in the new image at the same pixel position, i.e. at the present position.

4.7 Phase III: Feature Extraction
Feature extraction plays a major role in detecting the moving objects in a sequence of frames. Every object has a particular feature such as colour or shape, and in a sequence of frames any of these features may be used to detect the objects in a frame [14]. In this project, we have used a bounding box with the colour feature as the method for feature extraction.

Bounding Box: If the segmentation is performed using the frame difference, the residual image is visualized with a rectangular bounding box having the dimensions of the object produced from the residual image. For a particular image, a scan is performed over the positions where the intensity values of the image exceed a limit (which depends on the assigned value; for accuracy, assign the maximum). The boundaries of the different moving objects are then obtained using the bwboundaries function in MATLAB, and the size of each boundary is obtained from its result.
Each boundary with a minimum width and height of 20 pixels is then drawn on the present frame. Care should be taken that this boundary threshold is adjusted with respect to the complexity of the scene.

Methodology
The presented work detects objects in images and videos. It is implemented to detect objects such as a plane or a missile in space, so that some appropriate action can be taken at an early stage, and it aims to improve on the existing system in terms of efficiency and accuracy. In the presented system, morphological operators are used to perform the segmentation. The stages involved in this work are given as under. A layered approach is suggested, in which the foremost operation is to identify the object frame by performing moment analysis: feature extraction is performed on the video sequence to identify the object frame. Once the object frames are identified, the object area must be identified within the video frame. To separate the object area from the background scene, a colour model approach along with thresholding is implemented.

Figure 4.2: Frame difference between two frames
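As a concrete companion to the Phase I noise-removal algorithm of Section 4.3, here is a minimal MATLAB sketch; the input file name is an assumption, and MATLAB's sort replaces the bubble sort named in the text (the built-in medfilt2 would do the same job in one call).

% Minimal sketch: 3x3 median scan from the 2nd row/column to the (n-1)th.
A = im2double(rgb2gray(imread('frame.png')));   % assumed input frame
[n, m] = size(A);
B = A;                                 % output image; borders are left unchanged
for r = 2:n-1                          % 2nd row ... (n-1)th row, as in the algorithm
    for c = 2:m-1
        w = A(r-1:r+1, c-1:c+1);       % scan the surrounding elements
        s = sort(w(:));                % the paper uses bubble sort; sort() suffices
        B(r, c) = s(5);                % median of the 9 values goes to the centre
    end
end
% For the example matrix [1 2 9; 4 3 8; 5 6 7], s(5) = 5, as stated above.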
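Similarly, a hedged sketch of the Phase III bounding-box step (Section 4.7): the text names the bwboundaries function, while the file name, the mask threshold, and the drawing calls below are assumptions.

% Minimal sketch: draw boxes at least 20 pixels wide and high, as the text specifies.
currentFrame = imread('frame.png');               % assumed frame being annotated
mask = im2bw(rgb2gray(currentFrame), 0.44);       % assumed segmented mask (threshold from Section 3.1.1)
boundaries = bwboundaries(mask);                  % one Nx2 [row col] list per object
imshow(currentFrame); hold on;
for k = 1:numel(boundaries)
    b = boundaries{k};
    w = max(b(:,2)) - min(b(:,2));                % box width in pixels
    h = max(b(:,1)) - min(b(:,1));                % box height in pixels
    if w >= 20 && h >= 20                         % minimum size from the text
        rectangle('Position', [min(b(:,2)), min(b(:,1)), w, h], 'EdgeColor', 'r');
    end
end
hold off;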

Continuing the methodology: after the detection of the object area, the next task is to represent the object within a boundary area. For this, a morphological-operator-based approach is implemented to fill the smaller holes and to define a covering boundary around the object; the closing and dilation morphological operators are used to perform the final object detection. The algorithmic approach used in the presented work is shown in figure 4.3. As shown in the figure, the video frames are extracted from the video and the object frame is identified; just after this, the colour model is applied to separate the foreground object from the background; finally, a morphological operator approach along with feature analysis is implemented to perform the object detection.
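A minimal MATLAB sketch of this morphological post-processing follows; the mask file, threshold, and structuring-element size are assumptions.

% Minimal sketch: closing and dilation to fill small holes and cover the object.
mask   = im2bw(imread('diff.png'), 0.44);  % assumed binary foreground mask from the earlier stages
se     = strel('disk', 3);                 % assumed structuring element size
closed = imclose(mask, se);                % closing bridges small gaps inside the object
filled = imfill(closed, 'holes');          % fill remaining interior holes
object = imdilate(filled, se);             % dilation grows a covering boundary
imshow(object);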

Figure 5.1.1 Input video screenshot 1

Figure 5.1.2 Input video screenshot 2

Figure 5.1.3 Input video screenshot 3

Figure 5.1.4 Input video screenshot 4

Figure 4.3: Proposed work methodology

5. RESULTS
The proposed algorithm is tested with an input video file having the following properties:
Number of frames = 120
Frames per second = 30
Frame width = 360
Frame height = 288

5.1. Screenshots of the input video
Screenshots of the video used in this work are given below.

Figure 5.1.5 Input video screenshot 5

5.2. Screenshots of the results
When the code is executed in MATLAB, the video is first displayed in the output window, which contains the

screenshots shown in Section 5.1. The results of the algorithm are then displayed; some screenshots of the results are shown below:

Figure 5.2.6 Output screen 6

Figure 5.2.1 Output screen 1

Figure 5.2.7 Output screen 7

Figure 5.2.2 Output screen 2

6. FUTURE WORK
In this work, an object detection process based on a mathematical model is performed. The proposed work can be extended in different directions. One such direction is to extend the work to different classification approaches, such as the Bayes classifier, ART networks, etc. Another direction is to make the work handle different video formats; here the work is specific to AVI videos. The robustness of the work can also be tested on distorted images.

Figure 5.2.3 Output screen 3

7. CONCLUSION
In the presented work, a hybrid model is implemented to detect an object in an image or a video. The presented model uses mathematical approaches to perform the object detection over the image and the video. At the initial stage, as the image or the video is accepted from the user, pre-processing is performed to identify the image or video format and to extract the frames from the video. Once the extraction is performed, the next task is to remove the noise from the image to improve the accuracy level. At the next stage, the similarity analysis between the images is performed, and based on this analysis the object and the object features are identified. The obtained results show that the presented work is effective in detecting objects from images and videos.

Figure 5.2.4 Output screen 4

Figure 5.2.5 Output screen 5

REFERENCES
[1] Johannes Schels, "Synthetically trained multi-view object class and viewpoint detection for advanced

image retrieval", ICMR 11, April 17-20, Trento, Italy 978-1-4503-0336-1/11/04 [2] Ke Gao," Affine Stable Characteristic based Sample Expansion for Object Detection", CIVR10, July 57, 2010, Xian, China 978-1-4503-0117-6/10/07 pp 422-429 [3] LiMin Wang," Multiclass Object Detection by Combining Local Appearances and Context", MM11, November 28December 1, 2011, Scottsdale, Arizona, USA. 978-1-4503-0616-4/11/11 pp 1161-1164 [4] Yuli Gao," Automatic Function Selection for Large Scale Salient Object Detection", MM06, October 2327, 2006, Santa Barbara, California, USA 159593-447-2/06/0010 pp 97-100 [5] Tim Althoff," Detection Bank: An Object Detection Based Video Representation for Multimedia Event Recognition", MM12, October 29November 2, 2012, Nara, Japan 978-1-4503-1089-5/12/10 pp 1065-1068 [6] Mehdi Madani," Real Time Object Detection Using a Novel Adaptive Color Thresholding Method", UbiMUI11, December 1, 2011, Scottsdale, Arizona, USA 978-1-4503-0993-6/11/12 pp 13-16 [7] Taewan Kim," Example based Learning for Object Detection in Images", VNBA08, October 31, 2008, Vancouver, British Columbia, Canada 978-1-60558313-6/08/10 pp 39-46 [8] Zhenke Yang," Surveillance System Using Abandoned Object Detection", International Conference on Computer Systems and Technologies CompSysTech11 CompSysTech'11, June 1617, 2011, Vienna, Austria 978-1-4503-0917-2/11/06 pp 380-386 [9] Shashank Prasad," Real-time Object Detection and Tracking in an Unknown Environment", 2011 World Congress on Information and Communication Technologies 978-1-4673-0125-1@ 2011 IEEE pp 1060-1065 [10] Sameena Shah," Fast object detection using local feature-based SVMs", MDM07 August 12, 2007, San Jose, California, USA 978-1-59593-8374/07/0008 [11] Chengli Xie," Real-time Multiple Object Instances Detection", MM12, October 29November 2, 2012, Nara, Japan 978-1-4503-1089-5/12/10 pp 1301-1302 [12] Riano Lorenzo," A New Unsupervised Neural Network for Pattern Recognition with Spiking Neurons", 2006 International Joint Conference on Neural Networks Sheraton Vancouver Wall Centre Hotel, Vancouver, BC, Canada July 16-21, 2006 07803-9490-9/062006 IEEE [13] Deep J. Shah, Deborah Estrin Motion Based Bird Sensing Using Frame Differencing and Gaussian Mixture U C R Undergraduate Research Journal, Department of Electrical Engineering, University of California, Riverside, Nov 15, 2000 Volume 2, Issue 4 July August 2013 [14] R.Revathi, M.Hemalatha A Novel Approach For Object Detection and Tracking using IFL Algorithm (IJCSIS) International Journal of Computer Science and Information Security, Karpagam University Coimbatore, India Vol. 11, No. 4, April 2013. [15] Chris Stauer, W.E.L Grimson Adaptive background mixture models for real-time tracking", The Articial Intelligence Laboratory Massachusetts Institute of Technology Cambridge, A 02139.

