CHAPTER-1
INTRODUCTION
As a result of the increase in vehicle traffic, many problems have appeared, such as traffic
accidents, traffic congestion, and traffic-induced air pollution. Traffic congestion has been a
significantly challenging problem. It is widely recognized that expanding the basic transportation
infrastructure, adding more pavement and widening roads, has not been able to relieve city
congestion. As a result, many investigators have turned their attention to intelligent
transportation systems (ITS), for example predicting traffic flow by monitoring the activities at
traffic intersections to detect congestion. To better understand traffic flow, there is an increasing
reliance on wide-area traffic surveillance with improved vehicle detection.
Automatically detecting and tracking vehicles in video surveillance data is a very challenging
problem in computer vision with important practical applications, such as traffic analysis and
security. Video cameras are a relatively inexpensive surveillance tool, but manually reviewing
the large amount of data they generate is often impractical. Thus, algorithms that analyze video
with little or no human input are a good solution. Video surveillance systems focus on
background modeling, moving-vehicle classification, and tracking. The increasing availability
of video sensors and high-performance video processing hardware opens up exciting possibilities
for tackling many video understanding problems, among which vehicle tracking and target
classification are very important. A vehicle tracking and classification system is one that can
detect moving vehicles and further classify them into various classes.
Traffic management and information systems depend mainly on sensors for estimating
traffic parameters. In addition to vehicle counts, a much larger set of traffic parameters, such as
vehicle classifications and lane changes, can be computed. Vehicle detection and counting uses
a single camera, usually mounted on a pole or other tall structure, looking down on the traffic
scene. The system requires only the camera calibration parameters and the direction of traffic for
initialization. Two common themes in tracking traffic movement and recognizing accident
information from real-time video sequences are vehicle detection and vehicle tracking.
1.3 Objectives
Detection of multiple moving vehicles in a video sequence.
Tracking of the detected vehicles.
Color identification of vehicles.
Counting the total number of vehicles in videos.
1. Gupte S., Masoud O., Martin R. F. K. and Papanikolopoulos N. P. proposed "Detection
and Classification of Vehicles" in March 2002.
The paper presents algorithms for vision-based detection and classification of vehicles
in monocular image sequences of traffic scenes recorded by a stationary camera.
Processing is done at three levels: raw images, region level, and vehicle level. Vehicles
are modeled as rectangular patches with certain dynamic behavior. The proposed method
is based on establishing correspondences between regions and vehicles as the vehicles
move through the image sequence. Experimental results from highway scenes are
provided which demonstrate the effectiveness of the method. The paper also briefly
describes an interactive camera calibration tool developed for recovering the camera
parameters using image features selected by the user.
2. Toufiq P., Ahmed Elgammal and Anurag Mittal proposed "A Framework for Feature
Selection for Background Subtraction" in 2006.
Background subtraction is a widely used paradigm for detecting moving objects in
video taken from a static camera and is used for important applications such as
video surveillance and human motion analysis. Various statistical approaches have been
proposed for modeling a given scene background; however, there is no theoretical
framework for choosing which features to use to model different regions of the scene
background. The paper introduces a novel framework for feature selection for
background modeling and subtraction. A boosting algorithm, namely RealBoost, is used
to choose the best combination of features at each pixel. Given the probability estimates
from a pool of features calculated by Kernel Density Estimation (KDE) over a certain time
period, the algorithm selects the most useful ones to discriminate foreground objects from
the scene background. The results show that the proposed framework successfully selects
appropriate features for different parts of the image.
CHAPTER-2
The software and hardware requirements for the vehicle detection and counting method are
as follows:
Image Processing Toolbox supports a diverse set of image types, including high dynamic
range, gigapixel resolution, embedded profile, and tomographic images. Visualization functions
let users explore an image, examine a region of pixels, adjust the contrast, create contours or
histograms, and manipulate regions of interest (ROIs). With toolbox algorithms users can restore
degraded images, detect and measure features, analyze shapes and textures, and adjust color
balance.
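As a brief illustration of these toolbox functions, the following sketch (the file name
'traffic.jpg' is only a placeholder) reads an image, adjusts its contrast, and displays its
histogram:

% Sketch of basic Image Processing Toolbox calls.
img = imread('traffic.jpg');   % read an image from disk
gray = rgb2gray(img);          % convert it to gray scale
adjusted = imadjust(gray);     % stretch the contrast
figure, imhist(adjusted);      % display the intensity histogram
figure, imshow(adjusted);      % display the adjusted image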
CHAPTER-3
In the adaptive background subtraction algorithm, the first frame is assumed to be the
background for the video clips considered. The architecture of the proposed algorithm is shown
in Figure 3.1, and the flow of the background elimination algorithm is shown in Figure 3.2.
The video clip is read and converted into frames. In the first stage, the difference between
frames FR1 and FR1+j is computed. In the next stage these differences are compared, and in the
third stage pixels having the same values in the frame difference are eliminated. The fourth phase
is the post-processing stage executed on the image obtained in the third stage, the fifth phase is
vehicle detection and vehicle tuning, and the final stage is counting the vehicles.
Figures 3.1 and 3.2: frames FR1 and FR1+j feed background registration, followed by image
subtraction, foreground detection, image segmentation, and vehicle counting.
Often the vehicle may be the same color as the background, or some portion of it may have
merged with the background, which makes detecting the vehicle difficult. This leads to an
erroneous vehicle count.
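A minimal sketch of this background-elimination idea, assuming the first frame of a clip
(here the placeholder file 'traffic.avi') serves as the background and a fixed threshold is used:

% Sketch: foreground detection by differencing against the first frame.
v = VideoReader('traffic.avi');
background = rgb2gray(readFrame(v));            % first frame taken as background
while hasFrame(v)
    frame = rgb2gray(readFrame(v));
    diffImage = imabsdiff(frame, background);   % pixel-wise difference
    mask = diffImage > 30;                      % fixed threshold (tunable)
    mask = medfilt2(mask, [3 3]);               % suppress speckle noise
    imshow(mask); drawnow;
end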
The first step is the segmentation of vehicle regions of interest; regions which may
contain an unknown object have to be detected.
The next step focuses on the extraction of suitable features and then the extraction of
vehicles. The main purpose of feature extraction is to reduce the data by measuring
certain features that distinguish the input patterns, as illustrated in the sketch below.
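As an illustration of this feature-extraction step, the following sketch (assuming a binary
foreground mask named mask from the segmentation step) measures simple region features that
can distinguish input patterns:

% Sketch: measure simple region features from a binary foreground mask.
labeled = bwlabel(mask);
stats = regionprops(labeled, 'Area', 'BoundingBox', 'Centroid');
for k = 1:numel(stats)
    bb = stats(k).BoundingBox;
    aspectRatio = bb(3) / bb(4);    % width-to-height ratio of the region
    fprintf('Region %d: area = %d, aspect ratio = %.2f\n', ...
        k, stats(k).Area, aspectRatio);
end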
Moving vehicle detection is a key task in video analysis. It can be used in many areas such as
video surveillance, traffic monitoring and people tracking. There are three common motion
segmentation techniques: the frame difference, entropy mask and optical flow methods.
The frame difference method has low computational complexity and is easy to implement, but
generally does a poor job of extracting the complete shapes of certain types of moving vehicles.
Adaptive background subtraction uses the current frame and a reference image; wherever the
difference between the current frame and the reference frame is above a threshold, a moving
vehicle is assumed (a minimal sketch of this running-average update follows the list below).
The optical flow method can detect a moving vehicle even when the camera moves, but it needs
more time because of its computational complexity, and it is very sensitive to noise. The motion
area usually appears quite noisy in real images, and optical flow estimation involves only local
computation, so the optical flow method cannot detect the exact contour of the moving vehicle.
From the above observations, it is clear that the traditional moving vehicle detection methods
have some shortcomings:
• The frame difference method cannot detect the exact contour of the moving vehicle.
• The optical flow method is sensitive to noise.
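A minimal sketch of the running-average background update mentioned above, assuming
frames is a cell array of gray-scale frames and alpha is a tunable learning rate:

% Sketch: adaptive background subtraction by running average.
alpha = 0.5;                                % learning rate (tunable)
background = double(frames{1});             % first frame as initial background
for k = 2:numel(frames)
    current = double(frames{k});
    background = (1 - alpha) * current + alpha * background;  % adapt
    mask = abs(current - background) > 30;  % moving-vehicle pixels
end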
Video indexing, that is, automatic annotation and retrieval of videos in multimedia databases.
Human-computer interaction, that is, gesture recognition, eye gaze tracking for data input to computers, etc.
Traffic monitoring, that is, real-time gathering of traffic statistics to direct traffic flow.
Vehicle navigation, that is, video-based path planning and obstacle avoidance capabilities.
vehicle starts or ends within the image. The background of the image that contains the vehicle is
uniform, as it has already been set to white or black at the end of the first phase.
Figure 3.3: Vehicle counting. The video is converted into background and foreground frames;
subtraction is followed by conversion to a gray-scale image and then a binary image, image
segmentation, vehicle tracking (traversing the image and checking for a vehicle), and vehicle
counting.
The tracked binary image mask1 forms the input image for counting. The image is
scanned from top to bottom to detect the presence of a vehicle. Two variables are maintained:
count, which keeps track of the number of vehicles, and the count register variable countreg,
which contains the information of the registered vehicles. When a new vehicle is encountered, it
is checked to see whether it is already registered in the buffer; if the vehicle is not registered, it
is assumed to be a new vehicle and count is incremented, otherwise it is treated as a part of an
already existing vehicle and its presence is neglected. The concept is applied to the entire image,
and the final count of vehicles is present in the variable count. A fairly good accuracy of the
count is achieved, although sometimes, due to occlusions, two vehicles are merged together and
treated as a single entity.
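A minimal sketch of this counting logic, assuming the binary mask mask1 and using connected
components with a simple registration buffer in place of an explicit top-to-bottom scan (the
distance tolerance tol is an assumed parameter; the vectorized distance check relies on implicit
expansion, MATLAB R2016b or later):

% Sketch: count vehicles in the tracked binary mask 'mask1'.
stats = regionprops(bwlabel(mask1), 'Centroid');
countreg = zeros(0, 2);      % registered vehicle centroids (the buffer)
count = 0;
tol = 20;                    % registration distance tolerance, in pixels
for k = 1:numel(stats)
    c = stats(k).Centroid;
    if isempty(countreg) || all(sqrt(sum((countreg - c).^2, 2)) > tol)
        count = count + 1;           % unregistered: count as a new vehicle
        countreg(end+1, :) = c;      % and register it
    end
end
fprintf('Vehicles counted: %d\n', count);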
CHAPTER-4
DETAILED DESIGN
In the detailed design, the algorithm of each module used in this project is given and each
module is described in detail.
end for
Step 4: for i = 1 to k
Convert the images obtained in Step 3 from RGB to gray format and store them in a
three-dimensional array T[m, n, l].
end for
Array variables are initialized to read the video and to store two matrix values, the rows and
columns of a video frame. Each frame is read from the video clip and stored into the array, and
the array position is incremented to store the next frame; this process continues until the final
frame is read and stored. Each image is then converted from RGB to gray and stored in the
three-dimensional array, where m and n index the rows and columns and l indexes the particular
frame number.
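A sketch of this frame-reading and gray-conversion step, using the mmreader class documented
in the Appendices (the file name is a placeholder):

% Sketch: read all frames and store their gray-scale versions in T[m, n, l].
obj = mmreader('traffic.avi');
k = obj.NumberOfFrames;
m = obj.Height; n = obj.Width;
T = zeros(m, n, k, 'uint8');
for i = 1:k
    frame = read(obj, i);          % read the i-th RGB frame
    T(:, :, i) = rgb2gray(frame);  % convert and store it
end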
The hole regions of the image background are filled using the imfill function and stored into the
background array; finally the background image is displayed. The next step involves subtracting
every image from the background scene: the first frame is assumed to be the initial background,
and the resulting difference image is thresholded to determine the foreground image. A vehicle
is a group of pixels that move in a coherent manner, either as a lighter region over a darker
background or vice versa. Often the vehicle may be the same color as the background, or some
portion of it may have merged with the background, which makes detecting the vehicle difficult
and leads to an erroneous vehicle count.
Pixels that match the background are eliminated, otherwise the pixel value is stored into the
array c. The stored array c is then converted and a median filter is applied to remove the noise.
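A short sketch of the hole-filling and noise-removal steps just described, assuming a binary
image bw:

% Sketch: fill holes in the binary image, then remove speckle noise.
bw = imfill(bw, 'holes');      % fill enclosed background holes
bw = medfilt2(bw, [3 3]);      % median filter removes isolated noise pixels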
CHAPTER-5
IMPLEMENTATION REQUIREMENTS
The implementation of vehicle detection and counting on traffic video using image
processing with simulation software is as follows:
if nargout
[varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
gui_mainfcn(gui_State, varargin{:});
end
function MAINCODE_OpeningFcn(hObject, eventdata, handles, varargin)
% Executes just before the GUI is made visible: initialize the three
% display axes with blank (white) images.
handles.output = hObject;
a = ones(256,256);
axes(handles.axes1); imshow(a);
axes(handles.axes2); imshow(a);
axes(handles.axes3); imshow(a);
guidata(hObject, handles);    % save the updated handles structure

function varargout = MAINCODE_OutputFcn(hObject, eventdata, handles)
varargout{1} = handles.output;
% --- Executes on button press in pushbutton3.
function pushbutton3_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton3 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
close all;
a = ones(256,256);
axes(handles.axes1);
imshow(a);
axes(handles.axes2);
imshow(a);
axes(handles.axes3),imshow(a);
set(handles.text12,'string','');
set(handles.text13,'string','');
set(handles.text14,'string','');
set(handles.edit1,'string','');
Graphical objects in MATLAB are arranged in a structure called the Graphics Object
Hierarchy, which can be viewed as a rooted tree with the nodes representing the objects and the
root vertex corresponding to the root object. The form of such a tree depends on the particular
set of objects initialised, and the precedence relationship between its nodes is reflected in the
objects' properties Parent and Children. If, for example, the handle h of object A is the value of
the property Parent of object B, then A is the parent of B and, accordingly, B is a child of A. The
possibility of having one or another form of hierarchy depends on the admissible values of the
objects' properties Parent and/or Children. One also has to be aware of some specific rules
which apply to the use of particular objects. For example, it is recommended not to parent any
objects to the annotation axes and not to change the annotation axes' properties explicitly;
similarly, parenting annotation objects to standard axes may cause errors.
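A small illustration of the Parent/Children relationship (a figure containing an axes object):

% Illustration: the axes' Parent is the figure, and the axes appears
% among the figure's Children.
f = figure;
ax = axes('Parent', f);
isequal(get(ax, 'Parent'), f)    % returns true
any(get(f, 'Children') == ax)    % returns true

The adaptive background code used in this project is as follows: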
i = 1;
alpha = 0.5; count = 0;
while ~isDone(videoReader)
    thisFrame = step(videoReader);
    if i == 1
        Background = thisFrame;
    else
        % Change the background slightly at each frame:
        % Background(t+1) = (1-alpha)*Frame(t) + alpha*Background(t)
        Background = (1-alpha)*thisFrame + alpha*Background;
    end
    % Display the changing/adapting background.
    axes(handles.axes1), imshow(Background);
    title('Adaptive Background');
    % Calculate the difference between this frame and the background.
    differenceImage = thisFrame - Background;
    % Threshold with Otsu's method.
    grayImage = rgb2gray(differenceImage);           % convert to gray level
    thresholdLevel = graythresh(grayImage);          % get the threshold
    binaryImage = im2bw(grayImage, thresholdLevel);  % do the binarization
    % Clean the mask with morphological operations and fill holes.
    se = strel('square', 5);
    binaryImage = imdilate(binaryImage, se);
    binaryImage = imdilate(binaryImage, se);
    binaryImage = imopen(binaryImage, se);
    binaryImage = imclose(binaryImage, se);
    binaryImage = imfill(binaryImage, 'holes');
    blobAnalysis = vision.BlobAnalysis('BoundingBoxOutputPort', true, ...
        'AreaOutputPort', false, 'CentroidOutputPort', false, ...
        'MinimumBlobArea', 150);
    for g = 1:1    % placeholder upper bound (unspecified in the listing)
        set(handles.edit2, 'string', '1.red 2.blue 3.rdfdd ');
        pause(0.1);
    end
The stages are preprocessing, background subtraction with foreground detection, and data
validation. Preprocessing consists of a collection of simple image processing tasks that change
the raw input video into a format that can be processed by the subsequent stages. Background
modeling uses each new video frame to calculate and update a background model, which
provides a statistical description of the entire background scene. Foreground detection then
identifies pixels in the video frame that cannot be adequately explained by the background
model and outputs them as a binary candidate foreground mask. Finally, data validation
examines the candidate mask, eliminates those pixels that do not correspond to actual moving
vehicles, and outputs the foreground mask. Domain knowledge and computationally intensive
vision algorithms are often used in data validation. Real-time processing is still feasible, as these
sophisticated algorithms are applied only to the small number of candidate foreground pixels.
5.3 Pseudo code for detection and tracking
videoReader = vision.VideoFileReader(diry2);    % e.g. visiontraffic.avi
blobAnalysis = vision.BlobAnalysis('BoundingBoxOutputPort', true, ...
    'AreaOutputPort', false, 'CentroidOutputPort', false, ...
    'MinimumBlobArea', 150);
fontSize = 14;
i = 1;
alpha = 0.5; count = 0;
while ~isDone(videoReader)
    thisFrame = step(videoReader);
    if i == 1
        Background = thisFrame;
    else
        Background = (1-alpha)*thisFrame + alpha*Background;   % adapt, as above
    end
Moving vehicle detection approaches can be separated into two conventional classes: temporal
differencing, and background modeling and subtraction. The former approach is possibly the
simplest one and is capable of adapting to changes in the scene with a lower computational load;
however, the detection performance of temporal differencing is usually quite poor in real-life
surveillance applications. On the other hand, the background modeling and subtraction approach
has been used successfully in several algorithms. Background subtraction itself is easy: to
subtract a constant value, or a background with the same size as the image, one simply writes
img = img - background; imsubtract additionally makes sure that the output is zero wherever the
background is larger than the image. Background estimation is hard: one needs to know what
kind of image one is looking at, otherwise the background estimation will fail.
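A short illustration of this saturating behavior on uint8 images, where negative differences are
clamped to zero:

% uint8 subtraction saturates: negative differences become zero.
img = uint8([100 50]);
background = uint8([60 80]);
z = imsubtract(img, background)    % z = [40 0]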
Tracking moving vehicles in video streams has been an active area of research in computer
vision. A real-time system for measuring traffic parameters has been described in the literature
that uses a feature-based tracking method along with occlusion reasoning for tracking vehicles in
congested traffic scenes; in order to handle occlusions, vehicle sub-features are tracked instead
of entire vehicles. Tracking is usually performed in the context of higher-level applications that
require the location and/or shape of the vehicle in every frame, and assumptions are typically
made to constrain the tracking problem in the context of a particular application. There are three
key steps in video analysis: detection of interesting moving vehicles, tracking of vehicles from
frame to frame, and analysis of vehicle tracks to recognize their behavior. Otsu's method is an
image processing technique that converts a greyscale image into a purely binary image by
calculating a threshold that splits the pixels into foreground and background classes.
a = bb(2) + bb(4);                       % bottom row of the bounding box
b = bb(1) + bb(3);                       % right-most column
s = size(labeledImage, 1);
if a <= s && b <= s
    croppedImage  = grayImage(bb(2):bb(2)+bb(4), bb(1):bb(1)+bb(3), :);
    croppedImage1 = binaryImage(bb(2):bb(2)+bb(4), bb(1):bb(1)+bb(3), :);
else
    a = a - (a - s);                     % clamp the box to the image border
    b = b - (b - s);
    croppedImage  = grayImage(bb(2):a, bb(1):b, :);
    croppedImage1 = binaryImage(bb(2):a, bb(1):b, :);
end
croppedImage = imresize(croppedImage, [50, 50]);
cd database
imwrite(croppedImage, strcat(num2str(1), '.jpg'));   % save the vehicle template
cd ..
end
area = area(area > 150);         % keep only blobs larger than 150 pixels
[~, maxAreaIdx] = max(area);
if length(area) < cres
    cres = length(area);         % fewer blobs than registered: vehicles left
elseif length(area) > cres
    a = countnew(area, grayImage, stats);
    count1 = length(area) - cres;    % number of newly appeared blobs
    count = count + count1;
    cres = length(area);
end    % when length(area) == cres, count and cres are unchanged
end
function out = countnew(area, grayImage, stats)
out = 0;
for i = 1:length(area)
    bb = round(stats(i).BoundingBox);
    % note that regionprops switches x and y (it's a long story)
    a = bb(2) + bb(4);
    b = bb(1) + bb(3);
    s = size(grayImage, 1);
    if a <= s && b <= s
        croppedImage = grayImage(bb(2):bb(2)+bb(4), bb(1):bb(1)+bb(3), :);
    else
        a = a - (a - s);             % clamp the box to the image border
        b = b - (b - s);
        croppedImage = grayImage(bb(2):a, bb(1):b, :);
    end
    croppedImage = imresize(croppedImage, [50, 50]);
    cd database
    b1 = imread('1.jpg');            % stored vehicle template
    cd ..
    a1 = corr2(croppedImage, b1);    % correlate the crop with the template
    if a1 >= 0.45                    % correlation threshold
        out = out + 1;
    end
end
end
The tracked binary image mask1 forms the input image for counting. The binary image is
scanned from top to bottom to detect the presence of a vehicle. Two variables are maintained:
count, which keeps track of the number of vehicles, and the count register countreg, which
contains the information of the registered vehicles. When a new vehicle is encountered, it is first
checked to see whether it is already registered in the buffer; if the vehicle is not registered, it is
assumed to be a new vehicle and count is incremented, otherwise it is treated as part of an
already existing vehicle and its presence is neglected. The Blob Analysis block is used to
calculate statistics for the labeled regions in a binary image. The block returns quantities such as
the centroid, bounding box, label matrix, and blob count, and supports the input and output
needed for counting vehicles.
a = a - (a - s);                         % clamp the box to the image border
b = b - (b - s);
for j = 1:3
    croppedImage(:, :, j) = originalImage(bb(2):a, bb(1):b, j);
end
end
loc(1, i) = bb(2);                       % remember where the blob was found
loc(2, i) = bb(1);
croppedImage = croppedImage * 255;
r = mean2(croppedImage(:, :, 1));        % mean red component
g = mean2(croppedImage(:, :, 2));        % mean green component
b = mean2(croppedImage(:, :, 3));        % mean blue component
c(i1) = colur1(r, g, b);                 % nearest color name for this vehicle
i1 = i1 + 1;
end
clear croppedImage;
end
end
function c = colur1(r, g, b)
% Nearest-color lookup against the 24 Macbeth ColorChecker chips.
chipNames = {'DarkSkin'; 'LightSkin'; 'BlueSky'; 'Foliage'; 'BlueFlower'; ...
    'BluishGreen'; 'Orange'; 'PurplishBlue'; 'ModerateRed'; 'Purple'; ...
    'YellowGreen'; 'OrangeYellow'; 'Blue'; 'Green'; 'Red'; 'Yellow'; ...
    'Magenta'; 'Cyan'; 'White'; 'Neutral 8'; 'Neutral 65'; 'Neutral 5'; ...
    'Neutral 35'; 'Black'};
sRGB_Values = [ ...
    115,  82,  68;  194, 150, 130;   98, 122, 157;   87, 108,  67; ...
    133, 128, 177;  103, 189, 170;  214, 126,  44;   80,  91, 166; ...
    193,  90,  99;   94,  60, 108;  157, 188,  64;  224, 163,  46; ...
     56,  61, 150;   70, 148,  73;  175,  54,  60;  231, 199,  31; ...
    187,  86, 149;    8, 133, 161;  243, 243, 242;  200, 200, 200; ...
    160, 160, 160;  122, 122, 121;   85,  85,  85;   52,  52,  52];
r1 = abs(sRGB_Values(:, 1) - r);         % distance in the red channel
g1 = abs(sRGB_Values(:, 2) - g);         % distance in the green channel
b1 = abs(sRGB_Values(:, 3) - b);         % distance in the blue channel
a = [r1, g1, b1]';
[d, f] = min(sum(a));                    % chip with the smallest L1 distance
c = chipNames(f);
end
Many color spaces can separate the chromatic and illumination components. Maintaining a
standard model regardless of the brightness can lead to an unstable model, especially for very
bright or dark vehicles, and the conversion also requires computational resources, particularly
for large images. The idea of preserving the intensity components and saving computational
cost leads us back to the RGB space. As there is a requirement to identify moving shadows, we
need to consider a color model that can separate the standard and brightness components; it
should be compatible with, and make use of, our mixture model.
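As an illustration of separating chromatic and illumination components, a pixel can be
converted from RGB to HSV, where the V channel carries the brightness:

% Illustration: HSV separates chroma (H, S) from brightness (V).
rgbPixel = reshape([175 54 60] / 255, 1, 1, 3);   % a reddish pixel
hsvPixel = rgb2hsv(rgbPixel);
hue = hsvPixel(1);          % chromatic component
saturation = hsvPixel(2);   % chromatic component
value = hsvPixel(3);        % illumination component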
CHAPTER-6
TESTING
The actual purpose of testing is to discover errors. Testing is the process of trying to
discover every conceivable fault or weakness in a work product. It is the process of exercising
software with the intent of ensuring that the software system meets its requirements and user
expectations and does not fail in an unacceptable manner.
TYPES OF TESTING
Many types of testing methods are available; those mainly used here are as follows.
Unit testing involves the design of test cases that validate that the internal program logic
is functioning properly and that the program produces valid outputs. All decision branches and
internal code flow should be validated. It is the testing of individual software units of the
application, and it is done after the completion of an individual unit, before integration.
Test case    Result
1            Passed
2            Passed
3            Passed

Table 6.2: Testing for vehicle detection in the moving-cars video.

The above Table 6.2 gives the different test cases run on the videos at different frame
locations to detect the vehicles. In the moving-cars video, the frame sequences used are 70, 168
and 232 respectively.
Test case    Description                  Result
1            Moving vehicle tracking      Passed
2            Moving vehicle tracking      Passed
3            Moving vehicle tracking      Passed

Table 6.3: Testing for vehicle tracking by matrix scan of the moving cars.

The above Table 6.3 gives the different test cases run on the videos at different frames while
tracking multiple moving vehicles. In the moving-cars video, the frame sequences used are 70,
168 and 232 respectively.
Test case    Description                  Result
1            Moving vehicle counting      Passed
2            Moving vehicle counting      Passed
3            Moving vehicle counting      Passed

The above table shows the test cases for vehicle counting: in test case 1 the input frame is
frame 1 and the output counts one vehicle, so the test case is passed; the second and third test
cases also passed. The concept is applied to the entire image, and the final count of vehicles is
held in the variable count. A fairly good counting accuracy is achieved, although sometimes,
due to occlusions, two vehicles are merged together and treated as a single entity.
CHAPTER-7
RESULT
The graphical user interface is used to browse and select the required video from a directory
and to play the video; it is also used to clear and close all videos. The following objects are
grouped under the category of MATLAB User Interface Objects: UI controls (check boxes,
editable text fields, list boxes, pop-up menus, push buttons, radio buttons, sliders, static text
boxes, toggle buttons); UI toolbars (uitoolbar), which can parent objects of type uipushtool and
uitoggletool; UI context menus and UI menus, objects of type uicontextmenu and uimenu; and
the container objects uipanel and uibuttongroup.
By using the detection and counting methods, various results are obtained. The video sequences
taken contain moving cars and a walking person. These videos are processed to get detected and
extracted objects; the following snapshots show the results obtained in each step of the process.
The output of the segmentation is a binary vehicle mask, on which region extraction is
performed. In region tracking, we want to associate regions in frame i+1 with the regions in
frame i. This allows us to compute the velocity of a region as it moves across the image and also
helps in the vehicle tracking stage. There are certain problems that need to be handled for
reliable and robust region tracking.
Figure 7.2 shows the count of the total tracked vehicles that have passed up to the given frame.
Initially the count register is set to zero; when a moving vehicle is tracked, the count register is
incremented. When a new vehicle is encountered, it is checked to see whether it is already
registered in the buffer; if the vehicle is not registered, it is assumed to be a new vehicle and
the count is incremented, otherwise it is treated as part of an already existing vehicle and its
presence is neglected.
Each moving vehicle's color is identified and displayed as shown in Figure 7.3; the color of
the vehicles in the given video is determined by using a standard color model. Color spaces can
separate chromatic and illumination components, but maintaining a standard model regardless of
the brightness can lead to an unstable model, especially for very bright or dark vehicles, and the
conversion also requires computational resources, particularly for large images. The idea of
preserving the intensity components and saving computational cost leads us back to the RGB
space.
CHAPTER-8
CONCLUSION
A system has been developed to detect and count moving vehicles on highways efficiently.
The system effectively combines simple domain knowledge about vehicle classes with time-
domain statistical measures to identify target vehicles in the presence of partial occlusions and
ambiguous poses, while background clutter is effectively rejected. The experimental results
show that the accuracy of counting vehicles was 96%, although vehicle detection was 100%; the
difference is attributed to partial occlusions.
The computational complexity of our algorithm is linear in the size of a video frame and the
number of vehicles detected. As we have considered traffic on highways, there are no shadows
cast by objects such as trees, but sometimes, due to occlusions, two vehicles are merged together
and treated as a single entity.
Future Scope
Several future enhancements can be made to the system. The detection, tracking and
counting of moving vehicles can be extended to real-time live video feeds. Apart from detection
and extraction, recognition can also be performed; by using recognition techniques, the vehicle
in question can be classified. Recognition techniques would require an additional database to
match against the given vehicle. The system is designed for the detection, tracking and counting
of multiple moving vehicles; it can be further developed into an alarming system.
REFERENCES
[1] P. M. Daigavane and P. R. Bajaj, "Real Time Vehicle Detection and Counting Method for
Unsupervised Traffic Video on Highways".
[2] Chen S. C., Shyu M. L. and Zhang C., "An Intelligent Framework for Spatio-Temporal
Vehicle Tracking", In 4th International IEEE Conference on Intelligent Transportation Systems,
Oakland, California, USA, Aug. 2001.
[3] Gupte S., Masoud O., Martin R. F. K. and Papanikolopoulos N. P., "Detection and
Classification of Vehicles", In IEEE Transactions on Intelligent Transportation Systems, vol. 3,
no. 1, March 2002, pp. 37-47.
[4] Dailey D. J., Cathey F. and Pumrin S., "An Algorithm to Estimate Mean Traffic Speed Using
Uncalibrated Cameras", In IEEE Transactions on Intelligent Transportation Systems, vol. 1,
no. 2, pp. 98-107, June 2000.
[5] S. Cheung and C. Kamath, "Robust Techniques for Background Subtraction in Urban
Traffic Video", In Proceedings of Video Communications and Image Processing, SPIE
Electronic Imaging, San Jose, California, USA, Jan. 2004.
[6] N. Kanhere, S. Pundlik and S. Birchfield, "Vehicle Segmentation and Tracking from a Low-
Angle Off-Axis Camera", In IEEE Conference on Computer Vision and Pattern Recognition,
San Diego, June 2005.
[7] Deva R., David A. Forsyth and Andrew Z., "Tracking People by Learning their
Appearance", In IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29,
no. 1, Jan. 2007.
[8] Toufiq P., Ahmed Elgammal and Anurag Mittal, "A Framework for Feature Selection for
Background Subtraction", In Proceedings of the IEEE Computer Society Conference on
Computer Vision and Pattern Recognition (CVPR'06), 2006.
[9] P. KaewTraKulPong and R. Bowden, "An Improved Adaptive Background Mixture Model
for Real-time Tracking with Shadow Detection", In Proceedings of the 2nd European Workshop
on Advanced Video-Based Surveillance Systems, Sept. 2001.
APPENDICES
Functions
Functions differ from scripts: they take explicit input and output arguments. Like all other
MATLAB commands, a function can be called within a script or from the command line.
Functions used
There are many simulation functions; some of those used in this project are as follows.
1. mmreader class
Description
The mmreader function is used to create a multimedia reader object for reading video
files. Use mmreader with the read method to read video data from a multimedia file into the
MATLAB workspace.
2. read
video = read(obj)
video = read(obj, index)
Description
video = read(obj) reads in all video frames from the file associated with obj.
video = read(obj, index) reads only the specified frames; index can be a single number or a
two-element array representing an index range of the video stream.
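A short usage sketch of mmreader and read (the file name is a placeholder):

obj = mmreader('traffic.avi');    % open the video file
video = read(obj);                % read all frames
firstTen = read(obj, [1 10]);     % read only frames 1 through 10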
3. size
d = size(X)
[m,n] = size(X)
m = size(X,dim)
[d1,d2,d3,...,dn] = size(X),
Description
The size function returns array dimensions. d = size(X) returns the sizes of each dimension of
array X in a vector d with ndims(X) elements. If X is a scalar, which MATLAB software regards
as a 1-by-1 array, size(X) returns the vector [1 1]. [m,n] = size(X) returns the size of matrix X in
separate variables m and n. m = size(X,dim) returns the size of the dimension of X specified by
scalar dim.
[d1,d2,d3,...,dn] = size(X), for n > 1, returns the sizes of the dimensions of the array X in the
variables d1,d2,d3,...,dn, provided the number of output arguments n equals ndims(X). If n does
not equal ndims(X), the following exceptions hold:
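A short usage sketch of size:

X = zeros(3, 4);
d = size(X)         % d = [3 4]
[m, n] = size(X)    % m = 3, n = 4
m2 = size(X, 1)     % m2 = 3, the number of rows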
4. implay
implay
implay(filename)
implay(I)
implay(..., FPS)
Description
The implay function is used to play movies, videos, or image sequences.
implay opens a Movie Player for showing MATLAB movies, videos, or image sequences (also
called image stacks). Use the implay File menu to select the movie or image sequence that you
want to play. Use the implay toolbar buttons or menu options to play the movie, jump to a
specific frame in the sequence, change the frame rate of the display, or perform other exploration
activities. One can open multiple implay movie players to view different movies simultaneously.
implay(filename) opens the implay movie player, displaying the content of the file
specified by filename. The file can be an Audio Video Interleaved (AVI) file. implay reads one
frame at a time, conserving memory during playback. implay does not play audio tracks.
implay(I) opens the implay movie player, displaying the first frame in the multiframe
image array specified by I. I can be a MATLAB movie structure, or a sequence of binary,
grayscale, or truecolor images. A binary or grayscale image sequence can be an M-by-N-by-1-
by-K array or an M-by-N-by-K array. A true color image sequence must be an M-by-N-by-3-by-
K array.
implay(..., FPS) specifies the rate at which you want to view the movie or image
sequence. The frame rate is specified as frames-per-second. If omitted, implay uses the frame
rate specified in the file or the default value 20.
5. regionprops
Description
The regionprops function measures the properties of image regions.
STATS = regionprops(L, properties) measures a set of properties for each labeled region in the
label matrix L. Positive integer elements of L correspond to different regions. For example, the
set of elements of L equal to 1 corresponds to region 1; the set of elements of L equal to 2
corresponds to region 2; and so on.
STATS = regionprops(..., I, properties) measures a set of properties for each labeled region in the
image I. The first input to regionprops—either BW, CC, or L—identifies the regions in I. The
sizes must match: size(I) must equal size(BW), CC.ImageSize, or size(L).
STATS is a structure array with length equal to the number of objects in BW,
CC.NumObjects, or max(L(:)). The fields of the structure array denote different properties for
each region, as specified by properties.
Properties
Properties can be a comma-separated list of strings, a cell array containing strings, the
single string 'all', or the string 'basic'. If properties is the string 'all', regionprops computes all
the shape measurements listed in Shape Measurements; if called with a grayscale image,
regionprops also returns the pixel value measurements listed in Pixel Value Measurements. If
properties is not specified or is the string 'basic', regionprops computes only the 'Area',
'Centroid', and 'BoundingBox' measurements. The following properties can be calculated on N-D
inputs: 'Area', 'BoundingBox', 'Centroid', 'FilledArea', 'FilledImage', 'Image', 'PixelIdxList',
'PixelList', and 'SubarrayIdx'.
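A short usage sketch of regionprops on a labeled binary image:

BW = logical([0 0 1 1; 0 0 1 1; 1 0 0 0; 1 0 0 0]);
L = bwlabel(BW);                              % label the two regions
stats = regionprops(L, 'Area', 'Centroid');
stats(1).Area                                 % area of the first region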
6. floor
B = floor(A)
Description
The floor function rounds toward negative infinity. B = floor(A) rounds the elements of A to the
nearest integers less than or equal to A. For complex A, the imaginary and real parts are rounded
independently.
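A short usage sketch of floor:

floor(2.7)          % ans = 2
floor(-2.3)         % ans = -3
floor(3.1 + 4.9i)   % ans = 3 + 4i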