
DIGITAL IMAGE PROCESSING: APPLICATION FOR

ABNORMAL INCIDENT DETECTION


K. Pradeep Kumar                        K. Sai Prasad
IV/IV B.Tech                            IV/IV B.Tech
RVR & JC College of Engineering         RVR & JC College of Engineering
Guntur                                  Guntur
Ph: 9347075053                          saik_882@yahoo.co.in
pradeep_ec865@rediffmail.com

Abstract- Intelligent vision systems (IVS) represent an exciting part of modern sensing, computing, and engineering systems. The principal information source in an IVS is the image, a two-dimensional representation of a three-dimensional scene. The main advantage of IVS systems is that the information is in a form that can be interpreted by humans. Our paper presents an image processing application for abnormal incident detection, which can be used in high-security installations, subways, etc. In our work, motion cues are used to classify dynamic scenes and subsequently allow the detection of abnormal movements, which may be related to critical situations. Successive frames are extracted from the video stream and compared. By subtracting the second image from the first, a difference image is obtained. This is then segmented to aid error measurement and thresholding. If the threshold is exceeded, the human operator is alerted so that he or she may take remedial action. Thus, by suitably processing the input image, our system alerts operators to any abnormal incidents that might lead to critical situations.

1. INTRODUCTION

1.1. Need for automated surveillance

Motion-based automated surveillance or intelligent scene-monitoring systems were introduced in the recent past. Video motion detection and other similar systems aim to alert operators or start a high-resolution video recording when the motion conditions of a specific area in the scene change. In recent years, interest in automated surveillance systems has grown dramatically, as advances in image processing and computer hardware technologies have made it possible to design intelligent incident detection algorithms and implement them as real-time systems. The need for such equipment has been obvious for quite some time now, as human operators are unreliable, fallible and expensive to employ.

1.2. Motion analysis for incident detection

Interest in motion processing has increased with advances in motion analysis methodology and processing capabilities. The concept of automated incident detection is based on the idea of finding suitable image cues that can represent the specific event of interest with minimum overlap with other classes. In this paper, motion is adopted as the main cue for abnormal incident detection.

1.3. Image acquisition

Obtaining the images is the first step in implementing the system.

1.4. Camera position

The camera is placed at a fixed height in the subway or corridor. This position need not be changed during the course of operation.

fig1. Camera Position

1.5. Frame extraction

In this system, motion is used as the main cue for abnormal incident detection. It follows that the first concern is obtaining the required images from the source. In the circumstances described (subways, high-security installations), a closed-circuit television system is usually employed.
Ordinary video systems use 25 frames per second. The system described here extracts scene motion information at the rate of 8.33 times per second, which amounts to capturing one frame out of every three produced by the video camera system. In practical real-time operation, a hardware block-matching motion detector is used for frame extraction.
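The frame subsampling described above can be sketched in a few lines. This is a minimal illustration only (not the paper's hardware block-matching detector), assuming frames arrive as an ordinary Python iterable at 25 frames per second:

```python
def subsample_frames(frames, keep_every=3):
    """Yield every third frame: 25 fps input -> ~8.33 fps output.

    `frames` is any iterable of frames; `keep_every` is the decimation
    factor (3 reproduces the paper's 25 / 3 = 8.33 frames per second).
    """
    for index, frame in enumerate(frames):
        if index % keep_every == 0:
            yield frame

# One second of video: 25 stand-in frames; 9 of them are kept,
# i.e. an average of 8.33 frames per second.
frames = list(range(25))
kept = list(subsample_frames(frames))
```

In a real deployment the decimation would be done in hardware, as the paper notes; the generator above only shows the rate arithmetic.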

2. THE DIFFERENCE IMAGE

There are two major approaches to extracting two-dimensional motion from image sequences: optical flow and motion correspondence. Simple subtraction of images acquired at different instants in time makes motion detection possible when there is a stationary camera and constant illumination. Both of these conditions are satisfied in the areas of application of our system.
A difference image is nothing but a binary image d(i, j) in which non-zero values represent image areas with motion, that is, areas where there was a substantial difference between the gray levels in consecutive images p1 and p2:

d(i, j) = 0 if | p1(i, j) - p2(i, j) | <= ε
        = 1 otherwise

where ε is a small positive number. Figure 3 shows the resultant image obtained by subtracting images 1 and 2; the second image is a slightly displaced version of the first, showing the initial and final positions of the subject. The threshold level used in the system is 0.8, which is found to be sufficient for obtaining a good binary difference image.
The system errors mentioned above must be suppressed. If it is required to find the direction of motion, this can be done by constructing a cumulative difference image from a sequence of images. This, however, is not necessary in our system, as the direction of motion is invariably the same.
Obtaining the difference image is simplified by the MATLAB Image Processing Toolbox. The input images are read using the 'imread' function and converted to a binary image using the 'im2bw' function. The 'im2bw' function first converts the input image to gray scale. The output binary image BW is 0.0 (black) for all pixels in the input image with luminance less than a user-defined level and 1.0 (white) for all other pixels.
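The difference-image definition above can be sketched directly. This is an illustrative NumPy stand-in for the MATLAB 'imread'/'im2bw' pipeline; the value of eps below is an assumed parameter for the toy example, not the paper's 0.8 level:

```python
import numpy as np

def difference_image(p1, p2, eps=0.1):
    """Binary difference image: 1 where |p1 - p2| > eps, else 0.

    p1, p2 are grayscale frames scaled to [0, 1]; eps plays the role
    of the small positive number in the paper's definition of d(i, j).
    """
    return (np.abs(p1.astype(float) - p2.astype(float)) > eps).astype(np.uint8)

# Toy example: a bright 2x2 block moves one column to the right,
# so only the trailing and leading columns of the block differ.
p1 = np.zeros((4, 4)); p1[1:3, 0:2] = 1.0
p2 = np.zeros((4, 4)); p2[1:3, 1:3] = 1.0
d = difference_image(p1, p2)
```

Only the pixels the block vacated and the pixels it newly covers are non-zero, which is exactly the motion cue the system thresholds.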
2.1. Segmentation details

There are two basic forms of segmentation:
1. Complete segmentation.
2. Partial segmentation.

2.1.1. Complete and partial segmentation

Complete segmentation results in a set of disjoint regions corresponding uniquely with the objects in the input image. In partial segmentation, the regions may not correspond directly with the image objects. If partial segmentation is the goal, an image is divided into separate regions that are homogeneous with respect to a chosen property such as brightness, color, reflectivity, texture, etc.
Segmentation methods can be divided into three groups according to the dominant features they employ: the first uses global knowledge about an image or its part; edge-based segmentation forms the second group, and region-based segmentation the third. In the second and third groups, each region can be represented by its closed boundary and each closed boundary describes a region. Edge-based segmentation methods find the borders between regions, while region-based methods construct regions directly.
Region growing techniques are generally better in noisy images where borders are not very easy to detect. Homogeneity is an important property of regions and is used as the main segmentation criterion in region growing, where the basic idea is to divide an image into zones of maximum homogeneity.
A complete segmentation of an image R is a finite set of regions R1, ..., RS:

R = U(i = 1..S) Ri,   Ri ∩ Rj = Φ, i ≠ j

Further, for region-based segmentation, the following conditions need to be satisfied:

H(Ri) = TRUE, i = 1, 2, ..., S
H(Ri U Rj) = FALSE, i ≠ j, Ri adjacent to Rj

where S is the total number of regions in the image and H(Ri) is a binary homogeneity evaluation of region Ri. The resulting regions of the segmented image must be both homogeneous and maximal, where 'maximal' means that the homogeneity criterion would no longer hold after merging a region with any adjacent region.

2.2. Region merging and splitting

The basic approaches to region-based segmentation are
- Region merging
- Region splitting
- Split-and-merge processing.
Region merging starts with an over-segmented image and merges similar or homogeneous regions to form larger regions until no further merging is possible. Region splitting is the opposite of region merging. It begins with an under-segmented image where the regions are not homogeneous. The existing image regions are sequentially split to form proper regions.

2.3. Region growing and segmentation

Our system uses the region growing segmentation method to divide the image into regions. In region growing segmentation, a seed point is first chosen in the image. Then the eight neighbours of the pixel are checked for a specific threshold condition. If the condition is satisfied, the neighbour is incorporated as part of the region. This process is repeated for each of the eight neighbours and continues until every pixel has been checked and the whole image has been segmented into regions.
In our system, the MATLAB function 'bwlabel' performs region-growing segmentation. This function accepts the image to be segmented as input and returns a matrix representing the segmented image along with the number of segments. It is to be noted that the image at this stage of processing is a binary image with only two levels: black (0) and white (1).

2.3.1. Segmentation algorithm

- An initial set of small areas is iteratively merged according to similarity constraints.
- Start by choosing an arbitrary seed pixel and compare it with neighbouring pixels.
- The region is grown from the seed pixel by adding in neighbouring pixels that are similar, increasing the size of the region.
- When the growth of one region stops, we simply choose another seed pixel which does not yet belong to any region and start again.
- The whole process is continued until all pixels belong to some region.

A portion of the segment matrix corresponding to an example difference image is shown below, one block per segment as labelled by 'bwlabel'.

Segment 1:

0 0 0 0 0 0 0 0 1 1 1 1
0 0 0 0 0 0 0 0 1 1 1 1
0 0 0 0 0 0 0 1 1 1 1 1
0 0 0 0 0 0 0 1 1 1 1 1
0 0 0 0 0 0 0 1 1 1 1 1
0 0 0 0 0 0 0 1 1 1 1 1
0 0 0 0 0 0 0 1 1 1 1 1
0 0 0 0 0 0 0 1 1 1 1 1
0 0 0 0 0 0 0 1 1 1 1 1
0 0 0 0 0 0 0 1 1 1 1 1
0 0 0 0 0 0 0 1 1 1 1 1
0 0 0 0 0 0 1 1 1 1 1 1
0 0 0 0 0 0 1 1 1 1 1 1
0 0 0 0 0 0 1 1 1 1 1 1
0 0 0 0 0 1 0 1 1 1 1 1
0 0 0 0 0 1 1 0 0 0 0 0

Segment 2:

0 0 0 0 2 2 2 2 0 0 0 0
0 0 0 0 2 2 2 2 2 2 2 2
0 0 0 2 2 2 2 2 2 2 2 2
0 0 0 0 2 2 2 2 2 2 2 2
0 0 0 0 2 2 2 2 2 2 2 2
0 0 0 0 2 2 2 2 2 2 2 2
0 0 0 0 2 2 2 2 2 2 2 2
0 0 0 0 2 2 2 2 2 2 2 2
0 0 0 0 2 2 2 2 2 2 2 2
0 0 0 0 2 2 2 2 2 2 2 2
0 0 2 0 2 2 2 2 2 2 2 2
0 0 2 2 2 2 2 2 2 2 2 2
0 0 0 2 2 2 2 2 2 2 2 2
0 0 0 0 2 2 2 2 2 2 2 2
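The labelling step above can be sketched as follows. This is an illustrative stand-in for MATLAB's 'bwlabel', assuming a binary NumPy image and 8-connectivity as described in section 2.3; it is a sketch, not the system's implementation:

```python
import numpy as np

def label_regions(bw):
    """Label 8-connected foreground regions of a binary image.

    Returns (labels, n) in the spirit of 'bwlabel': labels is an integer
    matrix where pixels of the k-th region hold the value k, and n is
    the number of regions found.
    """
    labels = np.zeros(bw.shape, dtype=int)
    n = 0
    for i in range(bw.shape[0]):
        for j in range(bw.shape[1]):
            if bw[i, j] and labels[i, j] == 0:
                n += 1                      # new seed pixel -> new region
                stack = [(i, j)]
                labels[i, j] = n
                while stack:                # grow the region from the seed
                    y, x = stack.pop()
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < bw.shape[0] and 0 <= nx < bw.shape[1]
                                    and bw[ny, nx] and labels[ny, nx] == 0):
                                labels[ny, nx] = n
                                stack.append((ny, nx))
    return labels, n

# Two separate blobs -> two segments, as in the segment matrices above.
bw = np.array([[1, 1, 0, 0],
               [1, 1, 0, 0],
               [0, 0, 0, 1],
               [0, 0, 1, 1]])
labels, n = label_regions(bw)
```

Each unlabelled foreground pixel acts as a fresh seed, and its region is grown by the stack-based flood fill, mirroring the seed-and-grow bullets of section 2.3.1.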

3. THRESHOLDING AND ABNORMAL INCIDENT DETECTION

3.1. Need for thresholding

The process of segmentation aids in extracting the required information separately – in this case, the segments are a representation of the amount of motion of the subjects in the scene from one frame to the next.

3.2. Thresholding algorithm

1. Get the total number of segments, k.
2. Repeat steps 3 to 7 for all k segments.
3. Scan the matrix to find the kth segment.
4. Store the column indices of the kth segment.
5. Find the maximum and minimum index values; subtract to find their difference.
6. If the difference is greater than or equal to 16 pixels, sound an alarm to alert the human operator.
7. Continue with the next segment.
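The steps above can be sketched as follows, a minimal illustration assuming the segment matrix comes from the labelling stage and using the paper's 16-pixel threshold; the function and variable names are hypothetical:

```python
import numpy as np

ALARM_WIDTH = 16  # pixels; the threshold used by the paper's system

def abnormal_segments(labels, threshold=ALARM_WIDTH):
    """Return labels of segments whose column extent reaches the threshold.

    For each segment k, the column indices of its pixels are collected
    (steps 3-4); if max - min >= threshold (steps 5-6), the segment is
    flagged - in the real system this sounds the operator alarm.
    """
    flagged = []
    n = labels.max()                      # total number of segments (step 1)
    for k in range(1, n + 1):             # steps 2-7: loop over segments
        cols = np.where(labels == k)[1]   # column indices of segment k
        if cols.size and cols.max() - cols.min() >= threshold:
            flagged.append(k)
    return flagged

# Segment 1 spans columns 5..24 (extent 19 >= 16 -> alarm);
# segment 2 spans columns 1..4 (extent 3 < 16 -> no alarm).
labels = np.zeros((5, 30), dtype=int)
labels[2, 5:25] = 1
labels[3, 1:5] = 2
alerts = abnormal_segments(labels)
```

A wide column extent in the difference image corresponds to a large displacement between consecutive frames, which is why it serves here as the abnormal-motion cue.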
An example of how the differences are stored in the form of a column vector is shown below. If any value in the difference matrix is greater than or equal to 16, the human operator is alerted.

Sample difference matrix:

 6 20
 8 20
13 20
17 20
20 20
20 20
20 20
20 20
20 20
20 18
20 11

In the sample matrix for the difference image shown above, the threshold of 16 pixels is exceeded and the human operator is alerted.

3.3. Results

First image

Second image

Difference image

4. ADVANTAGES

The system we have explained can be used, as mentioned earlier, as an efficient and easily implementable pedestrian monitoring system in subways. It can quickly detect any fast or abnormal movement which may lead to dangerous situations. Further, surveillance by humans depends on the quality of the human operator, and factors such as operator fatigue and negligence may degrade performance. These factors make an intelligent vision system a better option.

5. CONCLUSION

By suitably processing motion cues in the input video, the system described alerts operators to abnormal incidents that might lead to critical situations, without the operator fatigue and negligence that limit purely human surveillance. Motion analysis of this kind also underlies related applications, such as systems that use the gait signature for recognition and vehicle video sensors for driver assistance.
