
Existing System

Answering queries based on objects that are "alike" but perhaps not exactly the "same" is known as similarity search. It has been widely used to simulate the process of object proximity ranking performed by human specialists, for example in image retrieval and time series matching. Nowadays, rapid advances in multimedia and network technologies have popularized many applications of video databases, and sophisticated techniques for representing, matching, and indexing videos are in high demand. A video sequence is an ordered set of a large number of frames, and from the database research perspective each frame is usually represented by a high-dimensional vector extracted from low-level content features, such as color distribution, texture pattern, or shape structure within the original media domain. Matching of videos is therefore often translated into searches among these feature vectors. It is usually undesirable to manually check whether a video is part of a long stream by browsing its entire length; thus, a reliable solution for automatically finding similar content is imperative. Video subsequence identification involves locating the position of the part most similar to a user-specified query clip Q within a long prestored video sequence S. Ideally, it can identify relevant video even in the presence of transformation distortion, partial content reordering, insertion, deletion, or replacement.
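To make the frame-level notion of similarity concrete, the following is a minimal sketch, assuming each frame has already been reduced to a fixed-length feature vector (e.g., a color histogram) and that two frames count as "similar" when their Euclidean distance falls below a threshold. The class and method names, and the use of plain double[] vectors, are illustrative choices, not the paper's implementation.

public final class FrameDistance {

    /** Euclidean (L2) distance between two frame feature vectors. */
    public static double euclidean(double[] a, double[] b) {
        if (a.length != b.length) {
            throw new IllegalArgumentException("dimension mismatch");
        }
        double sum = 0.0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    /** Similarity predicate: frames match when close enough in feature space. */
    public static boolean similar(double[] a, double[] b, double epsilon) {
        return euclidean(a, b) <= epsilon;
    }
}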

Extensive research efforts have been made on extracting and matching content-based signatures to detect copies of videos. One line of work employs ordinal measures for video sequence matching. Naphade et al. developed an efficient scheme to match video clips using color histogram intersection. Pua et al. proposed a method based on color moment features to search for a video copy within a long segmented sequence. Hampapur et al. examined several methods that use a sequence of frame features (ordinal, motion, or color signatures) to leverage the characteristics of sequence-to-sequence matching; in their work, the query sequence slides frame by frame over the database video within a fixed-length window (a sketch of this scheme is given below). To address not only distortions introduced by different encoding parameters but also display format conversions, such as different aspect ratios (letter-box, pillar-box, or other styles), Kim and Vasudev proposed using spatiotemporal ordinal signatures of frames.
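The fixed-window sliding scheme described above can be sketched as follows. This is an assumption-laden illustration: the frame signatures are modeled as plain feature vectors, and the accumulated Euclidean distance stands in for whichever signature distance (ordinal, motion, or color) a given method actually uses.

public final class SlidingWindowMatcher {

    /** Offset in 'database' whose fixed-length window best matches 'query'. */
    public static int bestOffset(double[][] query, double[][] database) {
        int best = -1;
        double bestDist = Double.MAX_VALUE;
        // slide the query frame by frame over the database video
        for (int start = 0; start + query.length <= database.length; start++) {
            double dist = 0.0;
            for (int i = 0; i < query.length; i++) {
                dist += euclidean(query[i], database[start + i]);
            }
            if (dist < bestDist) {
                bestDist = dist;
                best = start;
            }
        }
        return best;
    }

    private static double euclidean(double[] a, double[] b) {
        double sum = 0.0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }
}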
Since the process of video transformation can give rise to several distortions, techniques that circumvent these variations by means of global signatures have been considered. They tend to depict a video globally rather than focusing on its sequential details. Properties that are likely to be preserved even under these variations (e.g., shot length information) were suggested as compact signatures, and string matching techniques could then be used to report such a copy.

System Specification:

Hardware Requirements:

Processor Type : Pentium 4
Processor Speed : 3.4 GHz
RAM : 2 GB
Hard Disk Capacity : 160 GB
Monitor : Acer
Mouse : Logitech
Keyboard : TVS

Software Requirements:
Operating System : Windows 2000, XP.

Programming Tool : JDK 1.6

Run - Time : JRE 1.6

Modules

Frame Acquisition

Frame Acquisition is a significant super-set of the support for digital still imaging drivers that
was provided by the Still Image Architecture (STI) in Windows. Whereas STI only provided a
low-level interface for doing basic transfers of data to and from the device (as well as the
invocation of an image scan process on the Windows machine through the external device), FA
provides a framework through which a device can present its unique capabilities to the operating
system, and applications can programmatically take advantage of those features. According to
Microsoft, FA drivers are made up of a user interface (UI) component and a driver core
component, loaded into two different process spaces: UI in the application space and the driver
core in the FA service space. Intuitively, the unmatched and sparsely matched parts can be
directly discarded, as they clearly suggest there are no possible subsequences similar to Q,
because a necessary condition for a subsequence ~S to be similar to Q is they share sufficient
number of similar frames
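The necessary condition just stated suggests a coarse pruning pass. The sketch below keeps only those windows of S that share at least minMatches similar frames with Q; the window length (taken equal to Q's length here), the threshold epsilon, and minMatches are all illustrative assumptions.

import java.util.ArrayList;
import java.util.List;

public final class CandidateFilter {

    /** Offsets of windows in S that share enough similar frames with Q. */
    public static List<Integer> survivingOffsets(double[][] q, double[][] s,
                                                 double epsilon, int minMatches) {
        List<Integer> offsets = new ArrayList<Integer>();
        for (int start = 0; start + q.length <= s.length; start++) {
            int matches = 0;
            for (int i = 0; i < q.length; i++) {
                // count window frames that have at least one similar query frame
                for (int j = 0; j < q.length; j++) {
                    if (euclidean(q[j], s[start + i]) <= epsilon) {
                        matches++;
                        break;
                    }
                }
            }
            if (matches >= minMatches) {
                offsets.add(start); // window may still contain a similar subsequence
            }
        }
        return offsets;
    }

    private static double euclidean(double[] a, double[] b) {
        double sum = 0.0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }
}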
Frame Feature Extraction

Frame Information Extraction (FIE) is a type of image information retrieval whose goal is to automatically extract structured pixel information, i.e., categorized, contextually and semantically well-defined data from a certain domain, from unstructured machine-readable images. Each frame can be placed as a node along the temporal line of a video. Given a query clip Q and a database video S, a short line and a long line can be abstracted, respectively. Hereafter, each frame is no longer modeled as a high-dimensional point as in the preliminary step, but simply as a node. Q and S, which are two finite sets of nodes ordered along their temporal lines, are treated as the two sides of a bipartite graph.
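As a concrete example of the feature extraction that precedes this graph modeling, the sketch below computes a coarse RGB color histogram per frame, one of the low-level features (color distribution) mentioned earlier. The choice of four bins per channel, yielding a 64-dimensional normalized vector, is an illustrative assumption.

import java.awt.image.BufferedImage;

public final class ColorHistogram {

    private static final int BINS = 4; // bins per channel (illustrative choice)

    /** Quantized, normalized RGB histogram of one video frame. */
    public static double[] extract(BufferedImage frame) {
        double[] hist = new double[BINS * BINS * BINS];
        int w = frame.getWidth();
        int h = frame.getHeight();
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int rgb = frame.getRGB(x, y);
                // quantize each 8-bit channel into BINS bins
                int r = ((rgb >> 16) & 0xFF) * BINS / 256;
                int g = ((rgb >> 8) & 0xFF) * BINS / 256;
                int b = (rgb & 0xFF) * BINS / 256;
                hist[(r * BINS + g) * BINS + b]++;
            }
        }
        double total = (double) w * h;
        for (int i = 0; i < hist.length; i++) {
            hist[i] /= total; // normalize so vectors compare across frame sizes
        }
        return hist;
    }
}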

Dense Segment Extraction

Similar-region segmentation, identification, and cluster analysis indicated that subjects could precisely perceive the similarity between every two images according to the salience of form features. It has been found that even though there are multiple and complicated combinations of form features in objects, it is possible for people to make decisions about feature matching behavior. Dense segment extraction is a "query-aware" process, i.e., video segmentation is materialized on the fly. It differs from the offline presegmentation by shot boundary detection that is typical in video retrieval. The construction ensures that a dense segment will not overlap with others and that an actually similar subsequence cannot be cut into two segments, which helps to reduce the number of dense segments. With a proper parameter setting, a minimal density of each segment can be guaranteed according to the following proposition. However, it is likely that two successive or overlapping subsequences, both somewhat similar to Q, are grouped into one longer segment. To further filter dense but nonsimilar segments, and to extract the most similar subsequence accurately from a relatively long segment, a filter-and-refine search strategy is applied (a minimal grouping sketch follows).
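The following is a minimal sketch of the grouping step, under these assumptions: the positions in S that matched some query frame are given as a sorted index array, segments split wherever the gap between consecutive matches exceeds maxGap, and segments below a minimum match density are dropped. Both parameters stand in for the density threshold of the proposition referenced above.

import java.util.ArrayList;
import java.util.List;

public final class DenseSegments {

    /** Returns segments as {start, end} frame-index pairs; 'matches' must be sorted. */
    public static List<int[]> extract(int[] matches, int maxGap, double minDensity) {
        List<int[]> segments = new ArrayList<int[]>();
        int start = 0;
        for (int i = 1; i <= matches.length; i++) {
            // close the current segment at the end, or when the gap is too large
            boolean split = (i == matches.length)
                    || (matches[i] - matches[i - 1] > maxGap);
            if (split) {
                int first = matches[start];
                int last = matches[i - 1];
                double density = (i - start) / (double) (last - first + 1);
                if (density >= minDensity) {
                    segments.add(new int[] { first, last }); // keep dense segments only
                }
                start = i;
            }
        }
        return segments;
    }
}

Because segments are closed before a new one opens, no two extracted segments can overlap, matching the non-overlap property stated above.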

Clustering Index Table

By virtue of the compact image feature representation ICC, the VSS (video similarity search) can be implemented with high efficiency. To improve efficiency and support scalable computing, a Clustering Index Table (CIT) search method is proposed in this paper. The indexes of the tables are ICC codes used for video feature clustering. Video components are arranged into the same video clustering index table according to the definition mentioned before. The VSS can then be carried out by index clustering, and redundant searches over dissimilar videos are no longer executed, so search efficiency is greatly improved. Compared with existing search techniques whose computational complexity is O(n) or O(log n), the CIT search approach, implemented on the basis of a hashing technique, can achieve higher efficiency.
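A minimal sketch of such a hash-backed index table follows. It assumes the ICC code can be reduced to an integer cluster key and that video components are identified by integer ids; how ICC codes are actually computed is not shown. Average-case O(1) bucket lookup is what lets the search skip dissimilar clusters entirely.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public final class ClusteringIndexTable {

    private final Map<Integer, List<Integer>> table =
            new HashMap<Integer, List<Integer>>();

    /** Register a video component under its ICC cluster key. */
    public void insert(int iccKey, int componentId) {
        List<Integer> bucket = table.get(iccKey);
        if (bucket == null) {
            bucket = new ArrayList<Integer>();
            table.put(iccKey, bucket);
        }
        bucket.add(componentId);
    }

    /** Fetch only same-cluster components; other clusters are never scanned. */
    public List<Integer> lookup(int iccKey) {
        List<Integer> bucket = table.get(iccKey);
        return bucket != null ? bucket : new ArrayList<Integer>();
    }
}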

Visual Content Similarity Identification

The filtering step can be viewed as a rough similarity evaluation that disregards temporal information. Observing that a segment S̃_k in the candidate set S̃′ may have multiple 1:1 mappings, and that the most similar subsequence in S may only be a portion of S̃_k, we next refine S̃′ to find the most suitable 1:1 mapping for accurate identification (or ranking), by considering visual content, temporal order, and frame alignment simultaneously.

Defining a similarity measure consistent with human perception is crucial for similarity search. First, we present the score function, which integrates three factors in judging video relevance so as to resemble human perception more accurately. The video similarity is computed based on an arbitrary 1:1 mapping M_k out of all the possible 1:1 mappings between Q and S̃_k. To locate the most visually similar subsequence, we resort to the distance between two similar frames, instead of simply judging whether they are similar or not. Accordingly, we modify the edge definition of the unweighted bipartite graph above: the weight ω_i,j of an edge denotes the detailed similarity between frames q_i and s_j in M_k. This information is already available from the preliminary stage.
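The sketch below illustrates the shape of such a score over a 1:1 mapping, not the paper's actual formula. Visual content is taken as the mean edge weight ω_i,j, and temporal order is approximated by the fraction of mapped pairs appearing in the same relative order on both sides; the equal weighting of the two factors and the omission of an explicit frame-alignment term are assumptions for illustration.

public final class MappingScore {

    /** mapping[m] = {queryIndex, dbIndex}; weights[m] = similarity of that pair. */
    public static double score(int[][] mapping, double[] weights) {
        int n = mapping.length;
        if (n == 0) {
            return 0.0;
        }

        // visual content: mean edge weight of the mapping
        double visual = 0.0;
        for (int m = 0; m < n; m++) {
            visual += weights[m];
        }
        visual /= n;

        // temporal order: fraction of edge pairs preserving order on both sides
        int ordered = 0;
        int pairs = 0;
        for (int a = 0; a < n; a++) {
            for (int b = a + 1; b < n; b++) {
                pairs++;
                boolean sameOrder =
                        (mapping[a][0] < mapping[b][0]) == (mapping[a][1] < mapping[b][1]);
                if (sameOrder) {
                    ordered++;
                }
            }
        }
        double order = (pairs == 0) ? 1.0 : ordered / (double) pairs;

        return 0.5 * visual + 0.5 * order; // illustrative equal weighting
    }
}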
