
INTRODUCTION

What is a smart camera? Different researchers and camera manufacturers offer different definitions. There does not seem to be a well-established and agreed-upon definition in either the video surveillance or machine vision industries, probably the two most active and advanced application areas for smart cameras at present.

The idea of smart cameras is to convert data to knowledge by processing information where it becomes available, and to transmit only results at a higher level of abstraction. A smart camera is smart because it performs application-specific information processing (ASIP), the goal of which is to understand and describe what is happening in the images for the purpose of better decision-making in an automated control system. A smart camera combines video sensing, video processing and communication within a single device.
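To make the ASIP idea concrete, here is a minimal sketch of converting data to knowledge: rather than streaming raw frames, the camera emits compact event records. Simple frame differencing stands in for a real analysis algorithm; the function name, thresholds and event format are invented for illustration.

```python
def detect_motion_events(frames, threshold=30, min_changed=5):
    """Compare consecutive frames (2D lists of grey values) and emit
    one small event record per frame instead of the frame itself."""
    events = []
    for i in range(1, len(frames)):
        prev, cur = frames[i - 1], frames[i]
        # Count pixels whose brightness changed by more than `threshold`.
        changed = sum(
            1
            for r in range(len(cur))
            for c in range(len(cur[0]))
            if abs(cur[r][c] - prev[r][c]) > threshold
        )
        if changed >= min_changed:
            # A few bytes of "knowledge" replace kilobytes of pixel data.
            events.append({"frame": i, "type": "motion", "pixels": changed})
    return events
```

Transmitting such events instead of full frames is what lets thousands of cameras share a network: the bandwidth cost scales with scene activity, not with resolution.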

Recent technological advances are enabling a new generation of smart cameras that represent a quantum leap in sophistication. While today's digital cameras merely capture images, smart cameras capture high-level descriptions of the scene and analyze what they see.

Matrox Iris GT smart camera

Smart cameras not only capture images; they also perform high-level image processing on board and transfer the results over a network.

Thanks to their logarithmic response, high dynamic range and high bit resolution, low-cost, low-power CMOS sensors acquire images of sufficient quality for further image processing under varying illumination conditions. Integrating these advanced image sensors with high-performance processors into an embedded system enables new applications such as on-board motion analysis and face recognition, and allows the (compressed) video data as well as the extracted video information to be transmitted over a network.

NI 1742 Smart Camera

[1] CMOS image sensors can overcome problems such as large intensity contrasts due to weather conditions or road lights, as well as blooming, an inherent weakness of existing CCD image sensors. Furthermore, noise in the video data is reduced by performing video computation close to the CMOS sensor. The smart camera thus delivers a new level of video quality and better video analysis results compared with existing solutions. Besides these qualitative arguments, from a system architecture point of view the smart camera is an important concept in future digital, heterogeneous third-generation visual surveillance systems [2]. Not only image enhancement and image compression but also video computing algorithms for scene analysis and behavior understanding are becoming increasingly important. These algorithms place high demands on real-time performance and memory. Fortunately, smart cameras can meet these demands as low-power, low-cost embedded systems with sufficient computing performance and memory capacity. Furthermore, through a fully digital interface they offer flexible video transmission and computing in scalable networks with thousands of cameras.

Block Diagram

A smart camera usually consists of several (but not necessarily all) of the following components:
- Image sensor (matrix or linear)
- Image digitization circuitry
- Image memory
- Processor (often a DSP or other suitably powerful processor)
- Program and data memory (RAM, nonvolatile flash)
- Communication interface (RS-232, Ethernet)
- I/O lines (often optoisolated)
- Lens holder or built-in lens (usually C, CS or M-mount)
- Built-in illumination device (usually LED)
- Purpose-developed real-time operating system (for example, VCRT)

Classification Of Vision Systems And Smart Cameras

Architecture of the Smart Camera

The smart camera is divided into three major parts:


1. Video Sensor - The video sensor represents the first stage in the smart camera's overall data flow. The sensor captures incoming light and transforms it into electrical signals that can be transferred to the processing unit.

2. Processing Unit - The second stage in the overall data flow is the processing unit. Because of the high-performance on-board image and video processing, the demands on computing performance are very high; a rough estimate is 10 GIPS.

3. Communication Unit - The final stage of the overall data flow is the communication unit. The processing unit transfers data to the communication unit via a generic interface. This interface eases the implementation of different network connections such as Ethernet, wireless LAN and GSM/GPRS. For the Ethernet network interface only the physical layer has to be added, because the media-access control layer is already implemented on the DSP.
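The three-stage data flow can be sketched as a software pipeline. This is a toy model rather than any real camera SDK: all class and method names are assumptions, and each stage body merely stands in for the hardware it represents.

```python
class VideoSensor:
    """Stage 1: light-to-signal conversion (stand-in: a tiny fixed frame)."""
    def capture(self):
        return [[0, 0], [0, 255]]

class ProcessingUnit:
    """Stage 2: on-board analysis (stand-in: summarize the frame)."""
    def analyze(self, frame):
        flat = [p for row in frame for p in row]
        return {"max": max(flat), "mean": sum(flat) / len(flat)}

class CommunicationUnit:
    """Stage 3: generic network interface (Ethernet, WLAN, GSM/GPRS)."""
    def transmit(self, result):
        return f"SEND {result}"

def run_pipeline():
    # Data flows strictly one way: sensor -> processing -> communication.
    frame = VideoSensor().capture()
    result = ProcessingUnit().analyze(frame)
    return CommunicationUnit().transmit(result)
```

The value of the generic interface between stages 2 and 3 is that the communication unit can be swapped (Ethernet for GSM/GPRS, say) without touching the analysis code.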

Dynamic Power Management


The basic idea behind DPM is that individual components can be switched to different power modes at runtime. Each power mode is characterized by a different level of functionality/performance of the component and a corresponding power consumption. For instance, if a specific component is not used for a certain period, it can be switched off. The commands to change a component's power mode are issued by a central Power Manager (PM) according to a Power Management Policy (PMP). The PMP is usually implemented in the operating system of the main processing component.
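A minimal sketch of such a manager, using a timeout-based policy: a component drops to a lower power mode after sitting idle for a given number of ticks. The mode names, timeout values and milliwatt figures are all invented for illustration.

```python
class Component:
    # Assumed power draw per mode, in illustrative milliwatt figures.
    MODES = {"active": 500, "idle": 120, "off": 0}

    def __init__(self, name):
        self.name = name
        self.mode = "active"
        self.idle_time = 0

class PowerManager:
    """Central PM applying a simple timeout-based PMP."""

    def __init__(self, idle_timeout=3, off_timeout=10):
        self.idle_timeout = idle_timeout
        self.off_timeout = off_timeout

    def tick(self, component, was_used):
        """Advance one time step; return the component's current draw."""
        if was_used:
            component.idle_time = 0
            component.mode = "active"
        else:
            component.idle_time += 1
            if component.idle_time >= self.off_timeout:
                component.mode = "off"
            elif component.idle_time >= self.idle_timeout:
                component.mode = "idle"
        return Component.MODES[component.mode]
```

Real PMPs must also weigh the energy and latency cost of waking a component back up, which this sketch ignores.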

Embedded Processors
There are generally four main families of embedded processors that can be used for smart cameras:

Microcontrollers - Microcontrollers are cheap but have limited processing power and are generally not suited for building demanding smart cameras.

ASICs (Application-Specific Integrated Circuits) - ASICs are powerful and power-efficient processors, but the design cost and risk are high; they are viable solutions only when volume is high and time-to-market allows for the long design cycle.

DSPs (Digital Signal Processors) - DSPs are relatively cheap and powerful at image and video processing. They typically have a high-end DSP core employing SIMD (Single Instruction, Multiple Data) and VLIW (Very Long Instruction Word) architectures.

PLDs (Programmable Logic Devices) such as FPGAs - One of the most important advantages of the FPGA is the ability to exploit the inherently parallel nature of many vision algorithms.
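The parallelism argument for FPGAs can be illustrated even in ordinary software: a per-pixel operation such as thresholding has no dependencies between pixels, so independent strips of a frame can be processed concurrently. The Python sketch below uses threads purely to demonstrate this data parallelism; it is of course not FPGA code, and the function names are invented.

```python
from concurrent.futures import ThreadPoolExecutor

def threshold_strip(strip, level=128):
    """Binarize one horizontal strip; each pixel is independent."""
    return [[255 if p > level else 0 for p in row] for row in strip]

def threshold_parallel(frame, workers=4):
    # Split the frame into row strips; since no pixel depends on
    # another, the strips can be processed by concurrent workers
    # (on an FPGA, by parallel hardware pipelines).
    size = max(1, len(frame) // workers)
    strips = [frame[i:i + size] for i in range(0, len(frame), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(threshold_strip, strips)
    return [row for part in parts for row in part]
```

On an FPGA the same independence lets every pixel (or pixel column) be handled by its own logic in a single clock cycle, which is why such devices excel at low-level vision.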

Applications
- An automated face recognition system for intelligence surveillance: a smart camera recognizing faces in a crowd
- Stand-alone smart cameras: CCTV applications
- Research in computer vision and pattern recognition
- Industrial machine vision
- ITS (Intelligent Transport Systems) and automobiles
- HCI (Human-Computer Interface)
- Medical/healthcare
- Video conferencing
- Biometrics

Industry Machine Vision


Most machine vision cameras are stand-alone, autonomous smart cameras, where communication with a PC or other central control unit is needed only for camera configuration, firmware upgrades or, in some cases, output data collection. Most algorithms implemented in these cameras follow the similar processing flow described in the figure below:
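One plausible version of such a flow — acquire, preprocess, segment, measure, then a pass/fail decision — can be sketched as follows. The stage functions here are illustrative stand-ins, not taken from any real camera firmware, and the tiny hard-coded frame simply plays the role of the sensor.

```python
def acquire():
    # Stand-in for image capture: a 3x3 frame with one bright "blob".
    return [[10, 10, 200], [10, 220, 210], [10, 10, 10]]

def preprocess(frame):
    # Simple 0/255 binarization as a stand-in for filtering/enhancement.
    return [[255 if p > 100 else 0 for p in row] for row in frame]

def segment(binary):
    # Count foreground pixels as a crude "blob size" measurement.
    return sum(p == 255 for row in binary for p in row)

def inspect(min_size=3, max_size=10):
    # Final decision stage: does the measured feature meet tolerance?
    size = segment(preprocess(acquire()))
    return {"blob_size": size, "pass": min_size <= size <= max_size}
```

In a real inspection camera each stage is far more elaborate, but the shape of the pipeline — and the fact that only the final decision needs to leave the camera — is the same.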

Intelligent Transport Systems and Automobiles


Generally speaking, the application and algorithmic requirements for ITS are quite similar to those of IVSS. These requirements differ for automobile applications, however, where high-speed imaging and processing are often needed. Increased robustness is also required for car-mounted cameras to deal with varying weather conditions, speeds, road conditions and car vibrations. As noted earlier, CMOS image sensors can help overcome large intensity contrasts due to weather conditions or road lights, as well as blooming, an inherent weakness of CCD sensors.

The VIEWS system at the University of Reading is a 3D model-based vehicle tracking system, which is capable of detecting potential accident situations and is designed for existing camera setups on road networks.

Automobile Applications
Smart camera-powered intelligent vehicles will have the comprehensive capability of monitoring the vehicle environment, including the driver's state and attention inside the vehicle as well as roads and obstacles outside it, so as to assist drivers and avoid accidents in emergencies. However, building and integrating smart cameras into vehicles is not an easy task: on the one hand, the algorithms require considerable computing power to work reliably in real time and under a wide range of lighting conditions; on the other hand, the cost must be kept low, the package size small and the power consumption low. Applications of smart cameras in intelligent vehicles include lane departure detection, cruise control, parking assistance, blind-spot warning, driver fatigue detection, occupant classification and identification, obstacle and pedestrian detection, intersection-collision warning, and overtaking vehicle detection.

Key Issues or Challenges


1) System Design - The proprietary nature of smart cameras can limit choices of hardware such as imagers, I/O, lighting, lenses and the communications format. This can mean a lack of the expandability and flexibility found in PC-based systems. In terms of design methodology, easy integration of intellectual property into the design tool and flow can help foster product differentiation. Other important system-level issues include smart camera operating systems and development tools.

2) CMOS Image Sensors - Dynamic range is still one of the key aspects where CMOS image sensors lag behind CCDs. Improvement in this area can lead to more low-cost smart cameras using CMOS image sensors for machine vision and surveillance applications.

3) Algorithm Development - Many intelligent pattern recognition algorithms work well under laboratory conditions but fail when deployed in real-world conditions (occlusion, changing lighting, unfavourable weather) and embedded system environments (scant resources, low power, low cost). Robustness and low complexity are among the key issues facing researchers developing algorithms for smart cameras in surveillance, ITS and automobile applications.

4) Performance Evaluation - This is a very significant challenge in smart surveillance systems. Evaluating the performance of video analysis systems requires significant amounts of annotated data, and annotation is typically an expensive, tedious and error-prone process.
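The role of annotated ground truth can be illustrated with a small scoring sketch: the system's detections are compared against annotations frame by frame to yield precision and recall. The annotation format below (frame index mapped to a set of object labels) is invented for illustration.

```python
def evaluate(detections, ground_truth):
    """Score detections against annotations.

    Both arguments: dicts mapping frame index -> set of object labels.
    """
    tp = fp = fn = 0
    for f in set(detections) | set(ground_truth):
        found = detections.get(f, set())
        truth = ground_truth.get(f, set())
        tp += len(found & truth)   # correctly detected objects
        fp += len(found - truth)   # false alarms
        fn += len(truth - found)   # missed objects
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return {"precision": precision, "recall": recall}
```

Every term in this computation depends on the annotations being present and correct, which is exactly why annotation cost and annotation errors dominate the evaluation problem.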

5) Standards Development - There is a need for smart camera standards. In fact, the European Machine Vision Association (EMVA) has launched an initiative (the EMVA 1288 standard) to define a unified method to measure, compute and present specification parameters for smart cameras and image sensors used in machine vision applications. More needs to be done in this respect.

6) Single-Chip Smart Cameras - Single-chip smart cameras are an attractive concept, but their manufacturing cost can be high because the process feature size used for digital processors and memory often differs from that used for image sensors, which may require relatively large pixels to collect light efficiently. It therefore probably still makes sense to design the smart camera as a multi-chip system with a separate image sensor chip. Separating the sensor and the processor also makes sense at the architectural level, given the well-understood and simple interface between the sensor and the computation engine.

Future Scope of Smart Cameras


The demand for smart cameras will steadily increase in traditional industries such as surveillance and industrial machine vision, and may also come from new industry and market segments such as healthcare, entertainment and education. Based on the discussions above, we can discern the following future directions for smart camera systems and technologies.

At the system design level, continuous effort will be made in the development of a research strategy or design methodology for smart cameras as embedded systems.

At the ASIP algorithm development level, in order to improve the performance and robustness of existing techniques, research should address issues such as occlusion handling, fusion of 2D and 3D tracking, anomaly detection and behavior prediction, combination of video surveillance with biometric personal identification, and multi-sensory data fusion.

Multi-modal, multi-sensory augmented video surveillance systems have the potential to provide improved performance and robustness. Such systems should be adaptable enough to adjust automatically and cope with changes in the environment like lighting, scene geometry or scene activity.
Work on distributed (or networked) IVSS should not be limited to computer vision laboratories; it should involve telecommunication companies and network service providers, and should take system engineering issues into account. Standards development is also needed: one area that may need standardization is the metadata format that facilitates integration and communication between different cameras, sensors and modules in a distributed, augmented video surveillance system. New communication protocols may be needed for better communication between different smart camera products.

In the machine vision arena, smart cameras will offer more and more functionality. The trend of distributing machine vision across the entire production line, at points before value is added, will continue. Neural network techniques have become a key paradigm in machine vision, used either to correctly segment an image under a wide variety of operational conditions or to classify the detected object. Stereo and 3D vision applications are also increasingly widespread. Another trend is to utilize machine vision in the non-visible spectrum.
BOA Smart Camera: Small, flexible with vision system

New product developments will introduce smart camera-based digital imaging systems into existing consumer and industry products, to increase their value and create new products.

As the price of cameras and computing elements continues to fall, it becomes increasingly feasible to consider the deployment of smart camera networks. Such networks would be composed of small, networked computers equipped with inexpensive image sensors.

Consider the proliferation of camera-equipped cell phones. Such camera networks could be used to support a wide variety of applications, including environmental modeling, 3D model construction and surveillance.

A number of research efforts at a variety of institutions are currently directed towards realizing aspects of this vision. One critical problem that must be addressed in such systems is the issue of localization. That is, in order to take full advantage of the images gathered from multiple vantage points it is helpful to know where the cameras are located with respect to each other.

In an advanced system each of the smart cameras is equipped with a co-located controllable light source which it can use to signal other smart cameras in the vicinity. By analyzing the images that it acquires over time, each smart camera is able to locate and identify other smart cameras in the scene. This arrangement makes it possible to directly determine the epipolar geometry of the camera system from image measurements and, hence, recover the relative positions and orientations of the smart camera nodes.
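The signaling scheme can be sketched as follows: each camera blinks its ID as an on/off light pattern, and an observing camera decodes that ID from the brightness it measures at the beacon's image position over successive frames. Frame synchronization and bit alignment are idealized away here, and the encoding is an assumption for illustration, not taken from any particular system.

```python
def encode_id(camera_id, bits=4):
    """Blink pattern for a camera: bright frame for 1, dark for 0 (MSB first)."""
    return [(camera_id >> (bits - 1 - i)) & 1 for i in range(bits)]

def decode_id(brightness_samples, threshold=128):
    """Recover a neighbor's ID from per-frame brightness at its beacon."""
    value = 0
    for sample in brightness_samples:
        # Each frame contributes one bit, MSB first.
        value = (value << 1) | (1 if sample > threshold else 0)
    return value
```

Once each node knows which bright spot in its image corresponds to which neighbor, the resulting point correspondences are exactly what is needed to estimate the epipolar geometry and recover relative poses.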

Conclusion
A smart camera realized as an embedded system has been presented in this paper. The smart camera integrates a digital CMOS image sensor, a processing unit featuring two high-performance DSPs, and a network interface. High-level video analysis algorithms in combination with state-of-the-art video compression transform this system from a network camera into a smart camera. There is a rapidly growing market for smart cameras, and advances in performance and integration will enable new and expanded functionality. The next steps in the development of smart cameras include (i) development of the target architecture, (ii) implementation of further image processing algorithms, and (iii) real-world evaluation.
