What is a smart camera? Different researchers and camera manufacturers offer different definitions. There does not seem to be a well-established, agreed-upon definition in either the video surveillance or machine vision industries, probably the two most active and advanced application areas for smart cameras at present.
The idea of smart cameras is to convert data to knowledge by processing information where it becomes available, and to transmit only results at a higher level of abstraction. A smart camera is smart because it performs application-specific information processing (ASIP), the goal of which is to understand and describe what is happening in the images for the purpose of better decision-making in an automated control system. A smart camera combines video sensing, video processing and communication within a single device.
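The "data to knowledge" idea above can be sketched in a few lines: instead of streaming raw frames, the camera runs its ASIP stage on-board and transmits only a compact scene description. All names here (SmartCamera, describe_frame) are illustrative, not a real camera API, and the thresholding is a deliberately crude stand-in for real analysis.

```python
def describe_frame(frame):
    """Toy ASIP stage: reduce a raw frame (a 2D list of pixel
    intensities) to a short, high-level description."""
    flat = [p for row in frame for p in row]
    bright = [p for p in flat if p > 128]          # crude foreground threshold
    ratio = len(bright) / len(flat)
    return {
        "pixels": len(flat),
        "foreground_ratio": round(ratio, 3),
        "motion_detected": ratio > 0.1,
    }

class SmartCamera:
    """Sensing + processing + communication in one device."""
    def __init__(self, transmit):
        self.transmit = transmit                    # network interface callback

    def on_new_frame(self, frame):
        # Process where the data becomes available; send only the result.
        self.transmit(describe_frame(frame))

sent = []
cam = SmartCamera(transmit=sent.append)
cam.on_new_frame([[200, 200], [10, 10]])           # half the pixels are bright
print(sent[0]["motion_detected"])                  # -> True
```

The point of the sketch is the interface: what leaves the device is a small dictionary, not the frame itself.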
Recent technological advances are enabling a new generation of smart cameras that represent a quantum leap in sophistication. While today's digital cameras merely capture images, smart cameras capture high-level descriptions of the scene and analyze what they see.
Smart cameras not only capture images; they also perform high-level image processing on-board and transfer the results over a network.
Thanks to their logarithmic response, high dynamic range and high bit resolution, low-cost, low-power CMOS sensors acquire images of sufficient quality for further image processing under varying illumination conditions. Integrating these advanced image sensors with high-performance processors into an embedded system enables new applications, such as on-board motion analysis and face recognition, and allows the (compressed) video data as well as the extracted video information to be transmitted over a network.
CMOS image sensors can overcome problems such as the large intensity contrast caused by weather conditions or road lights, as well as blooming, an inherent weakness of existing CCD image sensors [1]. Furthermore, noise in the video data is reduced by performing video computation close to the CMOS sensor. Thus, the smart camera delivers a new level of video quality and better video analysis results compared to existing solutions. Besides these qualitative arguments, from a system architecture point of view the smart camera is an important concept in future digital, heterogeneous third-generation visual surveillance systems [2]. Not only image enhancement and image compression but also video computing algorithms for scene analysis and behavior understanding are becoming increasingly important. These algorithms place high demands on real-time performance and memory. Fortunately, smart cameras can meet these demands as low-power, low-cost embedded systems with sufficient computing performance and memory capacity. Furthermore, through a fully digital interface they offer flexible video transmission and computation in scalable networks with thousands of cameras.
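The logarithmic response mentioned above is what lets such sensors handle large intensity contrast. A minimal numerical sketch, assuming an idealized sensor model (the full-scale luminance and bit depth below are made-up parameters, not figures from the text): a logarithmic mapping squeezes six decades of scene luminance into a 10-bit code, with roughly equal code spacing per decade.

```python
import math

def log_response(luminance, full_scale=1e6, bits=10):
    """Idealized logarithmic sensor: map scene luminance in the range
    1 .. full_scale to a digital code of `bits` bits."""
    code = (2 ** bits - 1) * math.log(luminance, full_scale)
    return max(0, min(2 ** bits - 1, round(code)))

# Six decades of contrast (1 to 1,000,000) still fit in 10 bits;
# each factor-of-100 step in luminance moves the code by ~341 counts.
for lum in (1, 1e2, 1e4, 1e6):
    print(lum, log_response(lum))
```

A linear sensor with the same bit depth would clip or lose the dark end of such a scene, which is the contrast problem the passage attributes to CCDs.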
Block Diagram
A smart camera usually consists of several (but not necessarily all) of the following components:
- Image sensor (matrix or linear)
- Image digitization circuitry
- Image memory
- Processor (often a DSP or other suitably powerful processor)
- Program and data memory (RAM, non-volatile FLASH)
- Communication interface (RS232, Ethernet)
- I/O lines (often opto-isolated)
- Lens holder or built-in lens (usually C, CS or M-mount)
- Built-in illumination device (usually LED)
- Purpose-built real-time operating system (for example, VCRT)
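For illustration only, the component list above can be written down as a configuration record. The field names and default values here are assumptions invented for the example, not a vendor specification.

```python
from dataclasses import dataclass, field

@dataclass
class SmartCameraSpec:
    """Hypothetical spec sheet mirroring the component list above."""
    sensor: str = "CMOS matrix"
    processor: str = "DSP"
    ram_mb: int = 64                 # data memory
    flash_mb: int = 16               # non-volatile program memory
    interfaces: list = field(default_factory=lambda: ["Ethernet", "RS232"])
    lens_mount: str = "C-mount"
    illumination: str = "LED"
    rtos: str = "VCRT"

# A concrete camera only overrides what differs from the defaults:
spec = SmartCameraSpec(processor="dual DSP", ram_mb=128)
print(spec.interfaces)   # -> ['Ethernet', 'RS232']
```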
Embedded Processors
Several families of embedded processors can be used for smart cameras, including:

- Microcontrollers: cheap, but with limited processing power, and generally not suited to building demanding smart cameras.
- DSPs (digital signal processors): often the processor of choice (the camera presented in the Conclusion, for instance, uses two high-performance DSPs).
- PLDs (Programmable Logic Devices), such as FPGAs: one of the most important advantages of the FPGA is its ability to exploit the inherently parallel nature of many vision algorithms.
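That inherent parallelism can be seen in even the simplest vision kernel. In the 3x3 mean filter below, every output pixel depends only on a small fixed neighborhood of the input, so all outputs could in principle be computed simultaneously; the sequential Python is illustrative only, standing in for what would be independent hardware pipelines on an FPGA.

```python
def mean_filter_3x3(img):
    """3x3 mean filter over a 2D list of ints; borders left at zero."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Each (y, x) output is independent of every other output pixel:
            # this is the data parallelism an FPGA can exploit directly.
            out[y][x] = sum(img[y + dy][x + dx]
                            for dy in (-1, 0, 1)
                            for dx in (-1, 0, 1)) // 9
    return out

img = [[9] * 4 for _ in range(4)]
print(mean_filter_3x3(img)[1][1])   # -> 9 (mean of a constant patch)
```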
Applications
An automated face recognition system for intelligence surveillance: a smart camera recognizing faces in the crowd.

- Stand-alone smart cameras: CCTV applications
- Research in computer vision and pattern recognition
- Industry: machine vision
- ITS (Intelligent Transport Systems)
- Automobiles
- HCI (Human-Computer Interface)
- Medical/healthcare
- Video conferencing
- Biometrics
The VIEWS system at the University of Reading is a 3D model-based vehicle tracking system, which is capable of detecting potential accident situations and is designed for existing camera setups on road networks.
Automobile Applications
Smart camera-powered intelligent vehicles will have the comprehensive capability of monitoring the vehicle environment: the driver's state and attention inside the vehicle, as well as roads and obstacles outside it, so as to assist drivers and avoid accidents in emergencies. However, building and integrating smart cameras into vehicles is not an easy task: on one hand, the algorithms require considerable computing power to work reliably in real time and under a wide range of lighting conditions; on the other hand, cost, package size and power consumption must all be kept low. Applications of smart cameras in intelligent vehicles include lane departure detection, cruise control, parking assistance, blind-spot warning, driver fatigue detection, occupant classification and identification, obstacle and pedestrian detection, intersection-collision warning and overtaking-vehicle detection.
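As a toy sketch of the first application on that list, lane departure detection can be reduced to its essence: find the two lane markings in an image row and warn when the lane center drifts too far from the image center. Everything here (the single-row input, the brightness threshold, the offset limit) is a simplifying assumption; real systems use far more robust marker detection over many rows and frames.

```python
def lane_departure(row, threshold=200, max_offset=5):
    """Return True to warn, False if centered, None if markings not found.
    `row` is one image row as a list of pixel intensities."""
    marks = [x for x, p in enumerate(row) if p >= threshold]
    if len(marks) < 2:
        return None                        # lane markings not visible
    lane_center = (marks[0] + marks[-1]) / 2
    offset = lane_center - (len(row) - 1) / 2   # drift from image center
    return abs(offset) > max_offset

# Vehicle centered between markings at x=10 and x=30 in a 41-pixel row:
centered = [0] * 41
centered[10] = centered[30] = 255
print(lane_departure(centered))   # -> False (no warning)
```

The real engineering challenge, as the passage notes, is not this logic but making it robust under varying lighting while staying within the cost and power budget of an in-vehicle embedded system.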
3) Algorithm Development - Many intelligent pattern recognition algorithms work well in laboratory conditions but fail when deployed in real-world conditions (occlusion, changing lighting, unfavourable weather) and embedded system environments (scant resources, low power, low cost). Robustness and low complexity are among the key issues facing researchers developing algorithms for smart cameras in surveillance, ITS and automobile applications.
4) Performance Evaluation - This is a very significant challenge in smart surveillance systems. Evaluating the performance of video analysis systems requires large amounts of annotated data, and annotation is typically an expensive, tedious process that can itself contain significant errors. Together, these issues make rigorous performance evaluation difficult.
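A minimal sketch of what evaluation against annotated data involves, assuming a deliberately simple per-frame annotation format (real benchmarks use richer annotations such as bounding boxes and track identities): detections are compared to ground truth frame by frame, and precision and recall are reported.

```python
def precision_recall(ground_truth, detections):
    """Both arguments map frame_id -> set of object ids present."""
    tp = fp = fn = 0
    for frame, truth in ground_truth.items():
        found = detections.get(frame, set())
        tp += len(truth & found)     # correctly detected objects
        fp += len(found - truth)     # false alarms
        fn += len(truth - found)     # missed objects
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

truth = {0: {"car1"}, 1: {"car1", "ped1"}}
dets  = {0: {"car1", "ghost"}, 1: {"car1"}}   # one false alarm, one miss
print(precision_recall(truth, dets))          # both 2/3 here
```

Even this trivial scorer presupposes the expensive part the passage highlights: someone must have produced `truth` by hand for every frame.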
5) Standards Development - There is a need for the development of smart camera standards. In fact, the European Machine Vision Association (EMVA) has recently launched an initiative (the EMVA 1288 Standard) to define a unified method to measure, compute and present specification parameters for smart cameras and image sensors used in machine vision applications. More needs to be done in this respect.

6) Single-Chip Smart Cameras - Single-chip smart cameras are an attractive concept, but their manufacturing cost can be high because the feature size used for digital processors and memory often differs from that used for image sensors, which may require relatively large pixels to collect light efficiently. Therefore, it probably still makes sense to design a smart camera as a multi-chip system with a separate image sensor chip. Separating the sensor and the processor also makes sense at the architectural level, given the well-understood, simple interface between the sensor and the computation engine.
At the system design level, continuous effort will be made in the development of a research strategy and design methodology for smart cameras as embedded systems.

At the ASIP algorithm development level, in order to improve the performance and robustness of existing techniques, research should address issues such as occlusion handling, fusion of 2D and 3D tracking, anomaly detection and behavior prediction, the combination of video surveillance with biometric personal identification, and multi-sensory data fusion.
Multi-modal, multi-sensory augmented video surveillance systems have the potential to provide improved performance and robustness. Such systems should be adaptable enough to adjust automatically and cope with changes in the environment like lighting, scene geometry or scene activity.
Work on distributed (or networked) IVSS should not be confined to computer vision laboratories; it should involve telecommunication companies and network service providers, and should take system engineering issues into account. Standards development is another priority. One area that may need standardization is the metadata format that facilitates integration and communication between the different cameras, sensors and modules in a distributed, augmented video surveillance system. New communication protocols may also be needed for better communication between different smart camera products.
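A standardized metadata format of the kind described might define messages like the sketch below. The field names and the event vocabulary are illustrative assumptions, not the schema of any existing standard; the point is only that cameras exchange structured descriptions rather than raw video.

```python
import json

def make_event(camera_id, timestamp, event_type, bbox):
    """Serialize one hypothetical inter-camera event message."""
    return json.dumps({
        "camera": camera_id,
        "time": timestamp,
        "event": event_type,   # e.g. "object_entered"
        "bbox": bbox,          # object location in image coordinates
    }, sort_keys=True)

# A camera announces a detection; any module speaking the same
# (hypothetical) schema can consume it:
msg = make_event("cam-07", 1234.5, "object_entered", [12, 34, 56, 78])
decoded = json.loads(msg)
print(decoded["camera"])   # -> cam-07
```

Agreeing on such a schema across vendors is exactly the integration problem the standardization effort would address.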
In the machine vision arena, smart cameras will offer more and more functionality. The trend of distributing machine vision across the entire production line, at points before value is added, will continue. Neural network techniques seem to have become a key paradigm in machine vision, used either to segment an image correctly under a wide variety of operating conditions or to classify the detected object. Stereo and 3D vision applications are also increasingly widespread. Another trend is the use of machine vision in the non-visible spectrum.
BOA Smart Camera: a small, flexible vision system.
New product developments will introduce smart camera-based digital imaging systems into existing consumer and industrial products, increasing their value and creating new products.
As the prices of cameras and computing elements continue to fall, it becomes increasingly feasible to consider deploying smart camera networks. Such networks would be composed of small, networked computers equipped with inexpensive image sensors.
Consider the proliferation of camera-equipped cell phones. Such camera networks could be used to support a wide variety of applications, including environmental modeling, 3D model construction and surveillance.
A number of research efforts at a variety of institutions are currently directed towards realizing aspects of this vision. One critical problem that must be addressed in such systems is the issue of localization. That is, in order to take full advantage of the images gathered from multiple vantage points it is helpful to know where the cameras are located with respect to each other.
In an advanced system each of the smart cameras is equipped with a co-located controllable light source which it can use to signal other smart cameras in the vicinity. By analyzing the images that it acquires over time, each smart camera is able to locate and identify other smart cameras in the scene. This arrangement makes it possible to directly determine the epipolar geometry of the camera system from image measurements and, hence, recover the relative positions and orientations of the smart camera nodes.
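The epipolar geometry that such a system recovers rests on one algebraic fact: for two cameras related by rotation R and translation t, the essential matrix E = [t]x R satisfies x2ᵀ E x1 = 0 for every pair of corresponding normalized image points. The sketch below verifies this with plain 3x3 list algebra; the camera poses and the 3D point are made up for the example.

```python
def cross_matrix(t):
    """Skew-symmetric matrix [t]x such that [t]x v = t x v."""
    tx, ty, tz = t
    return [[0, -tz, ty], [tz, 0, -tx], [-ty, tx, 0]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(a, v):
    return [sum(a[i][k] * v[k] for k in range(3)) for i in range(3)]

R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]     # second camera: no rotation,
t = [1.0, 0.0, 0.0]                       # translated 1 unit along x
E = matmul(cross_matrix(t), R)            # essential matrix E = [t]x R

X = [0.5, 0.2, 4.0]                       # a 3D point in camera-1 coordinates
x1 = [X[0] / X[2], X[1] / X[2], 1.0]      # its projection in camera 1
Xc2 = [X[i] - t[i] for i in range(3)]     # same point in camera-2 coordinates
x2 = [Xc2[0] / Xc2[2], Xc2[1] / Xc2[2], 1.0]

# Epipolar constraint: x2 . (E x1) vanishes for a true correspondence.
residual = sum(x2[i] * v for i, v in enumerate(matvec(E, x1)))
print(abs(residual) < 1e-12)   # -> True
```

In the localization scenario above the process runs in reverse: from many such point correspondences (here, the signaling lights of the other cameras) E is estimated, and R and t, i.e. the relative pose of the nodes, are recovered from it.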
Conclusion
A smart camera realized as an embedded system has been presented in this paper. Our smart camera integrates a digital CMOS image sensor, a processing unit featuring two high-performance DSPs, and a network interface. High-level video analysis algorithms, in combination with state-of-the-art video compression, transform this system from a network camera into a smart camera. There is a rapidly growing market for smart cameras, and advances in performance and integration will enable new and richer functionality to be implemented in them. The next steps in the development of smart cameras include (i) development of the target architecture, (ii) implementation of further image processing algorithms, and (iii) real-world evaluation.