
VISION FOR INDUSTRIAL AND SERVICE ROBOTS: AN INTRODUCTION

Revolution

Agricultural Revolution
Industrial Revolution
Electrification
Transportation
Communication
Computers
Industrial Robots
Service Robots

R. SENTHILNATHAN
RESEARCH SCHOLAR
DEPARTMENT OF PRODUCTION TECHNOLOGY
MIT CAMPUS, ANNA UNIVERSITY, CHENNAI

Analogies

Mental to Physical Leverage
Mainframes to Industrial Robots
PCs to Service Robots

Industrial Robot

Profound, Widespread and Global

From ISO 8373: An automatically controlled, reprogrammable, multipurpose manipulator, programmable in three or more axes, which may be either fixed in place or mobile, for use in industrial automation applications.

The Price and Volume Curves

Software and Applications
Third Party Applications
Mobile, Personal and Household


Service Robot

Perception to Physical Access

Provisional definition from the Working Group

A service robot is a robot which operates semi- or fully autonomously to perform services useful to the well-being of humans and equipment, excluding manufacturing operations.

How do Humans do it?

Locate with eyes
Calculate target with brain
Guide with arm and fingers

How do Robots do it?

Locate with camera
Calculate target with software
Guide with robot and grippers

Role of Perception in Robot Manipulation

Where am I relative to the world?
  sensors: vision, stereo, range sensors, acoustics
  problems: scene modeling/classification/recognition
  integration: localization/mapping algorithms (e.g., SLAM)

How can I safely interact with the environment?
  sensors: vision, range, haptics (force + tactile)
  problems: structure/range estimation, modeling, tracking, materials, size, weight, inference
  integration: navigation, manipulation, control, learning

What is around me?
  sensors: vision, stereo, range sensors, acoustics, sounds, smell
  problems: object recognition, qualitative modeling
  integration: collision avoidance/navigation, learning

How can I solve new problems (generalization)?
  sensors: vision, range, haptics
  problems: categorization by function/shape/context
  integration: inference, navigation, manipulation, control, learning


Vision for Robots: Industry vs. Service Sector

Vision for Industrial Robots: GIGI
  Gauging
  Inspection
  Guidance
  Identification

Strength of the Industry Sector
  Many years of experience
  Motors, speed, precision
  Vision, force, torque, encoders

Vision for Service Robots
  Preprocess Environment
  Sensor Fusion

Vision Frontiers in Service Robots
  Uncontrolled Environment and Safety
  Sensor Fusion and Reliability

Building Blocks in Design of Vision Guided Robots

Scene
Lighting
  Technique (Frontlight, Backlight)
  Source (Fluorescent tubes, Halogen and xenon lamps, LED, Laser)
Optics
Vision Cameras
  Type of sensor (CCD, CMOS, etc.)
  Spec. of camera (resolution, frame rate, etc.)
  Type of camera (Line Scan, Area Scan, Structured Light, Time of Flight)
  Interface (Standalone, Computer Interface)
Software
Robot Types
Camera Mounting (Eye in Hand, Eye to Hand)

Parts
  Discrete parts or endless material (e.g., paper)
  Minimum and maximum dimensions
  Changes in shape
  Description of the features that have to be extracted
  Changes of these features concerning error parts and common product variation
  Surface finish
  Color
  Corrosion, oil films, or adhesives
  Changes due to part handling


Contd.

Part Presentation
  indexed positioning
  continuous movement

If there is more than one part in view, the following topics are important:
  number of parts in view
  overlapping parts
  touching parts

Industrial Robots - Applications

Robot Configuration (Industrial)
  1/2/3-Axis Cartesian
  4-Axis SCARA
  6-Axis Articulated
  Gantry Type

Camera Mounting
  Eye to Hand
  Eye in Hand


Applications

2D
  Indexed Conveyor
  Flexible Feeding
  Autoracking
  Packaging

2.5D
  Stacked Objects
  Geometry for depth perception

3D
  Autoracking
  Discrete Bag Handling
  Palletizing
  Bin Picking

Advantages of Vision for Industrial Robots
  Labor savings: often alone justifies the investment
  Throughput gains in production
  Quality improvements
  Safety and medical cost savings
  Flexible change to multiple products
  Floor footprint reduction
  Reutilize conveyors, racks, bins

Building Blocks in Service Robots

  3D Vision
  3D Vision with Real Time Motion
  3D Vision with GPS navigation
  3D Vision with SLAM (Simultaneous Localization and Mapping)
    Sensing the Environment
    Modeling the Environment
  3D Vision with Sonar Navigation
  3D Visualization with Haptic Controls

Classes of Service Robots

Aerospace
  Spacecraft, Satellites, Aircraft

Land
  Defense and Security, Farming, Wildlife, Food, Transportation, Outdoor Logistics, Office and Warehouse, Health (Care, Rehabilitation, Surgical), Entertainment

Water
  Defense and Security, Research and Exploration, Preventive Maintenance, Rescue and Recovery


Software

2D Object Recognition
  Edge Detection
  Boundary Analysis
  Geometric Pattern Matching
  Mono camera, if geometry is consistent

3D Object Recognition
  Stereo matching: redundant reliability
  Laser, range or time-of-flight methods
  Projected points/lines of light
  3D volume scans

Additional Techniques
  Scene-specific heuristics

Examples: mobile robots for defence, rehabilitation, surgical innovation, human-like robots, swimming, flying

IMAGING FUNDAMENTALS

Any digital image, irrespective of its type, is a 2D array of numbers.


Types
  Intensity images
  Range images

Intensity Images

Optical parameters
  Lens type
  Focal length
  FOV

Photometric parameters
  Intensity
  Direction of illumination
  Reflectance properties
  Sensor structure

Geometric parameters
  Types of projection
  Pose of the camera

Basic Optics

The Thin Lens Model: Fundamental Equation

1/z + 1/z' = 1/f, where z is the distance from the object to the lens, z' is the distance from the lens to the image plane, and f is the focal length.
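A minimal numeric sketch of this relation; the focal length and object distance below are assumed example values:

```python
# Solving the thin-lens equation 1/z + 1/z' = 1/f for the image distance z'.
def image_distance(f_mm: float, object_mm: float) -> float:
    """Image distance z' (mm) for focal length f and object distance z."""
    if object_mm <= f_mm:
        raise ValueError("object at or inside the focal length forms no real image")
    return 1.0 / (1.0 / f_mm - 1.0 / object_mm)

# A 25 mm lens imaging a part 500 mm away:
print(f"{image_distance(25.0, 500.0):.2f} mm")  # ~26.32 mm behind the lens
```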
Perspective Camera Model


Pin Hole Model


Mapping Point to Camera


Camera Parameters and Calibration

World Coordinate
  Extrinsic parameters
  Intrinsic parameters


Perspective Projection

A point p = [x, y] in the image plane is given by

  x = f (X / Z)
  y = f (Y / Z)

where p is the image of the point P = [X, Y, Z] in world space.
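A minimal sketch of this projection (the points and focal length below are assumed example values):

```python
# Perspective projection of camera-frame points P = [X, Y, Z] onto the
# image plane with x = f*X/Z, y = f*Y/Z.
import numpy as np

def project(points_xyz: np.ndarray, f: float) -> np.ndarray:
    """Project Nx3 camera-frame points to Nx2 image-plane coordinates."""
    X, Y, Z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
    return np.stack([f * X / Z, f * Y / Z], axis=1)

# Two points at the same (X, Y) but different depths land at different
# image positions -- the source of perspective foreshortening:
P = np.array([[100.0, 50.0, 1000.0],
              [100.0, 50.0, 2000.0]])
print(project(P, f=25.0))  # [[2.5, 1.25], [1.25, 0.625]]
```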

Range Images

Reconstructing a 3D shape from a single intensity image is DIFFICULT.

Range images are also called depth images, depth maps, xyz maps, surface profiles and 2.5D images. Each pixel of a range image expresses the distance between a known reference frame and a visible point in the scene.

Forms of Range Images
  Cloud of points (xyz form)
  Spatial form

Range Images: Display Types

Agenda
  Physics of Light
  Optics
  Camera Sensors
  Camera Interface
  Camera Calibration
  Software
  Applications and Case Study


PHYSICS OF LIGHT

Why Physics of Light?

The laws of physics govern the properties of vision systems.
Understanding the physics will allow you to predict the behavior.
You will understand the limitations of the performance.

Vision Starts with Light

Light has a dual nature, obeying the laws of physics both as a transverse wave (electromagnetic radiation) and as a particle of energy (photon).

Properties of Light

Electromagnetic radiation: used to explain the propagation of light through various substances.

Particle: used to explain the interaction of light and matter that results in a change in energy, such as in a video sensor.


Light as Electromagnetic Radiation

Light is a transverse wave: points oscillate in the same plane, on an axis perpendicular to the direction of motion. The electrical wave oscillates perpendicular to the magnetic wave.

Electromagnetic Wave Characteristics

Frequency (f) is the number of oscillations per second. Wavelength (λ) is the distance between two points at the same position on the wave (nm).

f = c / λ, where c is the speed of light.
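As a quick worked example of f = c / λ (the wavelength is an assumed example):

```python
# Frequency of light from its wavelength via f = c / wavelength.
C = 299_792_458.0  # speed of light in m/s

def frequency_hz(wavelength_nm: float) -> float:
    return C / (wavelength_nm * 1e-9)

print(f"{frequency_hz(550):.3e} Hz")  # ~5.451e+14 Hz for 550 nm green light
```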

Energy vs. Intensity

Energy is determined by the frequency of oscillation: higher frequency means shorter wavelength and higher energy. Intensity is determined by the amount of radiation.

Electromagnetic Spectrum

Relation between the Color of Light and its Wavelength

Visible light contains a continuum of frequencies. We perceive color as a result of the predominance of certain wavelengths of light. We are concerned with a narrow region of the spectrum: ultraviolet, visible and infrared. The eye responds to visible light with varying efficiency across the visible spectrum; cameras have a very different response. While the eye can see only in the visible spectrum, the energy above and below visible light is also important to machine vision.

What Happens to Light when it Hits an Object?

In Vision We are Concerned with Reflected Light

Reflected light is controlled by engineering the lighting. The reflected light (and therefore the digital image) is impacted by:
  Geometry of everything
  Color of the light
  Color of the part

How do Objects of Different Color Respond to Light?

How and Why Do Objects Have Color?

Red light gets reflected from red objects. Your eyes see the reflected light; the camera also sees the reflected light. All other colors get absorbed by the material, and this radiation is turned into heat.

Additive Color

Demonstrates what happens when colored lights are mixed together. The additive primaries are red, green and blue, which together make white. RGB is used for color TV and cameras.

Maxwell's Triangle: A Demonstration of Additive Color Mixing

Subtractive Color

Used to describe why objects appear the color they do. Pigments added to paint absorb all colors of that wavelength. CMYK is used for printing ink (K is for carbon black, a less expensive pigment than the other colors).

CIE Chromaticity Diagram

White Light is Actually Very Colorful

The rainbow exiting a prism, or seen in the sky, is the inverse of the additive color wheel. Both demonstrate that white light is actually a very complex function which needs precise definition.

OPTICS

Optical Filter

An optical device which selectively transmits light of certain wavelengths and absorbs or reflects all other wavelengths.

Using Filters to Highlight Objects of Different Colors

Red light reflects off the red background but is absorbed by the blue circle. Blue light reflects off the blue circle but is absorbed by the red background.

Placement of Filters: Incident or Reflected Light

An example: the resulting images would be the same. There is no available red light to be reflected, so red appears dark; light is reflected from blue, so it appears light.

Spectral Response

How efficiently light is emitted or received as the wavelength (or color) of the light changes. Filters can be described by a spectral response plot.

Spectral Reflectivity for Al and Ag

Spectral Response of Filters
  Ideal Filter
  Real Filter

Spectral Response of CCD

Interaction of Light with Surfaces
  Reflection
  Refraction

Why Reflection is Important

The majority of vision systems record reflected light. A well designed lighting system provides high contrast between the features of interest and the background (noise): regions of high reflectivity versus regions of minimal reflected light. The spectral properties of light sources, combined with the spectral properties of the surface, can be used to provide high contrast. Geometrical considerations are important for understanding reflected light.

Interaction of Light with Transparent Surfaces

Why Refraction is Important

Refraction is the basic principle behind many optical elements:
  Lenses
  Filters
  Mirrors and Prisms

Optical elements are not perfect:
  They do not transmit 100% of the light
  Chromatic aberrations

Surface Finish

Complex Geometries


How is Light Measured?

Lumen and lux are photometric parameters that represent the amount of light that falls upon a surface per second.

  Bright sunlight:  100,000 lux
  Cloudy day:       10,000 lux
  Full moon night:  0.05 lux
  Overcast night:   0.00005 lux

The human eye is sensitive to this full range (10 orders of magnitude!), but cameras are only sensitive to 3 orders of magnitude.

Lens

The lens uses refraction to bend light as it passes through, generating an image at the other side.

Lens and the Camera Sensor

Specifications Used to Select a Lens for a Machine Vision Application
  Focal length
  Angular field of view or magnification
  Working distance, or minimum field of view at focus
  Depth of focus
  Aperture
  Resolution
  Camera sensor size
  Camera mounting configuration

Focal Length

The focal length (f) is the distance between the optical centre and the image plane when the lens is focused at infinity.

Field of View

The area imaged, or the FOV, is determined by the intersection of the stand-off distance and the angle of viewing.
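A minimal sketch of a first-order FOV estimate, assuming the pinhole similar-triangles relation FOV ≈ sensor size × stand-off distance / focal length (all numbers are assumed examples):

```python
def field_of_view(sensor_mm: float, stand_off_mm: float, focal_mm: float) -> float:
    """Approximate linear FOV (mm) at the stand-off distance."""
    return sensor_mm * stand_off_mm / focal_mm

# A 1/2-inch sensor (6.4 mm wide) with a 25 mm lens, part 500 mm away:
print(f"{field_of_view(6.4, 500.0, 25.0):.1f} mm horizontal FOV")  # 128.0 mm
```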

Moving Closer: Focal Length and Stand-off Distance

A shorter focal length lens can image the same field of view as a longer focal length lens by decreasing the stand-off distance. A shorter focal length lens will have more parallax distortion (fish-eye effect). Stand-off distance has a larger effect on magnification for short focal length (wide angle) lenses.

What to do if you need to change the image size?

To increase magnification (smaller FOV):
  Use a lens with a longer focal length
  Move the camera closer to the part

To decrease magnification (larger FOV):
  Use a shorter focal length lens
  Move the camera further from the part

(Be cautious about distortion and the ability to focus.)

Focus

Lenses are designed for specific imaging characteristics. Using the lens outside of the design region impacts image quality. For example, the stand-off distance for focusing can be reduced using spacer rings.

Depth of focus is also dependent upon the aperture or f/stop setting:
  Wide open aperture: small depth of focus
  Small aperture: large depth of focus


Extension Rings

Extension rings are used to alter the focusing distance of a lens. The rings increase the image distance, and allow the lens to focus at shorter distances.

Aperture and F/stop (F#)

F/stop = Focal Length / Aperture Diameter

The aperture is the clear opening of the lens.
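A minimal sketch of the f-number relation, rearranged to give the opening a lens presents at a given f/stop (lens and stops chosen to match the example below):

```python
def aperture_diameter_mm(focal_mm: float, f_stop: float) -> float:
    """Clear opening (mm) from F = f / D, rearranged as D = f / F."""
    return focal_mm / f_stop

for stop in (4.0, 22.0):
    d = aperture_diameter_mm(100.0, stop)
    print(f"100 mm lens at F/{stop:g}: {d:.1f} mm opening")
# F/4 -> 25.0 mm, F/22 -> ~4.5 mm: the smaller opening gives the larger depth of focus
```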

An Example

Captured with a 100-mm lens at F/4; captured with a 28-mm lens at F/4.
Captured with a 100-mm lens at F/22; captured with a 28-mm lens at F/22.

LIGHTING


Lighting Concerns
  Stability of the light source
  Flicker rate
  Change in spectral properties
  Need to control diffusion of light (bright spots are bad)
  Ambient lighting needs to be blocked off
  Ambient temperature has a very large effect on lighting

Illumination Affects the Color of the Material

Sources have different spectral properties, which cause objects to look different under different sources. The result depends on the lighting and the camera, and the relations are non-linear.

One More Example...

Characteristics of Light Sources

Thermal (Incandescent)
  1,000 lumens for 75 W
  5% efficiency (12-15 lumens/Watt)
  1,000 hours
  Rs 50/klumen

Gas Discharge (Fluorescent)
  10,000 lumens
  25% efficiency (50 lumens/Watt)
  10,000 hours (output degrades, then fails)
  Rs 25/klumen

LEDs (Solid-state lighting)
  30-35 lumens/Watt
  100,000 hours (output degrades over time, no hard failure)
  Rs 2,500/klumen

50 Hz Noise: Variation in Light Output

The ambient light source is most likely AC.
  Use a high frequency ballast for fluorescent lights (10 kHz)
  Use DC sources for LEDs
  Shroud your cell from ambient light if it is bright

Effect of Operating Temperature

Use Geometry to Meet the Objectives for Lighting Design
  Highlight features of interest
  Reduce extraneous information
  Use natural features of the part for contrast (shadows, specular reflections)
  Design a system that is compatible with process constraints

Angle of Lighting Depends on the Part Features

Optical Filtering

Lighting Techniques

Back Light
  Diffuse
  Collimated

Front Light
  Diffused
  Directed
  Structured

Back Light

Placing a light behind the part, such that the part is between the light and the camera, provides a silhouette of the part.

Back Lighting Provides the Highest Contrast. But...

It is not always practical to implement: parts on a conveyor or in a fixture are often difficult to back-light. And it provides information about the part's silhouette only; sometimes the surface features are the ones we're interested in.

Types

Front Light

Placing the light in front of the part, on the same side as the camera, provides an image with surface features and shading.

Dark Field Illumination

Dark field illumination is used to subdue the background and highlight pin-stamped characters. The light is positioned at an oblique angle to the part, with the angle of incidence set up such that the angle of reflection is away from the camera lens. Any perturbations can then reflect light into the camera lens.

Lighting Components

Bright Field Illumination

Dark Field Illumination

CAMERA SENSORS


Digital Image

A digital image is a numerical representation of a real physical object. The objective is to obtain an accurate spatial (geometric) and spectral (light) representation with sufficient detail (resolution). Image sensors generate images by measuring and recording the light that strikes the sensor surface.

Types
  Intensity images
  Range images

Any digital image, irrespective of its type, is a 2D array of numbers.

Recording the Field of View
  Area Scan
  Line Scan

Intensity Images: CV Terminology

Optical parameters
  Lens type
  Focal length
  FOV

Photometric parameters
  Intensity
  Direction of illumination
  Reflectance properties
  Sensor structure

Geometric parameters
  Types of projection
  Pose of the camera

Getting a Good Image

What is a good image?
  Features of interest are well defined: high contrast with enough detail
  Images are repeatable
  Features in the image exist in the physical world: no noise or artifacts
  Changes in the environment should have minimal impact on the image

How to achieve this?
  Good lighting and optics
  Understanding the requirements
  Choosing the right camera for the application

Properties of Sensors

Some materials generate an electrical charge proportional to the number of photons striking them. These materials are used for image sensors.

Sensor Types Based on the Sensing Element Used

Vacuum Tube
  Vidicon
  Plumbicon
  Photo Multiplier Tube (PMT)

Solid State (silicon)
  Silicon Photo Diode
  Position Sensitive Detector (PSD)

Solid State Camera Sensors
  CCD
  CMOS

Sensor Analogy to Film

In some ways, a sensor in a video camera is a lot like photographic film. An image is focused on the sensor for a preset exposure time. The light pattern is captured and transformed into a new medium. There is an integral relationship between the amount of light measured and the exposure time.

Comparison of a Sensor to Film

Film has a continuous surface, down to the grain of the film. Video sensors have a discrete imaging surface.

Sizes of solid state sensors:
  2/3 inch: 8.8 x 6.6 mm
  1/2 inch: 6.4 x 4.8 mm
  1/3 inch: 4.8 x 3.6 mm
  1/4 inch: 3.2 x 2.4 mm

How Sensors Work

We can think of the camera as imposing a grid over the object being imaged, and sampling the light. Each individual square is called a photo site, and is similar to a light meter. The camera sensor is made up of an array of these photo sites. The individual photo sites in a video sensor are called picture elements - PIXELs.

How Sensors Measure Light?

Each photo site can be modeled as a bucket that collects the charge generated by photons. As photons strike the sensor, charge is developed and the bucket begins to fill. How full the bucket gets is determined by:
  How much light (intensity)
  How long you collect charge (exposure time or shutter speed)
  How efficiently the photons get converted to charge (spectral response)

The amount of light in each photo site is sampled and converted into a number. This number, or grayscale value, is an indicator of brightness.
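A minimal sketch of this bucket model; the quantum efficiency and full-well capacity below are illustrative assumptions, not sensor data from the slides:

```python
def gray_value(photon_rate, exposure_s, quantum_eff=0.5, full_well=20_000):
    """Accumulated charge = intensity * exposure * QE, clipped at the
    full-well capacity and quantized to an 8-bit gray value."""
    electrons = min(photon_rate * exposure_s * quantum_eff, full_well)
    return round(255 * electrons / full_well)

print(gray_value(1e6, 0.010))  # mid-gray: the bucket is partly full
print(gray_value(1e6, 0.100))  # 255: the bucket has saturated
```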


Exposure Time Analogy

How full the bucket gets is dependent upon:
  How fast the faucet is running - light intensity
  How long you keep the bucket under the running water - exposure time

(In the analogy, photons are the water and electrons are the collected charge.)

Images Taken with Different Exposure Times

As you increase the exposure time, you allow more time for photons to be converted into electrons in the sensor; hence more charge accumulates, giving a brighter image.

How Many Pixels Are Required to Find an Object?

2 x 2 grid: 4 photo sites (4 pixels). When the blue object fills more than 50% of a photo site, that site is turned black; otherwise the site is considered white.

Double the Resolution...

4 x 4 grid: 16 photo sites (16 pixels), with the same 50% rule.

Double It Again...

8 x 8 grid: 64 photo sites (64 pixels), with the same 50% rule.

Attributes of Sampling

You might not even detect the object if the sampling resolution is too low. If you sample at twice the resolution, the total number of sample sites increases by a factor of 4.

Other Attributes

The newly digitized information contains much less information:
  A three-dimensional scene is reduced to a 2D representation
  No color information
  Size and location are now estimates whose precision and accuracy depend on the sampling resolution

A Close-up Look at Pixels (increasing zoom level)

Sensor Array Configuration

The sensor consists of an array of individual photo cells. Typical array sizes in pixels are 640 x 480, 768 x 480, 1280 x 760, 1600 x 1200, and larger. The array size is called the PIXEL RESOLUTION. For reference, human vision is >100 million pixels.

How Big is a Pixel? - Resolution

Individual photo sites are usually between 5 and 10 microns; their size impacts sensor noise and dynamic range.

When it comes to resolution, the following distinction is necessary:
  Number of pixels in the image - Camera Sensor Resolution
  Number of pixels covering a feature - Spatial Resolution (impacts the robustness of the vision algorithm)
  Smallest detail captured in the image - Measurement Accuracy

Spatial Resolution

Spatial Resolution = FOV / No. of Pixels

How many pixels should cover the features of interest? It depends on the application but, in general, more is better. The trade-off is that you image less of the scene. The field of view should be large enough to accommodate variations in position; this might require more than one camera.
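A minimal sketch of this formula (FOV, pixel count and feature size are assumed example numbers):

```python
def spatial_resolution(fov_mm: float, pixels: int) -> float:
    """Millimetres of scene covered by one pixel."""
    return fov_mm / pixels

mm_per_px = spatial_resolution(128.0, 640)   # 0.2 mm per pixel
feature_mm = 5.0
print(f"{mm_per_px} mm/pixel; a {feature_mm} mm feature spans "
      f"{feature_mm / mm_per_px:.0f} pixels")  # 25 pixels
```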


What if your camera does not have enough pixels?

Use sub-pixels: interpolate between pixel boundaries for sizing or identifying location. Sub-pixels are only applicable to measurement, not detection.

What If Pixel Arrays Are Not Big Enough and Sub-pixels Won't Work?

Use line scan cameras: a digital camera with pixels arranged in a single line.
  Can generate extremely large contiguous images not possible with area scan cameras
  1K, 2K, 4K, 8K, 10K are some available sizes
  Cost of a line scan sensor is low relative to large format array cameras (2000 x 2000)
  Motion of the camera or part is required for the 2nd axis
  Similar to scanners, copiers and fax machines
  Can obtain images of a continuously moving line (web inspection)

Line Scan Image Example

Field of view is 30 x 200 inches; at 100 dpi this is a 3,000 x 20,000 pixel image, or 60 Mbytes of image data.

Some More Camera Sensor Parameters
  Saturation
  Blooming
  Dynamic Range
  Grayscale Resolution
  Dark Current Noise
  Fill Factor

Saturation

At certain light levels and exposure times, the bucket (photo site) gets filled with charge and can hold no more: the photo cell is now saturated. Any additional charge generated by the sensor has to go somewhere. Where does it go?

Blooming

When light saturates a pixel, the charge spills over into adjacent pixels; in a CCD, spillover also occurs along the pixel columns. Blooming causes loss of image data.

Prevent blooming by:
  Avoiding saturation
  Using cameras with anti-blooming circuitry

Dynamic Range

Ratio of the amount of light it takes to saturate the sensor to the least amount of light detectable above the background noise. A good dynamic range allows very bright and very dim areas to be viewed simultaneously.

Examples: Low Dynamic Range vs. High Dynamic Range
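A minimal sketch of the ratio defined above, expressed in dB and in usable bits; the full-well and noise figures are illustrative assumptions:

```python
import math

def dynamic_range(full_well_e: float, noise_e: float):
    """Return (dB, bits) for the saturation-to-noise ratio."""
    ratio = full_well_e / noise_e
    return 20 * math.log10(ratio), math.log2(ratio)

db, bits = dynamic_range(20_000, 10)     # a 2000:1 sensor
print(f"{db:.0f} dB, ~{bits:.0f} bits")  # 66 dB, ~11 bits
```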


Grayscale Resolution

The number of bits used to represent the amount of light in a pixel. Digitizing to 8 bits gives 2^8 = 256 gray shades.

Dark Current Noise

If no photons strike the sensor during the exposure time, no charge is created and the bucket should remain empty. However, stray charge gets generated in the silicon from thermal energy, causing low-level noise. This charge is called dark current; the result is that black is not 0.0 volts. Dark current noise increases with temperature, doubling with every 6 degree rise above room temperature.

Photosensitive Area of the Sensor

Photons which fall outside the photosensitive area do not get converted into electric charge and are not detected by the sensor. This impacts the sensitivity to light and the ability to accurately measure between pixels for sub-pixel tolerance.

Fill Factor

Percentage of the pixel area sensitive to light. The circuitry required to read out the voltage obscures the silicon beneath the traces; coverage can be as low as 30%. The fill factor is shown in some camera specifications. It impacts the quality of the image and the sensitivity to light.


Fill Factor Considerations: CCD vs. CMOS

CCD Sensor
  Reads out a single row of pixels at a time, after the charge is moved down the sensor in lock step by rows

CMOS Active Pixel Sensor
  Has amplifier circuitry on each and every pixel in the array
  Pixel values may be read out somewhat randomly

Advantages and Disadvantages of Each Technology

CCD
  High quality, low noise images
  Good pixel-to-pixel uniformity
  Electronic shutter without artifacts
  100% fill factor
  Highest sensitivity
  High power consumption
  Multiple voltages required
  Increased system integration complexity and cost

CMOS
  Low power consumption
  Camera functions and additional control circuitry can be implemented on the CMOS sensor chip itself
  Random pixel read-out capability (windowing)
  Fixed pattern noise
  Higher dark current noise
  Lower light sensitivity

Color vs. Monochrome

Most machine vision applications use monochrome cameras. With color imaging you have 3x the amount of data to process, or 1/3 the spatial resolution, so you need to evaluate the benefit of the color information relative to the increased complexity and reduced resolution.

Machine vision implemented with a color camera is suitable for sorting, not colorimetry. For robustness, the colors being differentiated need to be widely spaced. Watch for uniform spectral output of your light source in color applications (remember that the camera measures the reflected light).

CAMERA INTERFACE

How Vision Works

Take a picture
Process the image data
Make a decision or measurement
Do something useful with the results

Standard Vision Components

Computer, Frame Grabber, Custom Image Processor (not everything enclosed in the box is required)

Hardware Common to Most Vision Systems

Camera
  Sensor, format, interfaces
Processor
  Frame grabber, I/O, interfaces, packaging
Optics
  Lenses and accessories
Lighting
  Source and technique
Other Accessories
  Enclosures, cables, power supplies

Camera Types Based on the Hardware Architecture
  PC Based
  Smart Camera
  Embedded Vision Camera


In Detail

PC-based vision is usually more effective for larger systems:
  Additional cameras come at very low incremental cost
  The PC is available for complex image processing or post-processing tasks
  The PC can be used for storing images, collecting process data, and programming system updates

Smart cameras can be cost effective where:
  A small number of cameras is required
  The operation of each smart camera is independent of the others in the cell
  Minimal post-processing of data is required
  There is no logic between cameras
  Lower-end vision algorithms are sufficient

An Embedded Vision System provides a complete hardware packaging and software integration solution.

Signal Flow of Image from Camera to Computer (Analog)

Vision Camera (Digital Image Sensor -> DAC -> Analog Signal, RS-170 / CCIR) -> Frame Grabber (ADC -> Image Buffer)

Signal Flow of Image from Camera to Computer (Digital)

Vision Camera (Digital Image Sensor -> ADC -> Digital Serial or Parallel Interface) -> Frame Grabber (Image Buffer). The frame grabber may be, or may become, a part of the camera.

Bandwidth, Resolution and Frame Rate

The bandwidth of an interface protocol is shared between the resolution of the image and the frame rate. The frame rate of a camera depends upon the camera interface and also on the camera electronics. Frame rates can go up to a million frames per second.
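A minimal sketch of that bandwidth trade-off (link rate, resolution and pixel depth are assumed example values):

```python
def max_frame_rate(bw_bytes_per_s: float, width: int, height: int,
                   bytes_per_px: int = 1) -> float:
    """Frames per second a shared-bandwidth link can sustain."""
    return bw_bytes_per_s / (width * height * bytes_per_px)

# A GigE-class link (~125 MB/s) moving 8-bit 1280 x 1024 images:
print(f"{max_frame_rate(125e6, 1280, 1024):.0f} fps")  # ~95 fps
```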


Standards for Digital Interface

  Interface     Bandwidth                    Power over Cable   Camera Availability
  Camera Link   250 MBps                     No                 Low
  IEEE 1394     400 Mbps (a), 800 Mbps (b)   Yes                Moderate
  USB           500 Mbps                     Yes                Extensive
  GigE          1 Gbps                       No                 Moderate

CAMERA CALIBRATION

Camera Parameters and Calibration

World Coordinate -> Camera Coordinates
  Extrinsic parameters
  Intrinsic parameters

The process of finding the intrinsic and the extrinsic parameters of a camera is called camera calibration, and it depends on the model chosen for the camera.
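As one concrete way to do this (not the procedure from the slides), OpenCV's planar-target calibration recovers both parameter sets; the file pattern and board size below are assumptions:

```python
from glob import glob
import cv2
import numpy as np

pattern = (9, 6)  # inner corners of the assumed chessboard target
# 3D corner positions on the board plane (Z = 0), in board units:
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts, size = [], [], None
for path in glob("calib_*.png"):  # hypothetical calibration images
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

# K holds the intrinsics (focal lengths, principal point); rvecs and tvecs
# are the extrinsics (rotation, translation) of the target in each view.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
print("reprojection RMS error:", rms)
```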

Ideal Model of a Camera: The Perspective Projection

A point p = [x, y] in the image plane is given by

  x = f (X / Z)
  y = f (Y / Z)

where p is the image of the point P = [X, Y, Z] in world space.

An Approximate Model: the Scaled Orthographic Projection

An approximate linear model: x = sX, y = sY, with a single scale factor s (f divided by the average scene depth). Its validity depends on the working distance and the relative depths of objects in the scene.
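A minimal sketch comparing the two models on a shallow scene (all values are assumed examples):

```python
import numpy as np

f = 25.0
P = np.array([[100.0, 40.0, 990.0],
              [120.0, 40.0, 1010.0]])   # shallow scene ~1000 mm away

persp = f * P[:, :2] / P[:, 2:3]        # x = f X/Z, y = f Y/Z
s = f / P[:, 2].mean()                  # one scale for the whole scene
weak = s * P[:, :2]                     # x = s X,   y = s Y

print(np.abs(persp - weak).max())       # small when depth variation is small
```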

Failure of Orthographic Projection: An Example

MACHINE VISION SOFTWARE

Software is (just) a TOOL

Remember: if you think that the only tool you have in your hand is a hammer, everything around you tends to look like a nail.

Let's First Look at How Humans Process Image Data
  Shape
  Color
  Spatial Relationship
  Context


Human Recognition by Shape and Color

Color Aids in Recognition, But is not Necessary

Spatial Relationship & Context: Can You Read This?

Cna yuo raed this? It dsenot mtaetr in wtah oerdr the ltteres in the wrod are, the olny iproamtnt tihng is taht the frsit and lsat ltteer be in the rghit pclae.

Limitations of Vision Computers

Just as the camera is no match for human vision, we'll see that the computer cannot even begin to duplicate how the human brain processes image data. A small subset of processing algorithms is generally used for industrial vision:
  Almost all are based on a priori information
  Vision is not up to the "anything, anywhere" problem


Scene Constraints in Vision

Parts
  Discrete parts or endless material (e.g., paper)
  Minimum and maximum dimensions
  Changes in shape
  Description of the features that have to be extracted
  Changes of these features concerning error parts and common product variation
  Surface finish
  Color
  Corrosion, oil films, or adhesives
  Changes due to part handling

Contd.

Part Presentation (on a conveyor)
  indexed positioning
  continuous movement

If there is more than one part in view, the following topics are important:
  number of parts in view
  overlapping parts
  touching parts

What Vision Computers Do with Images

Image Processing (Image Enhancement)
  Perform mathematical or logical calculations on an image and convert the image into another image where the pixels have different values

Image Analysis
  Perform mathematical or logical calculations on an image to extract features which describe the image content in numerical terms

When and Why We Process and Analyze Images

IMAGE ENHANCEMENT
  Reduce or eliminate noise
  Enhance information
  Subdue unnecessary or confusing background information
  Make decision analysis easier

IMAGE ANALYSIS
  Generate quantitative information about complex image data for accept/reject decisions, identification, sorting and counting
  Make decisions


Image Processing (Enhancement)

Some Image Enhancement Techniques

Point Transformations
  Threshold (Binarization)
  Histograms (Equalization)

Neighborhood Processing Techniques
  Spatial filtering (e.g., an edge detector)
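A minimal sketch of the point transformations named above, using OpenCV; the file name and threshold level are placeholders:

```python
import cv2

img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)  # hypothetical image

# Threshold (binarization): pixels above 128 become 255, the rest 0.
_, binary = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)

# Histogram equalization: spread gray levels to improve contrast.
equalized = cv2.equalizeHist(img)

cv2.imwrite("binary.png", binary)
cv2.imwrite("equalized.png", equalized)
```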

Image Analysis

Algorithm output: feature vectors such as Centroid, Location, Area, Perimeter, Bounding Box, Compactness, % Match.

How Vision Systems Extract and Use Features from the Image

Despite the wide range of feature vectors that can be extracted from the image, what you do with the values is quite consistent:
  Compare to a known good part
  Calculate the distance from one feature to another
  Calculate the size of the feature
  Locate the feature in the field of view

Vision systems do not process all the pixels in the image: from a priori information, you know where the important features are, and process pixels only in that region.


Region of Interest (ROI)

Set up a window, or Region of Interest, and process only those pixels in that region.
  Removes background or extraneous information that will not have to be processed
  Reduces a big image to a small subset
  Encompasses only the area where the feature appears
  Allow enough extra coverage for part and fixture tolerances
  Or use a tool to find the part, then automatically adjust the window location for the new part location - called "fixturing"
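A minimal sketch of the idea in NumPy terms (coordinates assumed): the ROI is just a sub-array of the image, so only those pixels get processed:

```python
import numpy as np

image = np.zeros((480, 640), np.uint8)  # stand-in for a captured frame
x, y, w, h = 200, 150, 120, 80          # window where the feature appears
roi = image[y:y+h, x:x+w]               # a view, not a copy
print(roi.mean())                       # e.g., grayscale average inside the ROI
```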

Typical Geometries for ROI

Information Content in Images

Spectral
  Color or brightness of pixel data
Spatial
  Relationship of pixel information in space
Temporal
  Changes in pixel values with time

Spectral Information for Image Analysis

Spectral Analysis
  Can be used for presence or absence
  No location information is available in the feature vector


Algorithms for Extracting Spectral Information
  Binary Pixel Count
  Grayscale Average Intensity
  Histogram Analysis

These algorithms measure how light or dark the image is, and make decisions based on that measured value.

Setting Up a Binary Pixel Count Application

Define the region of interest (you can have more than one; they can touch or overlap). Threshold the grayscale image to binary and count the pixels (white or black) in each ROI; the computer returns the number of pixels. Then compare the measured number of pixels to some standard value to make the decision, as in the sketch below.
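A minimal sketch of those steps (the file name, ROI, threshold and acceptance values are assumed examples):

```python
import cv2

img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)   # hypothetical image
x, y, w, h = 100, 80, 200, 150                       # hypothetical ROI

_, binary = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)
count = cv2.countNonZero(binary[y:y+h, x:x+w])       # white pixels in the ROI

GOOD_PART_COUNT, TOLERANCE = 12_000, 1_500           # learned from good parts
decision = "accept" if abs(count - GOOD_PART_COUNT) <= TOLERANCE else "reject"
print(count, decision)
```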

Counting Pixels is Not Precise

When you count the number of pixels in the ROI, the count may change from image to image for the same part, even if the part and its location are kept the same, due to camera noise and lighting. If the part is moved slightly you get more variation; if you measure different parts, the variability increases further. If you make multiple measurements you can plot the distribution of pixel counts in a histogram to study how much variation you have in the process.

Setting the Accept/Reject Threshold

Plot the histogram distribution for both good and bad parts, and verify a wide separation in the feature vector values. Set an accept/reject threshold somewhere in between the two: red is the bad part, green is the good part, and a pixel count somewhere in between is set as the threshold for "part OK".


The Problem of False Accepts and False Rejects

All feature vectors measured by vision systems have normal process variations. During setup you need to verify that there is sufficient separation between the measurements the vision system makes on good parts and on bad parts.

False Rejects
  Thresholds are set incorrectly, at a level that guarantees that only good parts are accepted by the machine. Many specifications read "SHALL ACCEPT NO BAD PARTS". The result is falsely rejecting good parts, which interferes with production efficiency.

False Accepts
  Thresholds are set incorrectly, at a level that relieves production concerns about rejecting too many parts that the operator would call OK. The result is accepting bad parts.

Grayscale Average Analysis

The system calculates the average of the grayscale values of the pixels in the ROI. The measured value is compared with the values for good and bad parts in order to make an accept/reject decision. Can be used for presence or absence.


Histogram Analysis

The system calculates the histogram of the grayscale values of the pixels in an ROI. Features of the histogram are compared to values for good and bad parts to make an accept/reject decision. Good for texture analysis, or for dynamically adjusting the binary threshold.

Spatial Analysis

Relationship of pixel information in space. Can be used for measurement and for finding location and size. Types of spatial analysis:
  Connectivity
  Edge analysis
  Correlation
  Geometric vector matching
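A minimal sketch of the histogram analysis above, using Otsu's method as one way to derive the binary threshold dynamically from the histogram (the file name is a placeholder); the threshold then feeds the connectivity analysis below:

```python
import cv2

roi = cv2.imread("roi.png", cv2.IMREAD_GRAYSCALE)  # hypothetical ROI image

hist = cv2.calcHist([roi], [0], None, [256], [0, 256]).ravel()
print("darkest occupied gray level:", hist.nonzero()[0][0])

# Otsu picks the threshold that best separates the two histogram modes:
level, binary = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print("auto threshold:", level)
```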

Connectivity Analysis

Set up an ROI, threshold the image (it is a binary process only; also known as blob analysis) and initiate the algorithm. The system returns a list of geometric features about each blob in the image, as in the sketch below.

Some Geometric Features from Connectivity Analysis
  Area (number of white pixels)
  Perimeter (blue + red)
  Convex perimeter (blue + green)
  Compactness: ratio of perimeter to area
  Roughness: ratio of convex perimeter to area
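A minimal sketch of blob analysis with OpenCV (the file name and threshold are assumed): label the connected white regions and report per-blob geometry:

```python
import cv2

img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)   # hypothetical image
_, binary = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)

n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
for i in range(1, n):                                # label 0 is the background
    x, y, w, h, area = stats[i]                      # bounding box and area
    cx, cy = centroids[i]                            # centre of gravity
    print(f"blob {i}: area={area}, bbox=({x},{y},{w},{h}), "
          f"centroid=({cx:.1f},{cy:.1f})")
```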


More Geometric Features
  Centre of gravity (average in x and y)
  Bounding box (red)
  Minimum x (or y) coordinate
  Number of holes
  Aspect ratio (ratio of span in x to span in y)
  Number of runs

How Geometric Features are Used

Location: the centre of gravity, or minimum (maximum) pixel locations, can be used to identify where the object is in the image.

Identification: a family of geometric features can be used to differentiate objects.

Verification: similar to presence/absence evaluation with spectral analysis, except that more information is present, providing a more robust decision.

Repeatability and Accuracy Depend on the Blob Feature

Centre of Gravity (RED, average in x and y)
  Averages the centre position of each line of pixels in the rows and columns
  Provides sub-pixel accuracy

Bounding Box (BLUE)
  Each coordinate is determined by the location of one pixel

Power of Blob Analysis
  Provides information on the object location and geometry
  Better than pixel counting because you count only contiguous pixels
  Eliminates unwanted features or noise, such as specular reflections
  You can size the object
  Geometric verification of blob features provides an additional check that you are counting the right pixels
  The downside is that it is a binary process


Edge Analysis

Identify edge pixels: vertical edges are identified where the grayscale changes as you scan along the horizontal direction; horizontal edges are identified by scanning in the vertical direction; oblique edges are calculated from a combination of the horizontal and vertical edge strengths.

Measurement tools are available that give distances in pixels:
  Measure from line to line (caliper tool)
  Measure the angle between two lines
  Measure from a point to a line (perpendicular to the line)

Sub-pixel accuracy can be achieved if contiguous pixels along an edge are combined into a line used for measurement.
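A minimal sketch of the edge-strength idea above (file name and threshold assumed): horizontal gradients respond to vertical edges, vertical gradients to horizontal edges, and their magnitude captures oblique edges:

```python
import cv2
import numpy as np

img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)  # hypothetical image

gx = cv2.Sobel(img, cv2.CV_32F, 1, 0)  # scan along x -> vertical edges
gy = cv2.Sobel(img, cv2.CV_32F, 0, 1)  # scan along y -> horizontal edges
magnitude = np.hypot(gx, gy)           # combined strength for oblique edges

edges = magnitude > 100.0              # assumed edge-strength threshold
print("edge pixels:", int(edges.sum()))
```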

Template Matching: Normalized Grayscale Correlation

Matching a trained model to the image. It does not require the user to know much about the features or grayscale values, but you must understand features versus noise or background clutter, and good image contrast is important. A powerful technique used extensively in vision for electronics and printing; the two main variants are normalized correlation and geometric vector matching.

A model of a "golden part" is taught. The trained template is moved over the image, and the system records the percentage match between the template and the image. The template is scanned over the entire search region; the location of the best fit and the % match are returned.


The Math Behind

This comparison is done by multiplying the grayscale values of the pixels in the model by the grayscale values of the pixels in the search area, and summing all the results. Two values are returned:
  Location of the best match
  % Match value - how close the match is

For normalized grayscale correlation, the average grayscale intensities of the model and the search area are made equal.

Potential Problems with Correlation-based Matching
  Presence of similar features
  Change in scale
  Angular rotation
  Change in color
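A minimal sketch of this search with OpenCV's normalized correlation score TM_CCOEFF_NORMED, which subtracts the means as described above (file names and the acceptance threshold are assumed):

```python
import cv2

scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)        # hypothetical image
model = cv2.imread("golden_part.png", cv2.IMREAD_GRAYSCALE)  # trained model

scores = cv2.matchTemplate(scene, model, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_loc = cv2.minMaxLoc(scores)

print(f"best match {best_score:.1%} at {best_loc}")
if best_score < 0.80:  # assumed acceptance threshold
    print("no acceptable match found")
```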

What's the Next Better Solution? Geometric Vector Match

Model -> Edge Image -> List of Vectors
Pixels in search region -> Edge Image -> List of Vectors

Less sensitive to scale, rotation and color variation than normalized grayscale correlation.

Issues Encountered with Geometric Pattern Matching
  The system erroneously matches regions of the image to the template:
    Edge strength too low
    % acceptable match might be set too low
    Search region too large - includes background noise that could be misclassified
  Background clutter not in the trained image causes a "no match found" condition
  Shadows can create additional features
  It can be slow for large, complex regions


An Application Example

Badge Identification: verify that the correct badge (V10) is present, using a trained model.

Failed Images: % Match Below Acceptable Threshold
  Badge too dark
  Shadow

Results

How to Get Good Results with Geometric Vector Matching
  Ensure high-contrast, consistent images
  Use an ROI which minimizes background noise in the search area
  Use software fixtures for the ROI
  Rather than one large ROI for the template, use multiple smaller ROIs which include unique features not seen elsewhere in the image
  Increase the signal-to-noise ratio


Summary
  Lighting flexibility and agility
  Camera resolution and speed
  Vision recognition tools
  Computational processing power
  Mathematical algorithms
  Robot work volume
  Gripper design and versatility
  Part and material handling