
Color Constancy Algorithms

Pi19404
December 30, 2012

Contents

0.1 Color Constancy
0.2 Gray World Assumption
0.3 Normalized Minkowski p-norm
0.4 Max-RGB Algorithm
0.5 Gray-Edge Algorithm
0.6 Max-Edge Algorithm
0.7 Gray World Algorithm and Shades of Gray in Lab Color Space
0.8 A Simple White Balance Algorithm
0.9 References
0.10 Code

0.1 Color Constancy

Color constancy is the ability to perceive color independently of the light source. The light source may introduce color casts in captured digital images. A standard way to solve the color constancy problem is to first estimate the color of the prevailing light and then, in a second stage, remove it: once the color of the light in the individual channels is obtained, each color pixel is normalized by a scaling factor. Two of the most commonly used simple techniques for estimating the color of the light are the Gray-World and Max-RGB algorithms. These two methods work well in practice if the average scene color is gray or the brightest point in the scene is white.
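For concreteness, here is a minimal sketch of the second (removal) stage under a simple per-channel (diagonal) correction, assuming an illuminant estimate e = (e_R, e_G, e_B) has already been obtained by one of the methods below; the function name and the rescaling convention are illustrative choices, not the author's code.

```python
import numpy as np

def remove_color_cast(img, e):
    """Scale each channel by a gain derived from the illuminant estimate e.
    e must be given in the same channel order as img; gains are rescaled so
    the strongest channel is left unchanged."""
    e = np.maximum(np.asarray(e, dtype=np.float32), 1e-6)   # avoid division by zero
    gains = e.max() / e                                      # per-channel scaling factors
    out = img.astype(np.float32) * gains
    return np.clip(out, 0, 255).astype(np.uint8)
```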

0.2 Gray World Assumption

The gray world assumption is a white balance method that assumes that the scene, on average, is a neutral gray. The assumption holds if the scene contains a good distribution of colors; in that case the average reflected color is taken to be the color of the light. We can therefore estimate the illumination color cast by looking at the average color and comparing it to gray. The gray world algorithm produces an estimate of the illumination by computing the mean of each channel of the image.

One method of normalization uses the mean of the three channel means as the illumination estimate of the image. To normalize channel i, each pixel value is scaled by

s_i = avg / avg_i

where avg_i is the channel mean and avg is the illumination estimate.

Another method of normalization is to normalize to the maximum channel, scaling channel i by

r_i = max(avg_R, avg_G, avg_B) / avg_i

A third method first normalizes each channel mean by the norm of the mean vector,

m_i = avg_i / sqrt(avg_R^2 + avg_G^2 + avg_B^2),    r_i = max(m_R, m_G, m_B) / m_i

A minimal implementation sketch is given below; the figures after it show example outputs of the three normalization methods.
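The following NumPy/OpenCV sketch implements the gray world estimate with the three normalizations described above; the function name, the clipping to [0, 255] and the commented usage (including the file name) are my own choices, not taken from the author's code.

```python
import numpy as np
import cv2

def gray_world(img, method=1):
    """Gray world white balance for an 8-bit BGR image using one of three normalizations."""
    x = img.astype(np.float32)
    avg = np.maximum(x.reshape(-1, 3).mean(axis=0), 1e-6)   # per-channel means avg_i
    if method == 1:                                          # s_i = avg / avg_i
        scale = avg.mean() / avg
    elif method == 2:                                        # r_i = max(avg_R, avg_G, avg_B) / avg_i
        scale = avg.max() / avg
    else:                                                    # m_i = avg_i / ||avg||, r_i = max(m) / m_i
        m = avg / np.linalg.norm(avg)
        scale = m.max() / m
    return np.clip(x * scale, 0, 255).astype(np.uint8)

# example usage (file name is hypothetical):
# balanced = gray_world(cv2.imread("reef.jpg"), method=3)
```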

Figure 1: Example 1.1, gray world. (a) original; (b)-(d) normalization methods 1-3.
Figure 2: Example 1.2, gray world. (a) original; (b)-(d) normalization methods 1-3.
Figure 3: Example 1.3, gray world. (a) original; (b)-(d) normalization methods 1-3.
Figure 4: Example 1.4, gray world. (a) original; (b)-(d) normalization methods 1-3.
Figure 5: Example 1.4, gray world. (a) original; (b)-(d) normalization methods 1-3.
Figure 6: Example 1.5, gray world. (a) original; (b)-(d) normalization methods 1-3.
Figure 7: Example 1.5, gray world. (a) original; (b)-(d) normalization methods 1-3.
Figure 8: Example 1.6, gray world. (a) original; (b)-(d) normalization methods 1-3.
Figure 9: Example 1.7, gray world. (a) original; (b)-(d) normalization methods 1-3.
Figure 10: Example 1.8, gray world. (a) original; (b)-(d) normalization methods 1-3.
Figure 11: Example 1.9, gray world. (a) original; (b)-(d) normalization methods 1-3.
Figure 12: Example 1.10, gray world. (a) original; (b)-(d) normalization methods 1-3.
0.3 Normalized Minkowski p-norm

Another variant estimates the illumination vector by calculating the normalized Minkowski p-norm of each color channel: for each channel, the Minkowski p-norm is computed and the normalized result forms the estimated illumination vector,

e_i = ( (1/N) * sum_k I_i(k)^p )^(1/p)

where the summation is over all N pixels of channel i. The gray world algorithm is obtained by setting p = 1. This family of estimates is known as shades of gray; Finlayson and Trezzi concluded that using the Minkowski norm with p = 6 gave the best estimation results on their data set.
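A minimal sketch of this estimator in NumPy, reusing the max-channel scaling from the previous section; the function name and the choice of normalization are my own assumptions.

```python
import numpy as np

def shades_of_gray(img, p=6):
    """Estimate the illuminant with the Minkowski p-norm (p = 1 reduces to gray world)."""
    x = img.astype(np.float32).reshape(-1, 3)
    e = np.power(np.power(x, p).mean(axis=0), 1.0 / p)   # e_i = ((1/N) sum I_i^p)^(1/p)
    e = np.maximum(e / np.linalg.norm(e), 1e-6)           # normalized illumination vector
    scale = e.max() / e                                    # per-channel gains
    return np.clip(img.astype(np.float32) * scale, 0, 255).astype(np.uint8)
```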

Figure 13: Example 2.a1, Minkowski norm 6. (a) original; (b)-(d) normalization methods 1-3.
Figure 14: Example 2.a2, Minkowski norm 6. (a) original; (b)-(d) normalization methods 1-3.
Figure 15: Example 2.4, shades of gray. (a) original; (b)-(d) normalization methods 1-3.
Figure 16: Example 2.4, shades of gray. (a) original; (b)-(d) normalization methods 1-3.
Figure 17: Example 2.4, shades of gray. (a) original; (b)-(d) normalization methods 1-3.
Figure 18: Example 2.4, shades of gray. (a) original; (b)-(d) normalization methods 1-3.
Figure 19: Example 2.5, shades of gray. (a) original; (b)-(d) normalization methods 1-3.
Figure 20: Example 2.6, shades of gray. (a) original; (b)-(d) normalization methods 1-3.
Figure 21: Example 2.7, shades of gray. (a) original; (b)-(d) normalization methods 1-3.
Figure 22: Example 2.8, shades of gray. (a) original; (b)-(d) normalization methods 1-3.
Figure 23: Example 2.9, shades of gray. (a) original; (b)-(d) normalization methods 1-3.
0.4 Max-RGB Algorithm
This method is based on the assumption that the human visual system achieves color constancy by detecting the area of highest reflectance in the field of view and normalizing the responses according to these maximal values. The algorithm calculates the maximum value in the R, G and B channels and normalizes the pixels according to these maxima. It produces correct results if the scene contains a white patch that reflects all light equally, so that the maxima of R, G and B are found on that white patch. It is advisable to ignore pixels above roughly 95% of the dynamic range so that clipped or saturated pixel values do not bias the estimate. The shades of gray algorithm is a combination of the gray world and max-RGB algorithms. However, this method may not work in all cases; examples are shown below. The enhancement is also very subtle in both cases: the changes become visible when the program is run on images at their actual size, where the difference can be clearly observed.
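A sketch of max-RGB with an optional high percentile in place of the strict maximum, reflecting the suggestion above to ignore the top of the dynamic range; the percentile parameter and function name are illustrative, not the author's code.

```python
import numpy as np

def max_rgb(img, percentile=100.0):
    """White balance by the per-channel maximum (or a high percentile to skip clipped pixels)."""
    x = img.astype(np.float32)
    m = np.percentile(x.reshape(-1, 3), percentile, axis=0)   # per-channel maxima
    m = np.maximum(m, 1e-6)                                    # avoid division by zero
    scale = m.max() / m                                        # equalize channels to the largest maximum
    return np.clip(x * scale, 0, 255).astype(np.uint8)

# e.g. max_rgb(img, percentile=95.0) ignores the brightest 5% of values in each channel
```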

Figure 24: Example 3, max-RGB algorithm. (a) original; (b) normalization method 2; (c) normalization method 3.
Figure 25: Example 4, max-RGB algorithm. (a), (b) original; (c) normalization method 2; (d) normalization method 3.
0.5 Gray-Edge Algorithm
This method is similar to the shades of gray algorithm but considers the image derivatives instead of the image itself. It is based on the observation that the distribution of image derivatives forms an ellipsoid in R, G, B space whose long axis coincides with the illumination vector. The first step of the algorithm is to apply a Gaussian filter to each color channel separately; this suppresses noise, which would otherwise be amplified by the derivative. The Minkowski norm of the derivative of each channel is then computed, and the vector components are normalized to obtain the estimate of the illumination vector; the normalization can be performed with any of the above-mentioned techniques. Like max-RGB, it shows very little improvement for the underwater images.
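A rough sketch of the gray-edge estimate: Gaussian smoothing, gradient magnitude per channel, Minkowski norm, then the same per-channel scaling as before. The use of Sobel derivatives and the sigma and p defaults are illustrative choices, not necessarily those of the author's implementation.

```python
import cv2
import numpy as np

def gray_edge(img, p=6, sigma=1.0):
    """Estimate the illuminant from the Minkowski p-norm of the image derivatives."""
    x = img.astype(np.float32)
    e = np.zeros(3, np.float32)
    for c in range(3):
        ch = cv2.GaussianBlur(x[:, :, c], (0, 0), sigma)        # smooth to suppress noise
        dx = cv2.Sobel(ch, cv2.CV_32F, 1, 0)                    # horizontal derivative
        dy = cv2.Sobel(ch, cv2.CV_32F, 0, 1)                    # vertical derivative
        grad = np.sqrt(dx * dx + dy * dy)                       # derivative magnitude
        e[c] = np.power(np.power(grad, p).mean(), 1.0 / p)      # Minkowski p-norm of the derivatives
    e = np.maximum(e / np.linalg.norm(e), 1e-6)                 # normalized illumination vector
    scale = e.max() / e
    return np.clip(x * scale, 0, 255).astype(np.uint8)
```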

Figure 26: Example 4.1, Gray-Edge. (a) original; (b)-(d) normalization methods 1-3.
Figure 27: Example 4.2, Gray-Edge. (a) original; (b)-(d) normalization methods 1-3.
Figure 28: Example 4.3, Gray-Edge, Minkowski norm 6. (a) original; (b)-(d) normalization methods 1-3.
Figure 29: Example 4.4, Gray-Edge, Minkowski norm 6. (a) original; (b)-(d) normalization methods 1-3.

0.6 Max-Edge Algorithm
This method is similar to the max-RGB algorithm but considers the image derivatives instead of the image itself. It is based on the same observation that the distribution of image derivatives forms an ellipsoid in R, G, B space whose long axis coincides with the illumination vector. The first step of the algorithm is to apply a Gaussian filter to each color channel separately to suppress noise, which would otherwise be amplified by the derivative. The maximum derivative value in each channel is then computed and used as the estimate of the illumination vector; the vector components are normalized with any of the above-mentioned techniques. Like max-RGB, it produces only very subtle changes for the underwater images.
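The corresponding max-edge sketch differs from the gray-edge one only in taking the maximum gradient magnitude per channel instead of its Minkowski norm; again this is an illustrative sketch with assumed parameter choices, not the author's code.

```python
import cv2
import numpy as np

def max_edge(img, sigma=1.0):
    """Estimate the illuminant from the maximum derivative magnitude of each channel."""
    x = img.astype(np.float32)
    e = np.zeros(3, np.float32)
    for c in range(3):
        ch = cv2.GaussianBlur(x[:, :, c], (0, 0), sigma)   # smooth first to limit noise amplification
        dx = cv2.Sobel(ch, cv2.CV_32F, 1, 0)
        dy = cv2.Sobel(ch, cv2.CV_32F, 0, 1)
        e[c] = np.sqrt(dx * dx + dy * dy).max()            # maximum edge response per channel
    e = np.maximum(e / np.linalg.norm(e), 1e-6)            # normalized illumination vector
    scale = e.max() / e
    return np.clip(x * scale, 0, 255).astype(np.uint8)
```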

0.7 Gray World Algorithm and Shades of Gray in Lab Color Space
This is the same gray world algorithm implemented in the Lab color space. The mean values of the chromatic components are computed in the a and b channels. The a and b values of each pixel are then shifted by an amount proportional to the computed mean a and b values; the constant of proportionality is the normalized luminosity component l/l_max. OpenCV uses the 0-255 range for all components, so the values are converted so that l lies in the range 0-100 and a and b lie in the range -128 to +128. The shades of gray algorithm can also be modified using the same approach; however, no appreciable difference is observed.
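A sketch of the Lab variant, assuming OpenCV's 8-bit Lab encoding (L stored in [0, 255], a and b offset by 128) and the rescaling described above; the exact weighting is my reading of the text, so treat it as an assumption rather than the author's implementation.

```python
import cv2
import numpy as np

def gray_world_lab(img):
    """Gray world in Lab: shift a and b toward neutral, weighted by the normalized luminosity."""
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB).astype(np.float32)
    L, a, b = cv2.split(lab)
    # convert OpenCV's encoding to l in [0, 100] and a, b in [-128, 127]
    l = L * (100.0 / 255.0)
    a -= 128.0
    b -= 128.0
    mean_a = a.mean()                     # mean chromatic components
    mean_b = b.mean()
    a -= mean_a * (l / 100.0)             # shift proportional to l / l_max
    b -= mean_b * (l / 100.0)
    lab = cv2.merge([L, a + 128.0, b + 128.0])
    return cv2.cvtColor(np.clip(lab, 0, 255).astype(np.uint8), cv2.COLOR_LAB2BGR)
```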

Figure 30: Example 5.1, Max-Edge. (a) original; (b)-(d) normalization methods 1-3.
Figure 31: Example 5.2, Gray-Edge. (a) original; (b)-(d) normalization methods 1-3.
Figure 32: Example 6.1, gray world in Lab color space. (a) original; (b) normalization method 1.
Figure 33: Example 6.2, gray world in Lab color space. (a) original; (b) normalization method 1.
Figure 34: Example 6.1, gray world in Lab color space. (a) original; (b) normalization method 1.
Figure 35: Example 6.2, gray world in Lab color space. (a) original; (b) normalization method 1.
Figure 36: Example 6.3, gray world in Lab color space. (a) original; (b) normalization method 1.
Figure 37: Example 6.4, gray world in Lab color space. (a) original; (b) normalization method 1.
Figure 38: Example 6.5, gray world in Lab color space. (a) original; (b) normalization method 1.
Figure 39: Example 6.6, gray world in Lab color space. (a) original; (b) normalization method 1.
Figure 40: Example 6.7, gray world in Lab color space. (a) original; (b) normalization method 1.
Figure 41: Example 6.8, gray world in Lab color space. (a) original; (b) normalization method 1.
Figure 42: Example 6.9, gray world in Lab color space. (a) original; (b) normalization method 1.
Figure 43: Example 7.a, shades of gray in Lab color space. (a) original; (b) normalization method 1.
Figure 44: Example 7.b, shades of gray in Lab color space. (a) original; (b) normalization method 1.
Figure 45: Example 7.1, shades of gray in Lab color space. (a) original; (b) normalization method 1.
Figure 46: Example 7.2, shades of gray in Lab color space. (a) original; (b) normalization method 1.
Figure 47: Example 7.3, shades of gray in Lab color space. (a) original; (b) normalization method 1.
Figure 48: Example 7.4, shades of gray in Lab color space. (a) original; (b) normalization method 1.
Figure 49: Example 7.5, shades of gray in Lab color space. (a) original; (b) normalization method 1.
Figure 50: Example 7.6, shades of gray in Lab color space. (a) original; (b) normalization method 1.
Figure 51: Example 7.7, shades of gray in Lab color space. (a) original; (b) normalization method 1.
Figure 52: Example 7.8, shades of gray in Lab color space. (a) original; (b) normalization method 1.
Figure 53: Example 7.9, shades of gray in Lab color space. (a) original; (b) normalization method 1.

0.8 A Simple White Balance Algorithm

This is a variant of the contrast stretching algorithm. Contrast stretching is a method to improve the contrast of an image so that the values of each channel occupy the maximal range, [0, 255] for an 8-bit image. This can be performed by applying an affine transformation a*x + b to each channel, where a and b are chosen so that the maximal value becomes 255 and the minimal value becomes 0.

This method has the drawback that it may introduce artifacts in some cases. Consider the situation where the red channel occupies values only in the range 0-10: contrast stretching maps this range to 0-255, and pixels with values near zero suddenly take large values. The same happens if bright pixels dominate a channel; some values become dark after stretching, which does not provide a visually appealing output.

To avoid these cases, contrast stretching is performed only on a subset of the range occupied by the channel. Based on a predefined criterion, the present algorithm determines a range [v_min, v_max]; values below v_min are saturated to 0, values above v_max are saturated to 255, and contrast stretching is performed for the pixels in the range [v_min, v_max].

The input to the algorithm is the fraction of pixels to saturate at the lower and upper ends. Let these parameters be s1 and s2 respectively, and let N be the total number of pixels in the image; then s1*N pixels at the lower end and s2*N pixels at the upper end are to be saturated. The algorithm can be implemented as follows (a code sketch of these steps is given after the list):

1. Compute the histogram of the image.
2. Compute the cumulative distribution function (CDF) of the image.
3. Find pixel values i1 and i2 such that CDF(i1) = s1*N and CDF(i2) = (1 - s2)*N.
4. Set all pixels less than i1 to 0 and all pixels greater than i2 to 255.
5. For all pixels in the range [i1, i2], apply an affine transformation to perform contrast stretching:

p_i' = (p_i - i1) * 255 / (i2 - i1)

For color images the above algorithm is applied to each channel of the image.
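A sketch of these steps in NumPy, assuming an 8-bit color image and saturation amounts s1 and s2 given as fractions of the pixel count; the histogram/CDF bookkeeping and the guard against a degenerate range are my own choices.

```python
import numpy as np

def simplest_color_balance(img, s1=0.01, s2=0.01):
    """Saturate the lowest s1 and highest s2 fractions of values per channel,
    then stretch the remaining range [i1, i2] to [0, 255]."""
    out = np.empty_like(img)
    for c in range(img.shape[2]):
        ch = img[:, :, c]
        hist, _ = np.histogram(ch, bins=256, range=(0, 256))       # step 1: histogram
        cdf = np.cumsum(hist)                                       # step 2: cumulative distribution
        n = ch.size
        i1 = np.searchsorted(cdf, s1 * n)                           # step 3: lower quantile value
        i2 = np.searchsorted(cdf, (1.0 - s2) * n)                   #          upper quantile value
        i2 = max(i2, i1 + 1)                                        # guard against a degenerate range
        stretched = (ch.astype(np.float32) - i1) * 255.0 / (i2 - i1)  # step 5: affine stretch
        out[:, :, c] = np.clip(stretched, 0, 255).astype(img.dtype)   # step 4: clipping saturates the tails
    return out
```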

Figure 54: Example 1. (a) standard contrast stretching; (b) present algorithm.
Figure 55: Example 7.a, shades of gray in Lab color space. (a) original; (b) normalization method 1.
Figure 56: Example 7.b, shades of gray in Lab color space. (a) original; (b) normalization method 1.

All the images are taken from http://research.edm.uhasselt.be/oancuti/UnderwaterCVPR2012/images


Figure 57: Example 8.1, color contrast. (a) original; (b) normalization method 1.
Figure 58: Example 8.2, shades of gray in Lab color space. (a) original; (b) normalization method 1.
Figure 59: Example 8.3, color contrast. (a) original; (b) normalization method 1.
Figure 60: Example 8.4, color contrast. (a) original; (b) normalization method 1.
Figure 61: Example 8.5, color contrast. (a) original; (b) normalization method 1.
Figure 62: Example 8.6, color contrast. (a) original; (b) normalization method 1.
Figure 63: Example 8.7, color contrast. (a) original; (b) normalization method 1.
Figure 64: Example 8.8, color contrast. (a) original; (b) normalization method 1.
Figure 65: Example 8.9, color contrast. (a) original; (b) normalization method 1.

0.9 References

1. A New Color Balancing Method.
2. Limare, Nicolas, Jose-Luis Lisani, Jean-Michel Morel, Ana Belén Petro, and Catalina Sbert. "Simplest Color Balance." Image Processing On Line, 2011. http://www.ipol.im/pub/art/2011/llmps-scb/
3. Wikipedia contributors, "Color balance", Wikipedia, The Free Encyclopedia.

0.10 Code

For the code, refer to the author's online code repository.

