The Role of Bright Pixels in Illumination Estimation



  1. The Role of Bright Pixels in Illumination Estimation, November 2012

  2. Outline • Motivation • Related research • Extending the white-patch hypothesis • The effect of bright pixels in well-known methods • The bright-pixels framework • Further experiment • Conclusion

  3. Motivation • White-Patch method • One of the first colour constancy methods • Estimates the illuminant colour by the maximum response of the three channels • Few researchers or commercial cameras use it now • Recent research reconsiders white patch • A local mean calculation as a preprocessing step can significantly improve it [Choudhury & Medioni (CRICV09)] [Funt & Li (CIC2010)] • Analytically, the geometric mean of bright (specular) pixels is the optimal estimate of the illuminant, based on the dichromatic model [Drew et al. (CPCV12)]
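For reference, a minimal sketch of the White-Patch (max-RGB) estimate referred to above, assuming a linear RGB image stored as a NumPy array (an assumption, not stated on the slide):

```python
import numpy as np

def white_patch(img):
    """White-Patch / max-RGB: estimate the illuminant colour as the maximum
    response of each of the three channels, returned as a unit vector."""
    est = np.asarray(img, dtype=np.float64).reshape(-1, 3).max(axis=0)
    return est / np.linalg.norm(est)
```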

  4. Bright Pixels • Example image panels: a light source, just a bright surface, a white surface, highlights

  5. Previous Research • White Patch • Local mean calculation as a preprocessing step for White Patch • Using specular reflection • The specular reflection colour is the same as the illumination colour under the Neutral Interface Reflection assumption • Specular reflection usually falls in the bright areas of an image • Illumination estimation methods • Intersection of dichromatic planes [Tominaga and Wandell (JOSA89)] • Intersection of the lines generated by the chromaticity values of the pixels of each surface in the CIE chromaticity diagram [Lee (JOSA86)] • Extending Lee's algorithm with constraints on the colours of the illumination

  6. Grey-based illumination estimation • Grey-world • The average reflectance in the scene is achromatic • Shades-of-grey • Minkowski p-norm • Grey-edge • The average of the reflectance differences in a scene is achromatic
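These grey-based estimators are commonly written in a single Minkowski-norm framework (van de Weijer et al., edge-based colour constancy); this standard formulation is not on the slide but clarifies the family:

\[
\left( \int \left| \frac{\partial^{\,n} f_c(\mathbf{x})}{\partial \mathbf{x}^{\,n}} \right|^{p} \, d\mathbf{x} \right)^{1/p} = k\, e_c , \qquad c \in \{R, G, B\},
\]

where \(f_c\) is the image, \(e_c\) the illuminant estimate, and \(k\) a scale factor: \(n=0, p=1\) gives Grey-World, \(n=0, p\to\infty\) gives White-Patch, \(n=0\) with general \(p\) gives Shades-of-Grey, and \(n=1\) gives Grey-Edge.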

  7. Extending the White Patch Hypothesis • Extend the white-patch hypothesis: an image almost always includes at least one of a white patch, specularities, or a light source • Use the gamut of bright pixels, in contradistinction to the single maximum channel response of the White-Patch method, so that it includes the brightest pixels in the image • Remove clipped pixels (those exceeding 90% of the dynamic range) • Define bright pixels as the top T% of pixels by luminance, given by R+G+B • What is the probability of having an image without strong highlights, a source of light, or a white surface in the real world?
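A minimal sketch of this bright-pixel selection, assuming a linear RGB image as a NumPy array; the clip threshold of 90% of the dynamic range follows the slide, while the default T = 5% is illustrative:

```python
import numpy as np

def bright_pixels(img, top_percent=5.0, clip_frac=0.9, max_val=None):
    """Return the RGB values of the top-T% brightest unclipped pixels.

    img: H x W x 3 linear RGB array.
    top_percent: T, percentage of pixels kept, ranked by luminance R+G+B.
    clip_frac: pixels with any channel above clip_frac * max_val are
               treated as clipped and discarded.
    """
    pixels = img.reshape(-1, 3).astype(np.float64)
    if max_val is None:
        max_val = pixels.max()          # assume the image fills its range

    # Discard clipped pixels (any channel above 90% of the dynamic range).
    unclipped = pixels[(pixels < clip_frac * max_val).all(axis=1)]

    # Brightness is defined as R + G + B; keep the top T%.
    luminance = unclipped.sum(axis=1)
    threshold = np.percentile(luminance, 100.0 - top_percent)
    return unclipped[luminance >= threshold]
```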

  8. Simple Experiment • Test whether or not the actual illuminant colour falls inside the 2D gamut of the top 5% brightest pixels • SFU Laboratory Dataset: 88.16% • ColorChecker: 74.47% • GreyBall: 66.02% • Example panels: a specularity, a failure case, a white surface
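One way to run this check is sketched below, using rg-chromaticity and SciPy's Delaunay triangulation for the point-in-gamut test; the exact gamut test used in the experiment is not specified on the slide:

```python
import numpy as np
from scipy.spatial import Delaunay

def rg_chromaticity(rgb):
    """Project RGB values onto the 2D (r, g) chromaticity plane."""
    rgb = np.atleast_2d(rgb).astype(np.float64)
    total = rgb.sum(axis=1, keepdims=True)
    total[total == 0] = 1.0
    return (rgb / total)[:, :2]

def illuminant_in_bright_gamut(bright_rgb, illuminant_rgb):
    """True if the measured illuminant chromaticity lies inside the
    2D convex gamut of the bright pixels' chromaticities."""
    gamut = Delaunay(rg_chromaticity(bright_rgb))
    point = rg_chromaticity(illuminant_rgb)
    return bool(gamut.find_simplex(point)[0] >= 0)
```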

  9. The Effect of Bright Pixels on Grey-based Methods • Measure the effect of bright pixels • Run each grey-based method on the top 20% brightest pixels of each image and compare against using all image pixels (colour) • Using one fifth of the pixels gives better or equal performance (results on the ColorChecker dataset)

  10. The Effect of Bright Pixels on the Gamut Mapping Method • The white-patch gamut and the canonical white-patch gamut were introduced in [Vaezi Joze & Drew (ICIP12)] • The white-patch gamut is the gamut of the top 5% brightest pixels in an image • Adding new constraints based on the white-patch gamut to the standard Gamut Mapping constraints outperforms the Gamut Mapping method and its extensions • Figure: canonical gamut vs. white-patch canonical gamut

  11. The Bright-Pixels Framework • If these bright pixels represent highlights, a white surface, or a light source, they approximate the colour of the illuminant • Try the mean, median, geometric mean, and Minkowski p-norm (p = 2, p = 4) of the top T% brightest pixels
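A sketch of these pooling options over the selected bright pixels; normalising the estimate to unit length is an assumption, not stated on the slide:

```python
import numpy as np

def pooled_estimate(bright_rgb, method="pnorm", p=2):
    """Pool bright-pixel RGB values into a single illuminant estimate."""
    x = np.asarray(bright_rgb, dtype=np.float64)
    if method == "mean":
        est = x.mean(axis=0)
    elif method == "median":
        est = np.median(x, axis=0)
    elif method == "geomean":
        est = np.exp(np.log(np.maximum(x, 1e-12)).mean(axis=0))
    elif method == "pnorm":                 # Minkowski p-norm per channel
        est = (x ** p).mean(axis=0) ** (1.0 / p)
    else:
        raise ValueError("unknown pooling method: %s" % method)
    return est / np.linalg.norm(est)        # unit-length RGB direction
```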

  12. The Bright-Pixels Framework • A local mean calculation can help: • Resizing to 64 × 64 pixels by bicubic interpolation • Median filtering • Gaussian blur filtering • It does not help much on these images (results on the ColorChecker dataset)
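A sketch of these local-mean options using SciPy's ndimage filters; the filter sizes are illustrative assumptions, and the cubic spline zoom stands in for bicubic resizing:

```python
import numpy as np
from scipy import ndimage

def local_mean(img, mode="gaussian"):
    """Apply one of the local-mean preprocessing options to an RGB image."""
    img = np.asarray(img, dtype=np.float64)
    if mode == "resize":
        # Downsample to roughly 64 x 64 with cubic interpolation.
        h, w = img.shape[:2]
        return ndimage.zoom(img, (64.0 / h, 64.0 / w, 1.0), order=3)
    if mode == "median":
        # Median filter applied per channel.
        return ndimage.median_filter(img, size=(5, 5, 1))
    if mode == "gaussian":
        # Gaussian blur applied per channel.
        return ndimage.gaussian_filter(img, sigma=(2.0, 2.0, 0.0))
    return img                              # "no" preprocessing
```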

  13. Datasets • SFU Laboratory [Barnard & Funt (CRA02)] • 321 images under 11 different measured illuminants • Reprocessed version of ColorChecker [Gehler et al. (CVPR08)] • 568 images, both indoor and outdoor • GreyBall [Ciurea & Funt (CIC03)] • 11,346 images extracted from video recorded under a wide variety of imaging conditions • HDR dataset [Funt et al. (2010)] • 105 HDR images

  14. The Bright-Pixels Method • Remove clipped pixels • Apply a local mean • {none, median, Gaussian, bicubic} • Select the top T% brightest pixels • Threshold T = {0.5%, 1%, 2%, 5%, 10%} • Estimate the illuminant by the shades-of-grey equation • p = {1, 2, 4, 8} • If the estimated illuminant is not in the possible-illuminant gamut, use grey-edge instead
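A sketch tying these steps together, reusing the helpers sketched earlier (bright_pixels, pooled_estimate, local_mean); the possible-illuminant gamut test and the grey-edge fallback are passed in as callables because their exact form is not given on the slide, and the default parameter values are illustrative:

```python
def bright_pixels_estimate(img, top_percent=2.0, p=4, mode="gaussian",
                           in_possible_gamut=None, grey_edge=None):
    """Bright-pixels pipeline: local mean -> clipped-pixel removal ->
    top-T% selection -> Minkowski p-norm pooling, with an optional
    grey-edge fallback when the estimate leaves the possible gamut.
    (The slide removes clipped pixels before the local mean; here the
    bright_pixels helper discards them during selection.)"""
    smoothed = local_mean(img, mode=mode)
    bright = bright_pixels(smoothed, top_percent=top_percent)
    est = pooled_estimate(bright, method="pnorm", p=p)
    if in_possible_gamut is not None and not in_possible_gamut(est):
        est = grey_edge(img)                # fall back to grey-edge
    return est
```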

  15. Further Experiment • Comparison with well-known colour constancy methods

  16. Optimal parameters • Gaussian blurring for high-resolution images and no blurring for lower-resolution images • Even a 0.5% threshold is enough for in-laboratory images; for real-world images the threshold should be 1-2%

  17. Conclusion • Based on the current datasets in the field, we saw that the simple idea of using the p-norm of bright pixels, after a local mean preprocessing step, performs surprisingly competitively with complex methods. • Either the probability of encountering a real-world image without strong highlights, a source of light, or a white surface is not overwhelmingly great, or the current colour constancy datasets are conceivably not good indicators of performance on possible real-world images.

  18. Questions? Thank you.
