David Plowman has been sharing over at RaspberryPi.org (and in an earlier post here) about the progress the team has been making with the Raspberry Pi Camera Module. Both posts are worth a look given how open he has been about the challenges facing any lightweight camera system, including this section on white-balancing:
The Eye of the Beholder
That’s where beauty lies, so the saying goes. And for all the test charts, metrics and objective measurements that imaging engineers like to throw at their pictures, it’s perhaps sobering that the human eye – what people actually like – is the final arbiter of Image Quality (IQ). There has been much discussion, and no little research, on what makes for “good IQ”, but the consensus probably has it that while the micro aspects of IQ, such as sharpness, noise and detail, are very important, your eye turns first to the macro (in the sense of large scale) image features – exposure and contrast, colours and colour balance.
We live in a grey world…
All camera modules respond differently to red, green and blue stimuli. Of itself this isn’t so problematic as the behaviour can be measured, calibrated and transformations applied to map the camera’s RGB response (which you saw in the final ugly image of my previous post!) onto our canonical (or standard) notion of RGB. It’s in coping with the different kinds of illumination that things get a little tricky. Let me explain.
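The calibration step described above boils down to applying a measured transform to every pixel. A minimal sketch of the idea, assuming a hypothetical 3×3 colour correction matrix (real values would come from imaging a colour test chart with the specific sensor):

```python
import numpy as np

# Hypothetical colour correction matrix (CCM). In a real pipeline these
# coefficients are measured during calibration against a test chart;
# the values below are illustrative only. Each row sums to 1.0 so that
# grey inputs stay grey.
CCM = np.array([
    [ 1.6, -0.4, -0.2],
    [-0.3,  1.5, -0.2],
    [-0.1, -0.5,  1.6],
])

def correct_colours(rgb_image):
    """Map the sensor's native RGB response onto canonical RGB by
    multiplying each pixel (a length-3 vector) by the calibration matrix."""
    h, w, _ = rgb_image.shape
    pixels = rgb_image.reshape(-1, 3)   # flatten to an N x 3 array of pixels
    corrected = pixels @ CCM.T          # apply the matrix to every pixel
    return np.clip(corrected, 0.0, 1.0).reshape(h, w, 3)
```

Because the matrix mixes the channels (note the negative off-diagonal terms), it can compensate for the sensor's red, green and blue filters responding to overlapping parts of the spectrum.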
Imagine you’re looking at a sheet of white paper. That’s just the thing – it’s always white. If you’re outside on a sunny day, it’s white, and if you’re indoors in gloomy artificial lighting, it’s still white. Yet if you were objectively to measure the colour of the paper with your handy spectrometer, you’d find it wasn’t the same at all. In the first case your spectrometer will tell you the paper is quite blue, and in the second, that it’s very orange. The Human Visual System has adapted itself brilliantly over millions of years simply not to notice any difference, a phenomenon known as colour constancy.
No such luck with digital images, though. Here we have to correct for the ambient illumination to make the colours look “right”. Take a look at the two images [above].
It’s a scene taken in the Science Park in Cambridge, outside the Broadcom offices. The top one looks fine, but the bottom one has a strong blue cast. This is precisely because the top one has been (in the jargon) white-balanced for an outdoor illuminant and the bottom one for an indoor illuminant. But how do we find the right white balance?
The simplest assumption that camera systems can make is that every scene is, on average, grey, and it works surprisingly well. It has some clear limitations too, of course. With the scene above, a “grey world” white balance would actually give a noticeable yellow cast because of the preponderance of blue sky skewing the average. So in reality more sophisticated algorithms are generally employed which constrain the candidate illuminants to a known set (predominantly those a physicist would describe as being radiated by a black body, which includes sunlight and incandescent bulbs), and which key on colours other than merely grey (often specific memory colours, such as blue sky or skin tones)….
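The grey-world assumption is simple enough to sketch in a few lines: if the scene should average to grey, then each channel's mean should be equal, so we scale the red and blue channels to match green. This is a simplified illustration, not the Pi camera team's actual algorithm:

```python
import numpy as np

def grey_world_gains(rgb_image):
    """Grey-world white balance: assume the scene averages to grey,
    so compute per-channel gains that equalise the channel means.
    Green is kept as the reference channel with a gain of 1.0."""
    means = rgb_image.reshape(-1, 3).mean(axis=0)  # mean R, G, B
    return means[1] / means

def apply_gains(rgb_image, gains):
    """Scale each channel by its gain and clip back into range."""
    return np.clip(rgb_image * gains, 0.0, 1.0)
```

Running this on an image with a blue cast (blue mean higher than red and green) yields a blue gain below 1.0 and a red gain above it, pulling the averages together. The failure mode described above falls straight out of the code: a large patch of genuinely blue sky inflates the blue mean, so the algorithm suppresses blue everywhere and the result trends yellow.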