Fun photography fact! You might not know that the human eye works basically the same as a 100-megapixel digital camera with autofocus. The main difference is that we have two different types of pixel: R/G/B cone cells for color vision and grayscale rod cells that let us see in low light. The ‘pixels’ are also a lot denser at the point where we focus (the fovea) than out in the peripheral areas.
Pic taken with the Panasonic Lumix 20mm f/1.7, a not quite aberration-free lens with intangible image qualities that micro four thirds camera
Our firmware also presents an interesting question. The eye focuses just like my E-P1: it hunts back and forth until it finds a sharp edge. We call that contrast-detect autofocus (CDAF). Traditionally, CDAF kind of sucks. One problem is that you cannot tell from blur alone whether focus is too near or too far, so half the time the lens starts hunting in the wrong direction. Even when it starts well, it always overshoots and then hunts back and forth across the ideal spot. A second kind, phase-detect autofocus (PDAF), traditionally works faster, but evolution never gave us a phase prism, so that is not an option.
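If you want to see why that hunting behavior falls out of the design, here's a toy sketch of CDAF as one-dimensional hill climbing. The `sharpness` function is a made-up stand-in for a real contrast metric, and all the numbers are invented for illustration:

```python
def sharpness(pos, true_focus=40.0):
    # Toy contrast metric: peaks at the true focus position.
    return 1.0 / (1.0 + (pos - true_focus) ** 2)

def cdaf(pos, step=8.0, min_step=0.5):
    """Hunt: keep moving while contrast improves; reverse and halve
    the step when it drops (the overshoot described above)."""
    best = sharpness(pos)
    direction = 1.0  # CDAF can't know the right direction, so it guesses
    while step >= min_step:
        nxt = pos + direction * step
        s = sharpness(nxt)
        if s > best:
            pos, best = nxt, s      # still climbing
        else:
            direction = -direction  # overshot: reverse...
            step /= 2               # ...and search more finely
    return pos

print(round(cdaf(0.0), 1))  # converges near 40.0
```

Note the two weaknesses from the text baked right in: the initial direction is a coin flip, and the lens only learns it passed the peak by going past it.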
A few months ago Olympus announced a new version of my camera that made a few small things better plus one big thing: they claimed the fastest autofocus in the world. Chat boards predictably went a little bit nuts, since you cannot add a phase prism to this kind of camera. That means they made a CDAF faster than PDAF, and that's like neutrinos beating photons: weird and a little disturbing. Though, in this case, not wrong.
Then a rumors board found an obscure Olympus patent that pointed to an answer. Maybe they seeded some infrared-sensitive pixels into the sensor and used the difference between the normal-light image and the IR image to tell whether focus is too near or too far, and by how much? It was a great idea, but wrong. Olympus just beefed up their processor and put faster motors in the lenses.
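The patent idea is easy to sketch. If the IR-sensitive pixels come to focus at a slightly different plane than the visible ones (a longitudinal chromatic focus shift), then comparing the blur of the two images gives away the sign of the focus error in a single reading, with no hunting. This is my own toy model of the idea, not anything from the patent, and every number is made up:

```python
IR_SHIFT = 2.0  # assumed: IR best-focus plane sits 2 units past the visible one

def blur_radius(defocus):
    # Toy optics: blur grows with the absolute focus error.
    return abs(defocus)

def focus_error_sign(defocus):
    """Sign of the visible-light focus error, read off in one shot."""
    vis = blur_radius(defocus)
    ir = blur_radius(defocus - IR_SHIFT)
    # If the IR image is sharper than the visible one, the error lies
    # toward the IR plane; if blurrier, it lies the other way.
    return 1 if ir < vis else -1

print(focus_error_sign(5.0), focus_error_sign(-5.0))
```

In a real system the absolute blur of either channel would also give a rough magnitude, which is the "and by how much" part of the rumor.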
The eye, on the other hand, does work like that mythical Olympus sensor. We can tell whether blur is too far or too close.
Wilson Geisler and Johannes Burge, psychologists at the Center for Perceptual Systems at the University of Texas, Austin […] attempted to mimic how the human visual system might be processing these images by adding a set of filters to their model designed to detect these features. When they blurred the images by systematically changing the focus error in the computer simulation and tested the response of the filters, the researchers found that they could predict the exact amount of focus error by the pattern of response they observed in the feature detectors. The researchers say this provides a potential explanation for how the brains of humans and animals can quickly and accurately determine focus error without guessing and checking. Their research appears online this week in the Proceedings of the National Academy of Sciences.
The key bit is even more interesting. Geisler and Burge found that lens defects change the blur in small ways that make it easier for the brain to figure out which way to focus, so it's better to have a slightly off lens than one that makes a flawless image. People who have corrective eye surgery, whose eyes are (one hopes) a lot more perfect than they were before, thus spend a while afterward with serious focusing problems. I don't know whether defects need to be significant to work best, but if so, then the blazing-fast lenses of the future might have a slight circus-mirror effect to them.
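Here's a crude sketch of the principle behind Geisler and Burge's result. It is not their actual filter model: I assume an asymmetric blur kernel (standing in for lens aberrations) and, unrealistically, a known sharp scene, just to show that when near and far defocus blur differently, a one-shot template comparison recovers the signed focus error without any hunting:

```python
import numpy as np

rng = np.random.default_rng(0)
scene = rng.standard_normal(256)  # stand-in for a textured scene

def blur(signal, defocus):
    # Toy aberrated optics: the kernel is lopsided in a way that
    # depends on the sign of the defocus, which is what carries
    # the "which way" information.
    kernel = np.ones(1 + abs(defocus))
    if defocus > 0:
        kernel = np.concatenate([kernel, [0.5]])  # assumed asymmetry
    elif defocus < 0:
        kernel = np.concatenate([[0.5], kernel])
    kernel /= kernel.sum()
    return np.convolve(signal, kernel, mode="same")

defocus_levels = range(-5, 6)
templates = {d: blur(scene, d) for d in defocus_levels}

def estimate_focus_error(image):
    # Pick the defocus level whose template best matches the image:
    # signed error in one comparison, no guess-and-check.
    return min(defocus_levels,
               key=lambda d: np.sum((templates[d] - image) ** 2))

print(estimate_focus_error(blur(scene, -3)))
```

With a perfectly symmetric kernel the +3 and -3 templates would be mirror images and the sign would be ambiguous, which is exactly why the imperfect lens helps.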