Knowing the exact distance to an object might give robots better spatial vision than humans and allow them to perform delicate tasks now beyond their abilities. "People are coming up with many things they might do with this," Fife said. The three researchers published a paper on their work in the February edition of the IEEE ISSCC Digest of Technical Papers.
Their multi-aperture camera would look and feel like an ordinary camera, or even a smaller camera for a cell phone. The cell phone aspect is important, Fife said, given that the majority of the world's cameras are now on phones.
Here's how it works:
The main lens (also known as the objective lens) of an ordinary digital camera focuses its image directly on the camera's image sensor, which records the photo. The objective lens of the multi-aperture camera, on the other hand, focuses its image about 40 microns (a micron is a millionth of a meter) above the image sensor arrays. As a result, any point in the photo is captured by at least four of the chip's mini-cameras, producing overlapping views, each from a slightly different perspective, just as the left eye of a human sees things differently from the right eye.
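To make the geometry concrete, here is a minimal sketch of how distance can be triangulated from the shift (disparity) between two overlapping mini-camera views. This is classic stereo triangulation, not the researchers' actual pipeline, and the baseline, focal length, and pixel-pitch values are illustrative assumptions.

```python
# Minimal depth-from-disparity sketch (illustrative; not the Stanford pipeline).
# Two neighboring mini-cameras a known baseline apart see the same point
# shifted by a measurable pixel disparity, as in classic stereo triangulation.

def depth_from_disparity(disparity_px: float,
                         baseline_m: float = 100e-6,    # assumed mini-camera spacing
                         focal_m: float = 40e-6,        # assumed micro-lens focal length
                         pixel_pitch_m: float = 0.7e-6  # assumed pixel size
                         ) -> float:
    """Triangulate distance Z = f * B / d for one matched point."""
    disparity_m = disparity_px * pixel_pitch_m
    if disparity_m <= 0:
        return float("inf")  # no measurable shift -> point effectively at infinity
    return focal_m * baseline_m / disparity_m

# Example: a 2-pixel shift between adjacent views
print(f"{depth_from_disparity(2.0):.4f} m")
```

Repeating this matching for every point in the overlapping views yields a per-pixel depth estimate, which is what makes the depth map described next possible.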
The outcome is a detailed depth map, invisible in the photograph itself but electronically stored along with it. It's a virtual model of the scene, ready for manipulation by computation. "You can choose to do things with that image that you weren't able to do with the regular 2-D image," Fife said. "You can say, 'I want to see only the objects at this distance,' and suddenly they'll appear for you. And you can wipe away everything else."
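As a rough illustration of the "objects at this distance" idea, the sketch below masks an image by a depth band stored alongside it. The arrays and the chosen distance range are synthetic stand-ins invented for the example.

```python
import numpy as np

# Illustrative depth slicing: keep only pixels whose stored depth falls in a
# chosen band, wiping everything else to black. The photo and depth map here
# are random stand-ins for a real capture.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(480, 640, 3), dtype=np.uint8)  # fake photo
depth = rng.uniform(0.5, 5.0, size=(480, 640))                    # fake depth map, meters

near, far = 1.0, 1.5                       # the "objects at this distance" band
mask = (depth >= near) & (depth <= far)    # True where depth is in range
sliced = np.where(mask[..., None], image, 0).astype(np.uint8)  # wipe the rest
print(f"kept {mask.mean():.1%} of pixels")
```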
Or the sensor could be deployed "naked," with no objective lens at all. Placed very close to an object, each micro-lens would take its own photo of the surface directly beneath it, with no objective lens needed. It has been suggested that a very small probe of this kind could be placed against the brain of a laboratory mouse, for example.
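One way to picture the naked-sensor mode is as a grid of tiny contact images that tile together into one picture. The sketch below lays out hypothetical per-lens patches into a mosaic; the grid and patch sizes are assumptions, and real readout would also have to handle overlap between neighboring lenses.

```python
import numpy as np

# Sketch of "naked" operation: with no objective lens, each micro-lens yields
# its own tiny sub-image of whatever sits against the chip. Here a hypothetical
# 8x8 grid of lenses, each covering a 16x16-pixel patch, is tiled into one
# contact-image mosaic.
lenses_y, lenses_x, patch = 8, 8, 16
tiles = np.random.default_rng(1).integers(
    0, 256, size=(lenses_y, lenses_x, patch, patch), dtype=np.uint8)

# Rearrange (ly, lx, py, px) -> (ly*py, lx*px) to form the full mosaic.
mosaic = tiles.transpose(0, 2, 1, 3).reshape(lenses_y * patch, lenses_x * patch)
print(mosaic.shape)  # (128, 128)
```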
Contact: Abbas El Gamal