Stanford researchers developing 3-D camera with 12,616 lenses

The camera you own has one main lens and produces a flat, two-dimensional photograph, whether you hold it in your hand or view it on your computer screen. On the other hand, a camera with two lenses (or two cameras placed apart from each other) can take more interesting 3-D photos.

But what if your digital camera saw the world through thousands of tiny lenses, each a miniature camera unto itself? You'd get a 2-D photo, but you'd also get something potentially more valuable: an electronic depth map containing the distance from the camera to every object in the picture, a kind of super 3-D.

Stanford electronics researchers, led by electrical engineering Professor Abbas El Gamal, are developing such a camera, built around their multi-aperture image sensor. They've shrunk the pixels on the sensor to 0.7 microns, several times smaller than the pixels in standard digital cameras. They've grouped the pixels in arrays of 256 pixels each, and they're preparing to place a tiny lens atop each array.

"It's like having a lot of cameras on a single chip," said Keith Fife, a graduate student working with El Gamal and another electrical engineering professor, H.-S. Philip Wong. In fact, if their prototype 3-megapixel chip had all its micro lenses in place, they would add up to 12,616 cameras.
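That count follows from simple arithmetic on the chip's geometry. A quick check in Python (the 16-by-16 block layout is a plausible assumption inferred from the 256-pixel figure, not stated in the article):

    # Sanity check on the figures reported above:
    # 12,616 micro lenses, each covering an array of 256 pixels.
    arrays = 12_616
    pixels_per_array = 256           # e.g., a 16 x 16 block of 0.7-micron pixels
    total_pixels = arrays * pixels_per_array
    print(total_pixels)              # 3,229,696 -- about 3.2 megapixels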

Point such a camera at someone's face, and it would, in addition to taking a photo, precisely record the distances to the subject's eyes, nose, ears, chin, etc. One obvious potential use of the technology: facial recognition for security purposes.

But there are a number of other possibilities for a depth-information camera: biological imaging, 3-D printing, creation of 3-D objects or people to inhabit virtual worlds, or 3-D modeling of buildings.

The technology is expected to produce a photo in which almost everything, near or far, is in focus. But it would be possible to selectively defocus parts of the photo after the fact, using editing software on a computer.
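In software terms, that after-the-fact refocusing amounts to blurring each pixel in proportion to how far its recorded depth lies from a chosen focal plane. Here is a minimal sketch of the idea in Python, using a crude quantized layered blur; the article does not describe the actual editing algorithm, so the approach and parameters below are assumptions:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def refocus(image, depth_map, focus_depth, strength=2.0, levels=4):
        """Approximate post-capture defocus on a grayscale image:
        split the scene into bands by distance from the focal plane
        and blur each band progressively more. A rough illustration,
        not the researchers' method."""
        out = np.zeros_like(image, dtype=float)
        error = np.abs(depth_map - focus_depth)
        bands = np.minimum((error / (error.max() + 1e-9) * levels).astype(int),
                           levels - 1)
        for b in range(levels):
            if b == 0:
                blurred = image.astype(float)   # in-focus band: no blur
            else:
                blurred = gaussian_filter(image.astype(float), sigma=b * strength)
            out[bands == b] = blurred[bands == b]
        return out.astype(image.dtype)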

Knowing the exact distance to an object might give robots better spatial vision than humans and allow them to perform delicate tasks now beyond their abilities. "People are coming up with many things they might do with this," Fife said. The three researchers published a paper on their work in the February edition of the IEEE ISSCC Digest of Technical Papers.

Their multi-aperture camera would look and feel like an ordinary camera, or even a smaller cell phone camera. The cell phone aspect is important, Fife said, given that the majority of the cameras in the world are now on phones.

Here's how it works:

The main lens (also known as the objective lens) of an ordinary digital camera focuses its image directly on the camera's image sensor, which records the photo. The objective lens of the multi-aperture camera, on the other hand, focuses its image about 40 microns (a micron is a millionth of a meter) above the image sensor arrays. As a result, any point in the photo is captured by at least four of the chip's mini-cameras, producing overlapping views, each from a slightly different perspective, just as the left eye of a human sees things differently than the right eye.
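The depth recovery rests on the same principle as stereo triangulation: a nearby object shifts more between neighboring sub-images than a distant one does. A minimal sketch, assuming a simple pinhole model; the focal length and the 11.2-micron baseline (16 pixels at 0.7 microns) are illustrative values derived from the article's numbers, not published parameters:

    import numpy as np

    def depth_from_disparity(disparity_px, focal_length_px, baseline_um):
        """Triangulate depth from the pixel shift (disparity) of a
        scene point between two neighboring mini-cameras. Depth is
        inversely proportional to disparity."""
        disparity_px = np.asarray(disparity_px, dtype=float)
        with np.errstate(divide="ignore"):
            return np.where(disparity_px > 0,
                            focal_length_px * baseline_um / disparity_px,
                            np.inf)

    # Example: a point shifted 2.5 pixels between adjacent sub-images.
    print(depth_from_disparity(2.5, focal_length_px=80, baseline_um=11.2))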

The outcome is a detailed depth map, invisible in the photograph itself but electronically stored along with it. It's a virtual model of the scene, ready for manipulation by computation. "You can choose to do things with that image that you weren't able to do with the regular 2-D image," Fife said. "You can say, 'I want to see only the objects at this distance,' and suddenly they'll appear for you. And you can wipe away everything else."
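Fife's example of isolating objects at one distance is, computationally, just a mask over the stored depth map. A small illustration (the data layout here is hypothetical; only the idea of a per-pixel depth map comes from the researchers):

    import numpy as np

    def isolate_depth_slice(image, depth_map, near, far, fill=0):
        """Keep only pixels whose recorded depth falls in [near, far];
        'wipe away' everything else to a fill value."""
        mask = (depth_map >= near) & (depth_map <= far)
        return np.where(mask[..., None], image, fill)

    # Toy example: a 2 x 2 RGB image with a matching depth map (in meters).
    image = np.random.randint(0, 255, (2, 2, 3), dtype=np.uint8)
    depth = np.array([[1.0, 4.0],
                      [1.2, 9.0]])
    print(isolate_depth_slice(image, depth, near=0.5, far=2.0))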

Or the sensor could be deployed naked, with no objective lens at all. If the sensor were placed very close to an object, each micro lens would take its own photo of the scene directly. It has been suggested that a very small probe could be placed against the brain of a laboratory mouse, for example, to detect the location of neural activity.

Other researchers are headed toward similar depth-map goals from different approaches. Some use intelligent software to inspect ordinary 2-D photos for the edges, shadows or focus differences from which the distances of objects can be inferred. Others have tried cameras with multiple lenses, or prisms mounted in front of a single camera lens. One approach employs lasers; another attempts to stitch together photos taken from different angles, while yet another involves video shot from a moving camera.

But El Gamal, Fife and Wong believe their multi-aperture sensor has some key advantages. It's small and doesn't require lasers, bulky camera gear, multiple photos or complex calibration. And it has excellent color quality. Each of the 256 pixels in a given array detects the same color. In an ordinary digital camera, red pixels may be arranged next to green pixels, leading to undesirable crosstalk between the pixels that degrades color.

The sensor also can take advantage of smaller pixels in a way that an ordinary digital camera cannot, El Gamal said, because camera lenses are nearing the optical limit of the smallest spot they can resolve. Using a pixel smaller than that spot will not produce a better photo. But with the multi-aperture sensor, smaller pixels produce even more depth information, he said.
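The optical limit El Gamal refers to is diffraction: a lens cannot focus light to a spot smaller than its Airy disk, whose diameter is roughly 2.44 times the wavelength times the f-number. A quick illustration in Python (the f/2.8 aperture is an assumed example; the article gives no lens specifics):

    # Diffraction-limited (Airy) spot diameter: d = 2.44 * wavelength * f-number.
    wavelength_um = 0.55                 # green light
    f_number = 2.8                       # assumed, for illustration
    spot_um = 2.44 * wavelength_um * f_number
    print(f"{spot_um:.1f} microns")      # ~3.8 microns, vs. 0.7-micron pixels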

The technology also may aid the quest for the huge photos possible with a gigapixel camera, which would have 140 times as many pixels as today's typical 7-megapixel cameras. The first benefit of the Stanford technology is straightforward: Smaller pixels mean more pixels can be crowded onto the chip.

The second benefit involves chip architecture. "With a billion pixels on one chip, some of them are sure to go bad, leaving dead spots," El Gamal said. But the overlapping views from the multi-aperture sensor provide backups when pixels fail.
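Because every scene point is captured by at least four mini-cameras, a dead pixel in one sub-image can be patched with the value a neighboring sub-image recorded for the same point. A schematic sketch, with the sub-image correspondence simplified to a precomputed lookup table (an assumption; the actual recovery scheme is not described in the article):

    import numpy as np

    def repair_dead_pixels(subimages, dead_masks, correspondence):
        """Replace each dead pixel in sub-image i with the value an
        overlapping sub-image recorded for the same scene point.
        correspondence[i][r, c] gives a (neighbor_index, row, col)
        triple, assumed precomputed from the sensor geometry."""
        repaired = [img.copy() for img in subimages]
        for i, dead in enumerate(dead_masks):
            for r, c in zip(*np.nonzero(dead)):
                j, rr, cc = correspondence[i][r, c]
                repaired[i][r, c] = subimages[j][rr, cc]
        return repaired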

The researchers are now working out the manufacturing details of fabricating the micro-optics onto a camera chip.

The finished product may cost less than existing digital cameras, the researchers say, because the quality of a camera's main lens will no longer be of paramount importance. "We believe that you can reduce the complexity of the main lens by shifting the complexity to the semiconductor," Fife said.


Contact: Abbas El Gamal
Stanford University
