Shree Nayar with the BigShot camera project
As a kid, did you ever spend hours picking apart the technological objects lying around the house? Did you ever imagine it might be possible to build your own working camera? Perhaps, if the science and engineering bug had already bitten you, you tried to take apart your parents' old cameras, at the peril of ending up with a lot of pieces, no way to put them back together, and slightly disgruntled parents. Things have changed a bit since then. Gen-Z kids as young as eight years old, born of the tech revolution, now actually have the chance to build their own complex camera and learn a few basic engineering and physics concepts along the way. All thanks to Columbia University's Shree Nayar, who has spearheaded the BigShot camera project: cute, brightly coloured cameras that come in kits with instructions on how to assemble them.
But this is just one facet of the multi-talented T.C. Chang Chair Professor of Computer Science at Columbia University. Nayar also heads the Columbia Computer Vision Laboratory (CAVE), which is dedicated to the development of advanced computer vision systems. After completing a B.S. in Electrical Engineering at the Birla Institute of Technology in Ranchi, India, Nayar earned an M.S. in Electrical and Computer Engineering from North Carolina State University at Raleigh, followed by a PhD from the Robotics Institute at Carnegie Mellon University in 1990.
Siddhartha Chandra interviewed Professor Nayar about his passion for the rising field of computational photography, and about the technological action coming up in it.
1. What exactly is Computational Photography?
A computational camera is a device that does optical coding. It uses an unconventional lens to capture an image that is coded. When I say that it is coded, what I really mean is that it is not your usual perspective image; it is not the final image. And then we have a computational algorithm which knows the type of coding that has taken place, and so it is able to decode this image to produce the final image.
When you talk about computational photography, it allows one to do things to images, using computational cameras, that are more interesting than image-processing or Photoshop operations. It also tends, though not strictly, to be focused on photography. Computational cameras are the younger concept; computational imaging has been around much longer - you'll find computational imaging concepts in astronomy and in pretty much all medical imaging (look at an MRI or a CAT scan).
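To make the coding-and-decoding idea concrete, here is a minimal sketch in Python. It is an illustration under assumptions, not Nayar's actual optics: a known blur kernel stands in for the coded lens, Wiener deconvolution stands in for the decoding algorithm, and the kernel size and SNR constant are made up for the toy example.

```python
# Minimal sketch of optical coding + computational decoding.
# Assumption: the "code" is a known 3x3 box blur; real computational
# cameras use far more sophisticated optical codes.
import numpy as np

def capture_coded(scene, kernel):
    """Simulate optical coding: convolve the scene with a known kernel."""
    S = np.fft.fft2(scene)
    K = np.fft.fft2(kernel, s=scene.shape)
    return np.real(np.fft.ifft2(S * K))

def decode(coded, kernel, snr=100.0):
    """Wiener deconvolution: invert the known code, damping noisy frequencies."""
    C = np.fft.fft2(coded)
    K = np.fft.fft2(kernel, s=coded.shape)
    wiener = np.conj(K) / (np.abs(K) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(C * wiener))

# Toy example: a random "scene" coded by the blur, then decoded.
rng = np.random.default_rng(0)
scene = rng.random((64, 64))
kernel = np.zeros((64, 64))
kernel[:3, :3] = 1.0 / 9.0          # the known 3x3 code
coded = capture_coded(scene, kernel)  # not the final image
recovered = decode(coded, kernel)     # computation produces the final image
print("max reconstruction error:", np.abs(recovered - scene).max())
```

The point the sketch captures is that the raw capture is deliberately not the final image; because the code is known, computation can invert it.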
2. What can one do with Computational Photography?
A working example of a computational camera
To put it quite simply, there are two things. One is that you can create functionalities that you simply cannot get with a traditional camera: for instance, the idea of being able to navigate a complete 360-degree space, or of being able to focus an image after it has been captured, or of being able to explore brightness and colors that are beyond what a camera can capture in terms of dynamic range and color fidelity. These are all functionalities that would not be possible with a traditional camera - but I may be able to do them with a computational imaging system.
The second reason [to use a computational camera] is that, through the combination of optical coding and computational decoding, you can actually reduce the complexity of the imaging system. For instance, camera manufacturers spend a lot of money on sophisticated optics and lenses. What this says is that you may be able to use an inexpensive but carefully designed lens - one with just a single element, for instance - and then use computation to produce the final image.
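One of the functionalities mentioned above - capturing brightness beyond a single shot's dynamic range - has a classic computational recipe: merge a bracket of differently exposed shots. The sketch below is a simplified illustration that assumes a linear sensor that simply clips; real pipelines also calibrate and invert the camera's nonlinear response curve, and the thresholds and exposure times here are arbitrary.

```python
# Hedged sketch: high-dynamic-range merging of bracketed exposures,
# assuming a linear sensor that clips to [0, 1].
import numpy as np

def merge_exposures(images, exposure_times, low=0.02, high=0.98):
    """Estimate scene radiance from several clipped, linearly exposed shots."""
    radiance_sum = np.zeros_like(images[0], dtype=float)
    weight_sum = np.zeros_like(images[0], dtype=float)
    for img, t in zip(images, exposure_times):
        w = ((img > low) & (img < high)).astype(float)  # trust mid-tones only
        radiance_sum += w * img / t   # divide out exposure to get radiance
        weight_sum += w
    return radiance_sum / np.maximum(weight_sum, 1e-6)

# Toy scene spanning four orders of magnitude of radiance.
rng = np.random.default_rng(1)
radiance = 10.0 ** rng.uniform(-2, 2, size=(32, 32))
times = [0.01, 0.1, 1.0]                                   # exposure bracket
shots = [np.clip(radiance * t, 0.0, 1.0) for t in times]   # each shot clips
hdr = merge_exposures(shots, times)
print("recovered/true ratio (should be ~1):", np.median(hdr / radiance))
```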
3. What is your view of the evolution of computational photography?
I would say that we are at a stage right now where lots of areas are being explored... and it is not so much about one big turn; which functionalities and which complexity gains pay off in which domains is what will determine its evolution. That is what will decide which ideas survive and which ideas fade away. But remember, very often the ideas that fade away are ideas that resurface a few decades down the line.
4. Can you tell us how computational cameras and computational photography can help photography reach its next stage of evolution?
A computational photography setup
You may have noted the Lytro camera; there is another camera that is going to be on the market, from Pelican Imaging. What they are trying to do is provide the ability to change focus after capture, or to recover the complete 3D structure of the scene - which allows you not only to refocus the image or the video afterwards, but also to change your viewpoint. It is all about clicking first and deciding later how you want to change the content - in terms of perspective, in terms of zoom and in terms of depth of field.
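The refocus-after-capture idea behind these cameras can be sketched with the simplest light-field algorithm, shift-and-add: shift each sub-aperture view in proportion to its position in the array, then average. The 5x5 array, the integer-pixel disparity model and the parameter alpha below are assumptions for illustration, not either company's actual pipeline.

```python
# Hedged sketch: synthetic refocusing over an assumed 5x5 camera array.
import numpy as np

def refocus(views, offsets, alpha):
    """Shift-and-add refocusing: shift each view by alpha * its offset, then average."""
    acc = np.zeros_like(views[0], dtype=float)
    for img, (du, dv) in zip(views, offsets):
        acc += np.roll(img, (int(round(alpha * du)), int(round(alpha * dv))),
                       axis=(0, 1))
    return acc / len(views)

# Toy array: each view sees the scene displaced by its offset
# (i.e., the scene sits at a disparity of one pixel per unit of baseline).
rng = np.random.default_rng(2)
scene = rng.random((48, 48))
offsets = [(u, v) for u in range(-2, 3) for v in range(-2, 3)]
views = [np.roll(scene, (-u, -v), axis=(0, 1)) for (u, v) in offsets]

sharp = refocus(views, offsets, alpha=1.0)    # shifts undo the disparity
blurred = refocus(views, offsets, alpha=0.0)  # wrong depth -> defocus blur
print("in-focus error:", np.abs(sharp - scene).max())
```

Choosing a different alpha focuses a different depth: views shifted by the "wrong" amount no longer align, and the misalignment shows up as defocus blur.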
5. Can camera devices of tomorrow provide the blind with hope of seeing the world?
Yes. Take, for example, Google Glass. You have a camera that is sitting close to the eye, and computer vision technology is beginning to mature; so even without seeing, you have this eye. Computer vision systems can begin to give lots of guidance and cues to a person who is visually impaired. That is one part of it - using technology to complement or supplement your visual system. There is another line of work which has to do with image sensors and computational equipment embedded within the eye. When people are unable to see, it can be for numerous reasons - it could be that the retina doesn't work, it could be a problem with the optic nerve, it could be something deeper in the visual cortex. So there are many levels at which things may not work, and depending on what it is, different solutions may emerge in the years to come.
6. Can you tell us about research on optical devices that emulate visual systems superior to the human visual system?
Different organisms have eyes that surpass the human visual system, but in particular ways. For example, the scallop's eye has hundreds or thousands of little eyes distributed around its perimeter; each one is like a telescope - it actually has a mirror inside. The fly, of course, has an array of tiny cameras of a single pixel or a few pixels each - called ommatidia. And then you have certain organisms that can detect motion a lot better than the human eye can. Now, you have to understand that imaging always works within a certain budget. There are a certain number of photons that you can collect, and it is up to the imaging system to decide how to use its resources to measure those photons - whether it wants higher spatial (space) resolution, or temporal (time) resolution, or dynamic range - and so you can design all kinds of systems. So it is not necessary to develop cameras that exactly mimic each of these biological systems; it is more interesting to decide what you want to know about the world and then develop a system that measures that.
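The photon-budget trade-off can be made numerical. Photon arrivals follow Poisson statistics, so a pixel that collects N photons has a signal-to-noise ratio of roughly sqrt(N); binning 2x2 pixels quadruples N and doubles the SNR at the cost of halving spatial resolution. The short simulation below (my own illustration, not from the interview) checks this.

```python
# Illustration of the photon budget: Poisson shot noise means
# SNR ~ sqrt(photons per sample), so binning trades resolution for SNR.
import numpy as np

rng = np.random.default_rng(3)
mean_photons = 25                       # photons per pixel in the budget
frame = rng.poisson(mean_photons, size=(256, 256)).astype(float)

# SNR at full resolution: mean over standard deviation of photon counts.
snr_full = frame.mean() / frame.std()

# 2x2 binning: sum each block, quadrupling photons per sample.
binned = frame.reshape(128, 2, 128, 2).sum(axis=(1, 3))
snr_binned = binned.mean() / binned.std()

print(f"full res SNR ~ {snr_full:.1f} (sqrt(25) = 5)")
print(f"binned  SNR ~ {snr_binned:.1f} (sqrt(100) = 10)")
```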
7. Please tell us about your project, 'BigShot', the camera educational kit
BigShot cameras
I wouldn't say it's a research project; I think it's a social venture. The camera is a device with enormous social appeal and social integration, right? And because it has tremendous appeal, I began to think about how that appeal could be exploited and leveraged to turn the camera into a medium for education - for teaching various science and engineering concepts. Then, after you're done with that, maybe you give kids the experience of photography and story-telling and documentation. And finally it boils down to being able to share pictures, so that you get to know how people in different corners of the world do it. So I see it less as a research project and more as a social venture - that's how I am approaching it. It can also be a device that you use on a daily basis, to capture regular pictures, panoramic pictures and stereo pictures. So it really is, in my mind, a fairly broad experience.
*When does the camera go live?
Well, I'm certainly hoping that before the end of the year, 2013, we'll be able to launch it, at least in the U.S. What I'm also excited about is that we hope to use some of the royalties we earn from the cameras to run a program we call 'BigShots for Good'. The idea is to donate cameras to severely underserved communities. If it is a success, the more people buy it, the more we'll be able to give.
*Are there plans to launch outside the US?
We want to. We have the infrastructure and the necessary pieces in place to do it in the U.S., so we'll start with that. Meanwhile, we will try to find distributors and retailers in various parts of the world, including India of course, and then it will catch up.
You can read more about the BigShot camera on the project's website.
8. What would be your advice to students aspiring to build a career in vision and computational photography?
Your passion for something should always guide you. You have to be a realist, and you are always working within various daily constraints - but if you have the resources, if you have the flexibility to follow your passion, and you have done a bit of a check on whether the field you're passionate about requires skills you have an aptitude for, then I think it's almost like a calling in life. You just jump into it and do it, and by and large, things go well when you do that.
Siddhartha Chandra is an amateur photographer and a passionate musician. He studied engineering at Sardar Patel College of Engineering, Mumbai University, and is now on his way to pursue a Master's degree in Computer Science - focusing on vision sciences and camera technologies - at Columbia University in the U.S.