DIY Computational Photography
December 21, 2009 9:01 AM Subscribe
Very cool, although 'do-it-yourself' may not be true for all values of 'yourself'. I know I haven't got a dozen cameras lying around.
It'll be interesting to see what comes out of this. Right now the horizontal ghost artifacts are distracting, but if they can simulate bokeh in a visually pleasing way, there could be some interesting shots made with extremely shallow depth of field. Like the tilt-shift look that's been popular, but more controlled.
posted by echo target at 9:20 AM on December 21, 2009
It'll be interesting to see what comes out of this. Right now the horizontal ghost artifacts are distracting, but if they can simulate bokeh in a visually pleasing way, there could be some interesting shots made with extremely shallow depth of field. Like the tilt-shift look that's been popular, but more controlled.
posted by echo target at 9:20 AM on December 21, 2009
Seems like a whole lot of science for some fairly uninteresting results. YMMV, I guess.
posted by Thorzdad at 9:22 AM on December 21, 2009
posted by Thorzdad at 9:22 AM on December 21, 2009
Well... you can do it with just one camera, free style, without having to measure everything in advance... if you don't mind some tradeoffs.
Here are some examples done with a single stock Nikon D40 and some time...
One of my many flickr albums, this one is various scenes from Chicago processed into virtual focus.
Especially relevant to this article are the foreground and background image derived from the same set of input files.
I've done a lot of playing around with this, and you can too!.
You need Hugin, and some time. Here is a summary I wrote up some time ago of my experiments.
posted by MikeWarot at 9:32 AM on December 21, 2009 [5 favorites]
Here are some examples done with a single stock Nikon D40 and some time...
One of my many flickr albums, this one is various scenes from Chicago processed into virtual focus.
Especially relevant to this article are the foreground and background image derived from the same set of input files.
I've done a lot of playing around with this, and you can too!.
You need Hugin, and some time. Here is a summary I wrote up some time ago of my experiments.
posted by MikeWarot at 9:32 AM on December 21, 2009 [5 favorites]
I think the next step of cameras does involve computation, but those 'pictures' would serve a very specific purpose.
For example, a big thing in amateur astronomy is making a 'picture' out of multiple frames of a video to correct for atmospheric interference. It is simple, and just uses photoshop layers, and you end up with a very focused image of Jupiter where a normal camera would fail.
'Capturing a light field' sounds amazing - there must be something out there that could use refocusing after the fact. Then again, you could just use a faster camera motor drive and adjustable focus to achieve almost the same results. Then again, you could just use video. But there must be some application somewhere...
posted by infinitefloatingbrains at 9:32 AM on December 21, 2009
For example, a big thing in amateur astronomy is making a 'picture' out of multiple frames of a video to correct for atmospheric interference. It is simple, and just uses photoshop layers, and you end up with a very focused image of Jupiter where a normal camera would fail.
'Capturing a light field' sounds amazing - there must be something out there that could use refocusing after the fact. Then again, you could just use a faster camera motor drive and adjustable focus to achieve almost the same results. Then again, you could just use video. But there must be some application somewhere...
posted by infinitefloatingbrains at 9:32 AM on December 21, 2009
Hey, I found this on Make this weekend. I followed some of the light field links and quickly got out of my depth. I'm looking forward to some translation from the MeFi brainiacs on what, other than computational focusing, this will be good for.
posted by DU at 9:34 AM on December 21, 2009
posted by DU at 9:34 AM on December 21, 2009
The future of photography sure is blurry.
posted by CitrusFreak12 at 9:39 AM on December 21, 2009 [4 favorites]
posted by CitrusFreak12 at 9:39 AM on December 21, 2009 [4 favorites]
Using an animated gif for the examples is unfortunate. The limited color palette makes it very hard to tell what's going on. Also, refocusing after the fact is interesting. Moving the camera after the fact is not so much; after all, you used multiple cameras.
posted by jedicus at 10:04 AM on December 21, 2009
posted by jedicus at 10:04 AM on December 21, 2009
that looks wonderful.
posted by sgt.serenity at 10:14 AM on December 21, 2009
posted by sgt.serenity at 10:14 AM on December 21, 2009
Mike, your shots are really interesting. I love the "Enter the Matrix" series of portraits--is that the same technique you describe in your blog post? Very cool.
That said, I'm still scratching my head about the FPP.
posted by Admiral Haddock at 10:16 AM on December 21, 2009
That said, I'm still scratching my head about the FPP.
posted by Admiral Haddock at 10:16 AM on December 21, 2009
HELL YES! Focus in post!
That is a great DIY implementation of some pretty cutting-edge complex research.
posted by GuyZero at 10:18 AM on December 21, 2009
That is a great DIY implementation of some pretty cutting-edge complex research.
posted by GuyZero at 10:18 AM on December 21, 2009
Haddock,
The Enter the Matrix set of photos all involve both the camera and the subject moving. This leads to some interesting background patterns.
posted by MikeWarot at 10:27 AM on December 21, 2009
The Enter the Matrix set of photos all involve both the camera and the subject moving. This leads to some interesting background patterns.
posted by MikeWarot at 10:27 AM on December 21, 2009
Using an animated gif for the examples is unfortunate. The limited color palette makes it very hard to tell what's going on.
Yep, sorry about that. I tried to get a flash viewer going, but it was rough. If you want to see them in full resolution, check out our refocusing software, which comes with example images.
I dont understand what this is about.
Sorry, I know it's not the clearest. Computational Photography is a new field of research that is developing extremely powerful cameras. These cameras allow the Depth of Field, the object in focus, and the position of the camera to be modified after the picture is taken. None of those things are possible with a traditional camera. Our camera is only one time, far from the only one. I am aware that it's big and klutzy.
Our camera -- a light field array, is the easiest kind to built. There are many other designs out there. One of the groups working on it right now is MIT's Camera Culture Group, which has some other cameras posted. We modeled our camera after the Stanford Array.
Very cool, although 'do-it-yourself' may not be true for all values of 'yourself'. I know I haven't got a dozen cameras lying around.
Our second Instructable shows how to play with one camera, or you can use MikeWarot's example linked above.
Refocusing is good for a lot of things -- for example, if you miss focus in the original picture. Another thing to notice is that the DOF is extremely shallow, so shallow that it is possible to "focus through" things.
Here are some examples of images from the software, they show the effect a lot better than the GIFs.
Skater Near
Skater Far
Garden Near
Garden Far
Tree Near
Tree Far
I'm really glad to see MikeWarot here, his posts early this year inspired me to create the array and to get Matti working on the software (mostly because I got so frustrated trying to abuse Hugin into doing the job, and I wanted to shoot moving subjects like my brother, the skater).
posted by fake at 11:00 AM on December 21, 2009
Yep, sorry about that. I tried to get a flash viewer going, but it was rough. If you want to see them in full resolution, check out our refocusing software, which comes with example images.
I dont understand what this is about.
Sorry, I know it's not the clearest. Computational Photography is a new field of research that is developing extremely powerful cameras. These cameras allow the Depth of Field, the object in focus, and the position of the camera to be modified after the picture is taken. None of those things are possible with a traditional camera. Our camera is only one time, far from the only one. I am aware that it's big and klutzy.
Our camera -- a light field array, is the easiest kind to built. There are many other designs out there. One of the groups working on it right now is MIT's Camera Culture Group, which has some other cameras posted. We modeled our camera after the Stanford Array.
Very cool, although 'do-it-yourself' may not be true for all values of 'yourself'. I know I haven't got a dozen cameras lying around.
Our second Instructable shows how to play with one camera, or you can use MikeWarot's example linked above.
Refocusing is good for a lot of things -- for example, if you miss focus in the original picture. Another thing to notice is that the DOF is extremely shallow, so shallow that it is possible to "focus through" things.
Here are some examples of images from the software, they show the effect a lot better than the GIFs.
Skater Near
Skater Far
Garden Near
Garden Far
Tree Near
Tree Far
I'm really glad to see MikeWarot here, his posts early this year inspired me to create the array and to get Matti working on the software (mostly because I got so frustrated trying to abuse Hugin into doing the job, and I wanted to shoot moving subjects like my brother, the skater).
posted by fake at 11:00 AM on December 21, 2009
Our camera is only one KIND, far from the only one. blasted typos.
posted by fake at 11:01 AM on December 21, 2009
posted by fake at 11:01 AM on December 21, 2009
but if they can simulate bokeh in a visually pleasing way
Check out MikeWarot's stuff, or Todor Georgiev's incredible images.
posted by fake at 11:05 AM on December 21, 2009
Check out MikeWarot's stuff, or Todor Georgiev's incredible images.
posted by fake at 11:05 AM on December 21, 2009
Moving the camera after the fact is not so much; after all, you used multiple cameras.
Except it sounds to me like you could interpolate the apparent camera position between two actual ones. So with 12 photographs spaced over a meter (say), I could simulate a moving video image sweeping over that same meter where the subject is motionless. I.e. bullet time.
posted by DU at 11:10 AM on December 21, 2009
Except it sounds to me like you could interpolate the apparent camera position between two actual ones. So with 12 photographs spaced over a meter (say), I could simulate a moving video image sweeping over that same meter where the subject is motionless. I.e. bullet time.
posted by DU at 11:10 AM on December 21, 2009
DU, exactly. And you can have DOF 1mm thin across it.
posted by fake at 11:12 AM on December 21, 2009
posted by fake at 11:12 AM on December 21, 2009
This is pretty cool, but I guess I don't understand the significance of what this is doing.
"These cameras allow the Depth of Field, the object in focus, and the position of the camera to be modified after the picture is taken. None of those things are possible with a traditional camera."
All of them except changing DOF are possible with traditional a motion picture/video cameras and digital processing. More to the point, once we understand that in the animated gifs here there is no single "picture" but a group that are processed into a composite, it makes more sense.
If the object isn't moving (like a tree or a building) then a sequence of images taken over time can be compressed to a single time and thought of as a group of simultaneously taken images. So a sequence of frames in a video taken of a still scene as the focus, aperture, and position of camera are changed (together or separately) are equivalent to a group of frames taken simultaneous by a number of still cameras.
Furthermore, the DOF is only an artifact of aperture. But as image sensors become more sensitive, it becomes possible to shoot with smaller apertures giving a larger depth of field, and in many cases the DOF will be large enough that everything in the shot will be in focus. Then post-processing can simulate the blurring typically found with shallower DOFs. Right?
posted by Pastabagel at 11:55 AM on December 21, 2009
"These cameras allow the Depth of Field, the object in focus, and the position of the camera to be modified after the picture is taken. None of those things are possible with a traditional camera."
All of them except changing DOF are possible with traditional a motion picture/video cameras and digital processing. More to the point, once we understand that in the animated gifs here there is no single "picture" but a group that are processed into a composite, it makes more sense.
If the object isn't moving (like a tree or a building) then a sequence of images taken over time can be compressed to a single time and thought of as a group of simultaneously taken images. So a sequence of frames in a video taken of a still scene as the focus, aperture, and position of camera are changed (together or separately) are equivalent to a group of frames taken simultaneous by a number of still cameras.
Furthermore, the DOF is only an artifact of aperture. But as image sensors become more sensitive, it becomes possible to shoot with smaller apertures giving a larger depth of field, and in many cases the DOF will be large enough that everything in the shot will be in focus. Then post-processing can simulate the blurring typically found with shallower DOFs. Right?
posted by Pastabagel at 11:55 AM on December 21, 2009
Pastabagel, other people simulate an array inside a medium format camera and get their simultaneous multiple images that way. Twelve cameras or twelve thousand micro lenses, the principle is the same. My way is cheap, the medium format way costs about 15kilobucks.
I think the ability to take pictures of moving objects, like the skateboarder or whatever, is pretty important. And no, properly simulated out-of-focus regions still require at least a depth map, so why not capture depth information to begin with?
But you're right a sequence of images can be thought of that way, and that's how MikeWarot does things, and that's how I do things in the second Instructable I linked. It's just a limited way of doing things, that's all.
The big innovation of the whole field is moving the optics of the camera into computation by coding the scene somehow. Check out the Camera Culture link above -- they can unblur blurred images, compute arbitrary bokeh, and more. In our case we code the scene by the regular spacing of our cameras, which lets us do all the aforementioned stuff. Check out the "coded aperture imaging" to see another way to code the scene. It really does go way beyond what a normal camera can do.
posted by fake at 12:19 PM on December 21, 2009
I think the ability to take pictures of moving objects, like the skateboarder or whatever, is pretty important. And no, properly simulated out-of-focus regions still require at least a depth map, so why not capture depth information to begin with?
But you're right a sequence of images can be thought of that way, and that's how MikeWarot does things, and that's how I do things in the second Instructable I linked. It's just a limited way of doing things, that's all.
The big innovation of the whole field is moving the optics of the camera into computation by coding the scene somehow. Check out the Camera Culture link above -- they can unblur blurred images, compute arbitrary bokeh, and more. In our case we code the scene by the regular spacing of our cameras, which lets us do all the aforementioned stuff. Check out the "coded aperture imaging" to see another way to code the scene. It really does go way beyond what a normal camera can do.
posted by fake at 12:19 PM on December 21, 2009
Ok, I get it. Your rig is trying to do on a gross scale what these masks are doing on a fine scale, which is to record information about the angle (and direction) of the incident light. This information can then be used to select a subset of these light rays from among the total set available to construct an image (i.e. a different focus, blur correction, etc.)
So how small can these apertures be made? I assume smaller = better because it means more angles known more precisely?
This is actually very cool.
posted by Pastabagel at 12:35 PM on December 21, 2009
So how small can these apertures be made? I assume smaller = better because it means more angles known more precisely?
This is actually very cool.
posted by Pastabagel at 12:35 PM on December 21, 2009
WOW!
There is a lot of cool work being done out there... and I'm very impressed, and quite happy to not be alone.
This is my first non-lurking day here at Metafilter... I'm learning tons of things I didn't know about it.
I've always figured that eventually I'd find a way to take a random set of exposures (as I've done, MANY times) and get the coordinates figured out automatically, to then put into a refocusing program like Fake has done. I hope someone else has done part of the work. 8)
The experimenter is alive and well... glad to see it.
posted by MikeWarot at 12:38 PM on December 21, 2009
There is a lot of cool work being done out there... and I'm very impressed, and quite happy to not be alone.
This is my first non-lurking day here at Metafilter... I'm learning tons of things I didn't know about it.
I've always figured that eventually I'd find a way to take a random set of exposures (as I've done, MANY times) and get the coordinates figured out automatically, to then put into a refocusing program like Fake has done. I hope someone else has done part of the work. 8)
The experimenter is alive and well... glad to see it.
posted by MikeWarot at 12:38 PM on December 21, 2009
Yes- precisely.
So how small can these apertures be made? I assume smaller = better because it means more angles known more precisely?
You need at least a couple pixels per aperture for it to be useful, so the limiting factors are actually sensor resolution and size. The guys working on this with medium and large format sensors would love nothing more than ridiculously small pixels at insane density -- imagine these 14 megapixel, fingernail size sensors in some compact cams now -- but tiled to produce 100 or more megapixel sensors with a large area (large being perhaps a few inches on a side).
If you check this link on futurepicture, you'll see that a crazy Russian (P.P. Sokolov) actually did this in 1911. He took a copper plate and drilled 1200 holes in it, stuck a piece of film on the back, exposed it to a lamp, and then lit the exposure from behind, creating a 3-D image of the lamp that you could view without a set of glasses. In other words, he recreated a crude lightfield.
posted by fake at 12:44 PM on December 21, 2009
So how small can these apertures be made? I assume smaller = better because it means more angles known more precisely?
You need at least a couple pixels per aperture for it to be useful, so the limiting factors are actually sensor resolution and size. The guys working on this with medium and large format sensors would love nothing more than ridiculously small pixels at insane density -- imagine these 14 megapixel, fingernail size sensors in some compact cams now -- but tiled to produce 100 or more megapixel sensors with a large area (large being perhaps a few inches on a side).
If you check this link on futurepicture, you'll see that a crazy Russian (P.P. Sokolov) actually did this in 1911. He took a copper plate and drilled 1200 holes in it, stuck a piece of film on the back, exposed it to a lamp, and then lit the exposure from behind, creating a 3-D image of the lamp that you could view without a set of glasses. In other words, he recreated a crude lightfield.
posted by fake at 12:44 PM on December 21, 2009
Mike, we're trying to do that now. The next version of LFtextures, being authored as we speak, is going to have a region selector that lets you pick an object of interest in one image. We're using SIFT to find corresponding features in the other images and then we align to that region automatically.
posted by fake at 12:45 PM on December 21, 2009
posted by fake at 12:45 PM on December 21, 2009
I'm reading about Sokolov now. So this is awesome. I have to say, what threw me at first was the phrase "4D" which in my world is space + time.
But you could throw this mask onto a video camera, and add time dimension information as well, right?
And while we are at it, let's throw in polarization too. A very high resolution LCD in conjunction with the mask would allow you to potentially collect polarization information as well as angular information, allowing you to selectively tune glare and reflections.
posted by Pastabagel at 12:59 PM on December 21, 2009
But you could throw this mask onto a video camera, and add time dimension information as well, right?
And while we are at it, let's throw in polarization too. A very high resolution LCD in conjunction with the mask would allow you to potentially collect polarization information as well as angular information, allowing you to selectively tune glare and reflections.
posted by Pastabagel at 12:59 PM on December 21, 2009
Yes, absolutely. Check out the "flexible multimodal camera" on this page:
http://graphics.stanford.edu/~levoy/publications.html#light-fields
One of the things I will be doing over the next few months is reviewing these papers on the futurepicture blog, so it's a bit easier to figure out what people are up to. The field is so hot and fast right now that it's quite difficult to know what's going on.
posted by fake at 1:06 PM on December 21, 2009
http://graphics.stanford.edu/~levoy/publications.html#light-fields
One of the things I will be doing over the next few months is reviewing these papers on the futurepicture blog, so it's a bit easier to figure out what people are up to. The field is so hot and fast right now that it's quite difficult to know what's going on.
posted by fake at 1:06 PM on December 21, 2009
If you check this link on futurepicture, you'll see that a crazy Russian (P.P. Sokolov) actually did this in 1911. He took a copper plate and drilled 1200 holes in it, stuck a piece of film on the back, exposed it to a lamp, and then lit the exposure from behind, creating a 3-D image of the lamp that you could view without a set of glasses. In other words, he recreated a crude lightfield.
I didn't catch this from the post or the other comments, but isn't this basically creating a transmission hologram with regular light? I remember seeing one in school, where each point in the picture held the entire scene. If you cut it up, you had two pictures of the same scene, just with fewer angles you could look at.
I agree, this is awesome.
posted by ArgentCorvid at 1:19 PM on December 21, 2009
I didn't catch this from the post or the other comments, but isn't this basically creating a transmission hologram with regular light? I remember seeing one in school, where each point in the picture held the entire scene. If you cut it up, you had two pictures of the same scene, just with fewer angles you could look at.
I agree, this is awesome.
posted by ArgentCorvid at 1:19 PM on December 21, 2009
Oh man, I think about the implications of what this tech could do one day and I get all tingly feeling on the inside.
fake : And you can have DOF 1mm thin across it.
I accomplished this with the hacky technique of flipping an 18-55mm lens around and mounting it backwards on my DSLR with the aperture fully open. Pros: it gives you an unbelievable macro, Cons: shooting 1mm depth of field means that the ridges on the edge of the dime filling the frame are in focus, but the date is blurry. To say it's difficult to shoot with completely undervalues the word "difficult".
Adding in a bit of matting with a hole in the center to act as an aperture makes the whole thing much better.
posted by quin at 1:38 PM on December 21, 2009
fake : And you can have DOF 1mm thin across it.
I accomplished this with the hacky technique of flipping an 18-55mm lens around and mounting it backwards on my DSLR with the aperture fully open. Pros: it gives you an unbelievable macro, Cons: shooting 1mm depth of field means that the ridges on the edge of the dime filling the frame are in focus, but the date is blurry. To say it's difficult to shoot with completely undervalues the word "difficult".
Adding in a bit of matting with a hole in the center to act as an aperture makes the whole thing much better.
posted by quin at 1:38 PM on December 21, 2009
I can using this for a pretty nifty narrative device. Capture a scene with lots of action happening at once. Now start your story, and as you describe it, virtually pan, zoom and focus on each subject. Explain how the monkey in the upper left foreground threw the peach at the dog in the lower left background who fell and tipped over the table in the middle, spewing the tea-set in all directions which finally allows the hidden little mouse which annoyed the monkey in the first place to catch a crumpet.
And for those complaining that the same effects can be accomplished with overlapping video frames, consider that you could just as easily build a synthetic aperture VIDEO camera. Imagine my tale above. Play it once through focussed on the monkey for his perspective, then rewind and play it from the man's perspective, then focus on the crumpet and track it all the way down until the focus reaches the mouse's grubby paws. Fun!
posted by Popular Ethics at 5:44 PM on December 21, 2009
And for those complaining that the same effects can be accomplished with overlapping video frames, consider that you could just as easily build a synthetic aperture VIDEO camera. Imagine my tale above. Play it once through focussed on the monkey for his perspective, then rewind and play it from the man's perspective, then focus on the crumpet and track it all the way down until the focus reaches the mouse's grubby paws. Fun!
posted by Popular Ethics at 5:44 PM on December 21, 2009
Er man = dog. You get the picture.
posted by Popular Ethics at 5:45 PM on December 21, 2009
posted by Popular Ethics at 5:45 PM on December 21, 2009
« Older Visual Business Cliches | Chowned Newer »
This thread has been archived and is closed to new comments
posted by sciurus at 9:19 AM on December 21, 2009 [1 favorite]