Enhanced enhancement
October 11, 2011 10:15 AM
I was just watching this; unfortunately the video is so blurry I only get a general idea of what this is doing. Lots of oohs and aahs from the audience, though.
posted by carter at 10:18 AM on October 11, 2011 [5 favorites]
So you're saying the deblurring tool needs to be applied to the video talking about the deblurring tool?
posted by kmz at 10:21 AM on October 11, 2011 [7 favorites]
Just print the damn thing!
posted by Blazecock Pileon at 10:23 AM on October 11, 2011
You know your presentation is going well when someone indignantly rises and shouts "What!? That.. that's impossible!" Second only to "Guards! Seize him!!"
posted by theodolite at 10:24 AM on October 11, 2011 [19 favorites]
Why are there bookshelves and guys in recliners on stage? Are they attempting to simulate a "hey, look at this neat feature I just checked in" environment?
posted by DU at 10:26 AM on October 11, 2011 [3 favorites]
(I still have a copy of the page open where the text linked to "Let's Enhance" is literally "http://let's enhance/". Which would be pretty funny if it worked, but it doesn't. Or didn't. Is this an obscure Metafilter bug?)
posted by JHarris at 10:26 AM on October 11, 2011
theodolite, my favorite is when something is "more [x] than you can possibly imagine."
posted by JHarris at 10:27 AM on October 11, 2011
I think I heard the Double Rainbow guy around 1:15 - 1:20.
posted by ultraviolet catastrophe at 10:28 AM on October 11, 2011 [1 favorite]
That first link is supposed to be: http://www.youtube.com/watch?v=Vxq9yj2pVWk
posted by cccorlew at 10:31 AM on October 11, 2011
Who's the "minor television celebrity"? Can someone zoom in and enhance this video?!
posted by chavenet at 10:52 AM on October 11, 2011
It's Rainn Wilson.
posted by spiderskull at 10:53 AM on October 11, 2011
Yeah, pretty cool. From what I can gather, the plugin analyzes an image or section of an image and looks for the direction of the motion blur. That part is important: this seems to only work for motion blur / jitter. Then it does some computations and outputs a "blur kernel", which to me looked kinda like a swoopy Japanese character for the first example. I'm guessing the thickness of the kernel at different parts along the motion trajectory correlates to amplitude. So after isolating the blur kernel, it just kinda... reverses it. And you get a clear image.
Lay person, etc.
posted by lazaruslong at 10:56 AM on October 11, 2011 [2 favorites]
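To make lazaruslong's sketch concrete: the standard model is that a blurred photo equals the sharp image convolved with a blur kernel traced out by the camera's motion, and "reversing it" is deconvolution. Here's a minimal numpy sketch of that model; the diagonal streak kernel is made up, and real photos add noise that this ignores:

```python
import numpy as np

rng = np.random.default_rng(0)
sharp = rng.random((128, 128))        # stand-in for a sharp grayscale image

# Blur kernel: the path the camera traced while the shutter was open.
# A 15-pixel diagonal streak, a crude stand-in for the "swoopy" shape.
kernel = np.zeros((128, 128))
for i in range(15):
    kernel[i, i] = 1.0 / 15           # weights sum to 1: light is spread, not created

# Convolution theorem: blurring is multiplication in the frequency domain.
K = np.fft.fft2(kernel)
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * K))

# "Reversing it": divide the blurred spectrum by the kernel's spectrum.
restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) / K))
print(np.allclose(restored, sharp))   # True -- but only noise-free, kernel known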
So you're saying the deblurring tool needs to be applied to the video talking about the deblurring tool?
Sounds like they need a ....
metafilter.
Ba-dum tish.
posted by carter at 10:56 AM on October 11, 2011 [12 favorites]
Deconvolution using the motion kernel is not really new. The question is how they derive the kernel. Unfortunately, in the demo he just loads in some settings he saved earlier, which doesn't really tell you anything about how well it works at analysing a real-world image.
posted by unSane at 11:11 AM on October 11, 2011 [2 favorites]
Want.
I dropped my K-1000 down a pit in about '96, and the last 2 or 3 years' worth of pictures I took with it set on infinity are all just a bit blurred. I'd love to see what this plugin could do for them.
posted by Devils Rancher at 11:11 AM on October 11, 2011
I am curious to see how it can be used to enlarge, or change, the CoC (circle of confusion) on certain shots.
posted by bz at 11:16 AM on October 11, 2011
This may be a limited feature, but I think in the future we will have "rotate and enhance" for e.g. security cam footage, achieved with virtual or compound cameras (qv like 4 Metafilter posts). Joke's gonna be on you pedants.
posted by grobstein at 11:31 AM on October 11, 2011
Ba-dum tish.
Ahem, the correct form is: *puts on sunglasses* YEAAAAAAHHHHHHHHHHHHHHHHHH
posted by kmz at 11:32 AM on October 11, 2011 [3 favorites]
Just to clear this up for folks, this wouldn't work on images that are out of focus, or when the subject moved during the exposure, or beyond the actual resolution of the sensor. It only works to reverse camera movement. Won't be any good for security cameras or lenses that don't focus right.
Still very impressive.
posted by echo target at 11:32 AM on October 11, 2011
Gah, got motion sickness from that video. It's like they had a crappy camera mount that kept slowly moving off target.
posted by zengargoyle at 11:34 AM on October 11, 2011
I really want to watch the non-floating, horribly corrected video of this demo.
posted by mrzarquon at 11:36 AM on October 11, 2011
Enhance 57 to 19. Track 45 left. Stop. Enhance 15 to 23. Give me a hard copy right there.
posted by finite at 11:46 AM on October 11, 2011
The question is how they derive the kernel. Unfortunately, in the demo he just loads in some settings he saved earlier, which doesn't really tell you anything about how well it works at analysing a real-world image.
He didn't do it completely by hand. I expect it will be like panorama stitching - once upon a time it took a lot of manual tweaking to get right. Now panorama stitching is downright magical.
posted by GuyZero at 12:02 PM on October 11, 2011
It looks like they're building a path through which the camera traveled while the shutter was open based on the 'direction of blurriness' which you can clearly observe in most blurred photos. Taking that path, you could think of the photo as a big multiple exposure, with one hypothetical exposure for each time unit along the path, say. You then backtrack and figure out what the 'actual' image was, given the motion of the camera.
So this trick probably wouldn't work well if there were a lot of movement in the shot, such as if you had a tripod camera and a fast-moving sprinter or something.
It could perhaps be used to produce two different 'de-blurred' images, based on the start point of the camera and the end point of the camera. The small amount of parallax involved could then be used to create a very thin 3D image. But nothing like moving 45 degrees around a particular subject or anything like that...
posted by kaibutsu at 12:06 PM on October 11, 2011
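kaibutsu's "big multiple exposure" picture and the convolution model are the same thing, which is easy to check numerically (a toy sketch; the camera path here is invented):

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((64, 64))

# A hypothetical camera path while the shutter was open: (dy, dx) offsets.
path = [(0, 0), (0, 1), (1, 1), (1, 2), (2, 3)]

# One hypothetical exposure per time unit along the path, averaged.
multi_exposure = np.mean(
    [np.roll(img, (dy, dx), axis=(0, 1)) for dy, dx in path], axis=0)

# The same blur as a convolution: a kernel whose support is the path.
kernel = np.zeros((64, 64))
for dy, dx in path:
    kernel[dy, dx] = 1.0 / len(path)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel)))

print(np.allclose(multi_exposure, blurred))  # True: the two pictures agree
```

Which also suggests why subject motion breaks it: a fast-moving sprinter traces her own path, different from the camera's, so no single kernel fits the whole frame.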
Can we enable the img tag just for this thread so we can all post blurry versions of our comments?
posted by brundlefly at 12:33 PM on October 11, 2011
I wonder if it uses a technique like this: High quality single-image motion deblurring.
posted by SpookyFish at 12:39 PM on October 11, 2011
I have, right here on my desk, a 3x5 card that says I should put together an FPP on deconvolution (which really ought to be added to the tags for this post).
Blind deconvolution works just fine on out of focus images. The author of Unshake goes into detail on why this is.
If you want to look at some Fast Fourier Transforms of your holiday pictures (and who doesn't) I heartily recommend Fiji, which is like Photoshop for people who have particle accelerators in their basement (only it's a free download). If you save your FFTs as tiff files you can even turn them back into spatial domain pictures, much to the disbelief of everyone you show this trick to.
I'm also pretty sure that if you have a person who was moving in an otherwise still picture and the same picture taken with the same still mounted camera with no focus adjustments (i.e. every crappy security camera picture in every TV show ever) you could throw the two FFTs at some math and get back a much less blurred image of the person. Careful readers will note that "throw the two FFTs at some math" does not constitute a detailed description of how this might be achieved (and I'm real sure it ain't happening in the three mouse clicks that you see on NCIS - and don't get me started on the magic resolution Abby gets with what must be like 30 second HPLC runs).
Where this really comes into its own is in Deconvolution Microscopy, which is where all those super sharp multicolor cellular biology porn images you see these days have come from.
The giant laser at about 2:45 of this awesome video is doing something similar but it allows for non-blind deconvolution (about which I do not know enough to even attempt the most hand wavy of explanations.)
posted by Kid Charlemagne at 12:44 PM on October 11, 2011 [6 favorites]
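For anyone who wants to try the FFT round trip without a particle accelerator in the basement, numpy does it too. A toy sketch; note the inverse needs the full complex spectrum, not just the magnitude picture you look at:

```python
import numpy as np

rng = np.random.default_rng(2)
photo = rng.random((256, 256))        # stand-in for a holiday picture

# Forward transform: the image becomes a grid of complex frequency terms.
spectrum = np.fft.fft2(photo)

# What you'd actually look at (and could save as a TIFF): log magnitude,
# with the zero-frequency term shifted to the center for display.
display = np.log1p(np.abs(np.fft.fftshift(spectrum)))

# And back again, to the disbelief of everyone you show this trick to.
roundtrip = np.real(np.fft.ifft2(spectrum))
print(np.allclose(roundtrip, photo))  # True
```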
I'll go you one better than a blurred image of a post.
posted by Kid Charlemagne at 12:56 PM on October 11, 2011
I've actually been doing some stuff in this area recently. Here are some tools I've used:
ffmpeg -- extract frames from video into various image formats
imagemagick -- locate subimages, score and sort images, and align perspective and rotation. Also useful for bulk operations.
ale -- for Irani-Peleg enhancement. If you are looking for extreme quality you have to select good sample images and build your own alignment translation table.
A typical process would be:
-Use ffmpeg to get frames from the video.
-Select sample images. Reject blurry images.
-Modify the sample images with gamma, noise, and color filters to make certain details more prominent.
-Build your ale translation table.
-Use ale with various Irani-Peleg iterations and scaling factors to see what details emerge and what resolution you can achieve.
It is not point and click, though. I've been both utterly disappointed and amazed with the results. I'm just doing this as a hobby because I wanted to learn more about it. I bet the pros have better tools.
posted by humanfont at 1:10 PM on October 11, 2011 [3 favorites]
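A rough Python rendering of the first two steps of humanfont's recipe. The filenames are placeholders, and variance of the Laplacian is just one common sharpness heuristic, not necessarily the score humanfont used:

```python
import subprocess
from pathlib import Path

import numpy as np
from PIL import Image
from scipy.ndimage import laplace

# Step 1: use ffmpeg to get frames from the video.
Path("frames").mkdir(exist_ok=True)
subprocess.run(["ffmpeg", "-i", "input.mp4", "frames/frame_%04d.png"], check=True)

# Step 2: select sample images, rejecting blurry ones. Blur removes high
# frequencies, so blurrier frames have a lower-variance Laplacian.
def sharpness(path):
    gray = np.asarray(Image.open(path).convert("L"), dtype=float)
    return laplace(gray).var()

frames = sorted(Path("frames").glob("frame_*.png"))
scores = {f: sharpness(f) for f in frames}
cutoff = np.median(list(scores.values()))      # arbitrary keep-the-best-half cut
keepers = [f for f, s in scores.items() if s >= cutoff]
print(f"kept {len(keepers)} of {len(frames)} frames")
```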
Eh, you can't add data to a still image. Or, rather, you can but then you're getting made up data, not necessarily what was really there. We might be able to fake up a pretty good approximation of what was really there, but ultimately it's just guesswork.
Working with multiple frames from a video clip is a different matter, there you've got the data it's just spread out across time and patching it together is computationally tricky but doable (at least to an extent).
I think in the end what will change won't so much be deblurring old photos, but new tech that avoids the blur problem. Some people are looking at multiple depth cameras, devices that photograph at dozens of focal depths simultaneously. That plus snapping short video clips at a high shutter rate rather than a single still photo would pretty much eliminate blurring problems.
And as time and progress march on, eventually even the cheapest of crappy security cameras will be high-res by today's standards, which is another plus.
posted by sotonohito at 1:15 PM on October 11, 2011
Military computing tech (on the high-end of the spectrum) is a few years ahead of the mainstream. They have been working on the issue of enhancing out-of-focus images for quite a long time.
The Adobe demo is very cool, but as stated upthread, seems to work specifically on camera-motion-generated blur, where a directional vector can be worked out.
From what little I've heard from folks I've done some behind-the-scenes imaging consulting for, the magic that the military HW/SW does is enough to make you shit your pants, but I was also told it would be a long, long time before any of this trickled down to the desktop.
posted by dbiedny at 1:18 PM on October 11, 2011
Would this work?
1. Pick a kernel size to be larger than the maximum jitter you expect.
2. Scan through every pixel in the image and generate a kernel for each one.
2a. For every neighbor pixel that "looks like" the current pixel (just in terms of color value?), set the corresponding entry in the current pixel's kernel based on the neighbor's offset from the current pixel. Say the top-right neighbor is 50% "similar"; we set the top-right entry in the kernel to 0.5.
3. Average the kernels for every pixel in the photograph. Since all the pixels should share one common trending kernel from the blur, it should rise above the noise.
4. Deconvolve
I'm by no means an image processing expert, but I've written image processing code for the New Horizons/Kuiper Belt target selection difference-images. Astronomical image processing is very different from this, however, since the point-spread function is very well defined, so you always know what a "good" image will look like.
Anybody have the expertise to critique my simple (probably wrong) algorithm?
posted by ilikemefi at 1:20 PM on October 11, 2011
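Not an expert either, but here is a literal numpy transcription of steps 1-4, vectorized so the per-pixel loop becomes one array operation per offset (the similarity measure is a guess at what "looks like" means). One critique falls out of running it: smooth regions make every neighbor look like the current pixel, so the motion streak shows up as a faint ridge on a big flat pedestal rather than as a clean kernel:

```python
import numpy as np

def estimate_kernel(img, radius=7):
    """Steps 1-3: per-pixel similarity kernels, averaged over the image.

    img is grayscale, scaled to [0, 1]. For each offset in the window we
    measure how similar every pixel is to its neighbor at that offset
    (1 = identical value, 0 = maximally different) and average over all
    pixels, giving one kernel entry per offset.
    """
    size = 2 * radius + 1
    kernel = np.zeros((size, size))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            neighbor = np.roll(img, (dy, dx), axis=(0, 1))
            kernel[dy + radius, dx + radius] = (1.0 - np.abs(img - neighbor)).mean()
    return kernel / kernel.sum()   # normalized input for step 4's deconvolution

# Toy test: blur with a known diagonal streak, then look for the streak.
rng = np.random.default_rng(3)
sharp = rng.random((128, 128))
true_kernel = np.zeros((128, 128))
for i in range(9):
    true_kernel[i, i] = 1.0 / 9
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(true_kernel)))

est = estimate_kernel(blurred)
diagonal = np.mean([est[7 + i, 7 + i] for i in range(-3, 4)])
print(diagonal > est.mean())       # True: the blur direction does stand out
```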
Well, I neglected to mention the folks I did the work for, but they would know. Now, if you think I am bullshitting about that sentence, feel free. I'm just another moron on the internet.
posted by dbiedny at 1:36 PM on October 11, 2011
Eh, you can't add data to a still image. Or, rather, you can but then you're getting made up data, not necessarily what was really there. We might be able to fake up a pretty good approximation of what was really there, but ultimately it's just guesswork.
The technique in the video isn't adding data; it's removing the extra data caused by movement during the exposure. This is more about reduction than it is about enhancement.
posted by furtive at 1:56 PM on October 11, 2011
Eh, you can't add data to a still image. Or, rather, you can but then you're getting made up data...
Does the second lens in a refracting telescope add made-up data to the image you see? Because if you take it away you get an incredibly blurry image, but if the image is blurry going into the lens it must be adding data, right? This technique is just using math to do the transformations that lenses would do.
In fact, if you can manage a very precisely controlled light source, you don't need lenses or math.
posted by Kid Charlemagne at 2:12 PM on October 11, 2011 [1 favorite]
Given the quality of the video, I guess we'll have to take their word for it that this actually works.
posted by ShutterBun at 2:26 PM on October 11, 2011
We might be able to fake up a pretty good approximation of what was really there, but ultimately it's just guesswork.
The entire theory of image compression is predicated on the fact that computers (with the right software) can be pretty darn good at guesswork and making up data.
posted by ShutterBun at 2:42 PM on October 11, 2011
Eh, you can't add data to a still image. Or, rather, you can but then you're getting made up data...
The point of this kind of deconvolution is that motion blur does not remove information from an image, just distributes it differently. If you know or can work out the motion that caused the blur, you can deconvolve to obtain the information that is encoded in the blurred image.
posted by unSane at 2:53 PM on October 11, 2011 [1 favorite]
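And when you do know (or have estimated) the kernel, the deconvolution itself is a frequency-domain division, damped where the kernel is weak so sensor noise doesn't explode. A generic Wiener-filter sketch, not whatever Adobe is doing; the noise level and kernel are made up:

```python
import numpy as np

rng = np.random.default_rng(4)
sharp = rng.random((128, 128))
kernel = np.zeros((128, 128))
for i in range(15):
    kernel[i, i] = 1.0 / 15           # known 15-pixel diagonal motion

K = np.fft.fft2(kernel)
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * K))
noisy = blurred + rng.normal(0, 0.01, blurred.shape)   # a little sensor noise

# Naive inverse filter: divides by near-zero frequencies, amplifying noise.
naive = np.real(np.fft.ifft2(np.fft.fft2(noisy) / K))

# Wiener filter: the same division, damped where |K| is small.
nsr = 1e-3                            # assumed noise-to-signal ratio
wiener = np.real(np.fft.ifft2(np.fft.fft2(noisy) * np.conj(K) / (np.abs(K) ** 2 + nsr)))

# The Wiener error should come out far smaller than the naive one.
print(np.abs(naive - sharp).mean(), np.abs(wiener - sharp).mean())
```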
Ahem, the correct form is: *puts on sunglasses* YEAAAAAAHHHHHHHHHHHHHHHHHH
Rats, I was thinking this.
Meme, schmeme ...
posted by carter at 4:02 PM on October 11, 2011
Here's a good page with more of the details showing how this sort of thing works.
I kind of doubt the military has anything super special in this area just because other aspects of the technique have been in the sciences forever. You can do a Fourier transform with a really fast computer, but it's also easy to do with ancient analog electronics.
posted by Kid Charlemagne at 10:41 PM on October 11, 2011
What kind of screen / projector is being used in the Adobe demo? Looks awful good. Maybe I've been in the corporate backwoods too long.
posted by defcom1 at 9:20 AM on October 12, 2011