All Summer in a Frame
May 11, 2016 8:00 AM Subscribe
Jason Shulman takes single long-exposure photographs of entire films.
These are like beautiful Impressionist paintings. I really want the Under the Skin one.
posted by Kitteh at 8:05 AM on May 11, 2016 [2 favorites]
It's amazing the difference the completely stationary camera in A Trip To The Moon makes. The rest are more-or-less impressionist like Kitteh says, but that one feels so dynamic and futurist. It reminds me a bit of Malevich's The Knifegrinder.
posted by griphus at 8:15 AM on May 11, 2016 [3 favorites]
Hiroshi Sugimoto's portraits of movie theatres got very different results from a very similar idea.
posted by Mike Smith at 8:26 AM on May 11, 2016 [4 favorites]
I thought 2001: A Space Odyssey translated well, too. So did The Shining. Something about the architecture? Confined spaces? Rear Window has a bit more structure to it, as well.
Duel really looks like the desert!
posted by notyou at 8:27 AM on May 11, 2016
are you sure these are single long exposures of entire films? i suspect they are edited in some way. imho average is more boring than this (speaking from some failed experiences of making mediocre art from various automated processes).
posted by andrewcooke at 8:34 AM on May 11, 2016 [2 favorites]
He says:
Interestingly, it came about after an unexpectedly successful experiment. “I set up my camera in front of my computer and pointed it at a movie, expecting that, if you expose the negative for an hour and a half with a film in front of it, you’d get a bit like what you get when you mix balls of Play-Doh together – just a brown monotone hue,” - from here
I can't help wondering if a piece of software could somehow take a movie file and produce one of these images as output.
posted by vacapinta at 8:40 AM on May 11, 2016 [1 favorite]
hmmm. thanks. (fwiw i don't (and didn't) see any text on that linked page; may be blocked by some plugin).
posted by andrewcooke at 8:44 AM on May 11, 2016 [1 favorite]
Hiroshi Sugimoto's portraits of movie theatres got very different results from a very similar idea.
Those are very nice! Though I'm sort of confused at the writeup at that link; the suggestion that these were film-length exposures being made surreptitiously at actual screenings clashes with my assumption that there'd be muddy but very-much-there silhouettes of audience members in some of the seats.
are you sure these are single long exposures of entire films?
I found a couple of short writeups about Shulman's work on this, but nothing that was interesting/enlightening enough to busy up the post itself with. A bit from Wired, another from, uh, AnOther. No details on the process other than large camera, large monitor, and it sounds like a lot of photographs that didn't make the cut precisely because they weren't particularly interesting. (Avatar just a blue smear, etc.)
I can't help wondering if a piece of software could somehow take a movie file and produce one of these images as output.
Same! Though it's interesting to think about how a photographic exposure is going to differ from a probable naive "average the pixels" approach there; there's going to be some non-linearity in how the film stock responds to varying levels of brightness over the course of an exposure, which you'd probably have to model with some extra weighting in an algorithmic approximation to get a similar treatment.
posted by cortex at 8:45 AM on May 11, 2016 [3 favorites]
NSFW tag? I didn't need to see that Deep Throat one with my boss walking in on me, jesus
posted by beerperson at 8:52 AM on May 11, 2016 [5 favorites]
beerperson: "NSFW tag? I didn't need to see that Deep Throat one with my boss walking in on me, jesus"
The one blended image version is a lot more exciting than the actual film (at this distance of 40 years and given what you can find on the internets these days)
posted by chavenet at 8:56 AM on May 11, 2016
Whistle and I'll come to You seems to win for Grimdark
posted by OHenryPacey at 8:57 AM on May 11, 2016
I can't help wondering if a piece of software could somehow take a movie file and produce one of these images as output.
What's fun is taking video of a scene from a static viewpoint and then taking the median value across all frames. Pretty handily removes all moving objects.
posted by RobotVoodooPower at 9:35 AM on May 11, 2016 [2 favorites]
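RobotVoodooPower's median trick can be sketched in plain Python. Toy nested lists stand in for grayscale video frames here (a real pipeline would pull frames with OpenCV or NumPy); the per-pixel median keeps the static background and discards transient moving objects:

```python
from statistics import median

def median_frame(frames):
    """Per-pixel median across a stack of equally sized grayscale frames.

    A pixel that shows the static background in most frames keeps that
    value; a moving object passing through is an outlier and vanishes.
    """
    height, width = len(frames[0]), len(frames[0][0])
    return [
        [median(f[y][x] for f in frames) for x in range(width)]
        for y in range(height)
    ]

# Toy example: a 1x3 "scene" where a bright object (255) passes through.
frames = [
    [[10, 255, 30]],   # object over the middle pixel
    [[10, 20, 30]],
    [[10, 20, 30]],
    [[255, 20, 30]],   # object over the left pixel
    [[10, 20, 30]],
]
print(median_frame(frames))  # the static background survives: [[10, 20, 30]]
```

The mean, by contrast, would leave a faint ghost at every pixel the object ever touched, which is exactly the smear these film exposures are made of.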
expecting that, if you expose the negative for an hour and a half with a film in front of it, you’d get a bit like what you get when you mix balls of Play-Doh together – just a brown monotone hue,”
and this isn't mostly what he got?
I can't help but feel that it would be much more rewarding (in terms of sumptuous visual mmmm) if this trick were applied to single scenes/sequences. Because though the actual hues of the various murky blurs vary, they're still just murky blurs. To my eyes.
posted by philip-random at 9:47 AM on May 11, 2016
I would like the Digby one, huge, on my living room wall.
posted by mochapickle at 9:48 AM on May 11, 2016
"and this isn't mostly what he got?"
I haven't seen most of these films but if you look at the Shining one then various elements start to jump out at you. You can see a flag, windows, a hallway, Jack with the axe, etc.
Also, is there text on the page or isn't there?
posted by I-baLL at 9:51 AM on May 11, 2016 [2 favorites]
What's fun is taking video of a scene from a static viewpoint and then taking the median value across all frames. Pretty handily removes all moving objects.
Take a look at his image of Voyage de la Lune and then go look at the film itself which is basically a series of static sets with people moving around. (though the title is 'Le Voyage dans la Lune')
Not surprisingly his image is basically the static sets. This one is an extreme example but the recognisable features in the others are essentially the long static shots.
posted by vacapinta at 9:52 AM on May 11, 2016 [1 favorite]
I can't help wondering if a piece of software could somehow take a movie file and produce one of these images as output.
Here you go! This python notebook uses OpenCV to average every 24th frame of the given video file: video_average_cv2.ipynb
It's currently ridiculously slow, but if you're patient it should produce the right output. I'm working on speeding it up now...
posted by LegoForte at 9:54 AM on May 11, 2016 [7 favorites]
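The linked notebook isn't reproduced in the thread, but the averaging step it describes (sampling every 24th frame and taking the mean image) can be sketched as a minimal pure-Python stand-in. Nested lists play the role of decoded grayscale frames; the function name and `step` default are illustrative, not taken from the notebook:

```python
def average_sampled_frames(frames, step=24):
    """Mean image over every `step`-th frame of a clip.

    `frames` is a list of grayscale frames (nested lists of pixel values);
    a real implementation would decode them from a video file instead.
    """
    sampled = frames[::step]
    n = len(sampled)
    height, width = len(sampled[0]), len(sampled[0][0])
    return [
        [sum(f[y][x] for f in sampled) / n for x in range(width)]
        for y in range(height)
    ]

# 48 one-pixel frames: bright for the first half, dark for the second.
frames = [[[200 if i < 24 else 0]] for i in range(48)]
print(average_sampled_frames(frames))  # frames 0 and 24 are sampled: [[100.0]]
```

Sampling every 24th frame (roughly once per second of footage) keeps the runtime manageable while barely changing the result, since adjacent frames are nearly identical anyway.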
I can't help but feel that it would be much more rewarding (in terms of sumptuous visual mmmm) if this trick were applied to single scenes/sequences.
Going back to the automation idea, it might be interesting to see a treatment like that that reduced a film to a sequence of single-scene frames, so the film itself becomes an ordered set of averaged/smeared scene-stills.
I wonder if you could get something solid there just by searching for keyframes in an mpeg and using those as the split points for starting each new smear series. Though my (very very limited) understanding is that keyframing would be happening more on a cut-by-cut than scene-by-scene basis, or even mid-cut if it's a long one, so maybe it'd need to be something a little more clever than that.
posted by cortex at 9:55 AM on May 11, 2016
Wow. These are beautiful. I want all of them.
posted by feckless fecal fear mongering at 9:56 AM on May 11, 2016
Looking at these I'm pretty sure there's a strong hand-of-the-artist at work here, it's not just an algorithm or simple long exposure. The Shining is my best evidence, specifically the highly visible Stuart Ullman nameplate. I can't imagine that was on-screen long enough to make that impression on any simple time averaging algorithm.
posted by Nelson at 10:19 AM on May 11, 2016 [1 favorite]
There's no algorithm. It's actual film being exposed.
posted by feckless fecal fear mongering at 10:28 AM on May 11, 2016 [1 favorite]
I wondered about The Shining one too, but it's probably because there just wasn't much else in that section of the frame to cover up the nameplate.
posted by yhbc at 10:35 AM on May 11, 2016
A little Googling on StackOverflow finds this:
ffmpeg -i input.vid -vf "tblend=average,framestep=2,tblend=average,framestep=2,setpts=0.25*PTS" output.vid
This outputs the average of every four consecutive frames, so repeated application will (should) eventually average the entire thing.
posted by RobotVoodooPower at 10:38 AM on May 11, 2016 [4 favorites]
That interview scene in Ullman's office is actually quite long! A bunch of static lingering on his desk as he talks about the hotel history stuff, between cuts over to Jack. So it's not surprising in that sense that the nameplate would end up sticking out: it gets a pretty fair chunk of screen time, it has sharp edges, and the bright white of the lettering in particular is going to leave an outsized share of exposure on the film compared to even relatively prominent static darker elements in the film.
posted by cortex at 10:42 AM on May 11, 2016 [3 favorites]
I can't imagine that was on-screen long enough to make that impression on any simple time averaging algorithm.
To be fair, that nameplate occupies the bottom of the screen prominently for a solid minute (starts at 48 seconds in)
posted by vacapinta at 10:42 AM on May 11, 2016 [2 favorites]
These are delightful, and happily extremely divergent from Sugimoto's ideas.
posted by Theta States at 10:44 AM on May 11, 2016
Here are a few more generated with a more robust version of the Python script: video_average_pyav.ipynb. The image for Voyage to the Moon is actually not that far off the original, but it's clear that I'm handling the colors differently from the film used by the artist.
posted by LegoForte at 12:10 PM on May 11, 2016 [6 favorites]
Nice, LegoForte.
I'm trying to think through my gut feelings about film exposure as far as doing something more than just a basic average across each stack of pixels to produce the final image, but am having a hard time getting past my handwavey bits. But in essence, I wonder if it'd make sense to either (a) weight brighter values a little stronger than linearly with something like n^1.2, or (b) track a kind of high-water mark for bright values where the brighter a spot on the screen has been at some point during the film, the brighter proportionally that spot will be at the end, total average for that spot notwithstanding. A kind of burn-in aesthetic, that latter one.
I also wonder if handling the color channels as three separate averages would make a difference. I can't make a mathematical argument offhand for why that would be, just something itching at me as far as how color film can have different response curves for exposure on different color-sensitive substrates.
But that's all also probably a lot more fiddly than the script you've got so far.
posted by cortex at 12:32 PM on May 11, 2016
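Both of cortex's guesses can be sketched per pixel in a few lines; the exponent 1.2 comes from the comment, while the renormalization and the `mix` blend strength are arbitrary choices, not anything from the thread:

```python
def nonlinear_average(pixel_over_time, gamma=1.2):
    """Option (a): average with brighter moments weighted superlinearly
    (value**gamma), then compress back so a constant input is unchanged.
    gamma=1.2 is cortex's guessed exponent."""
    total = sum(v ** gamma for v in pixel_over_time)
    return (total / len(pixel_over_time)) ** (1 / gamma)

def high_water_blend(pixel_over_time, mix=0.5):
    """Option (b): blend the plain mean with the peak brightness ever seen
    at that spot; `mix` is an arbitrary burn-in strength."""
    mean = sum(pixel_over_time) / len(pixel_over_time)
    return (1 - mix) * mean + mix * max(pixel_over_time)

steady = [0.5] * 10
flash = [0.1] * 9 + [1.0]   # mostly dark, with one bright frame
print(round(nonlinear_average(steady), 3))         # unchanged: 0.5
print(nonlinear_average(flash) > sum(flash) / 10)  # brighter than plain mean: True
print(high_water_blend(flash))                     # the one flash leaves a mark
```

By the power-mean inequality, option (a) always comes out at or above the plain mean for gamma > 1, so brief bright moments leave more of a trace, which is the burn-in feel cortex is after.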
The image of the character lying down in UNDER THE SKIN is [I recall] the first scene of the movie. But it only lasts for a couple minutes. So how can that image remain when he does an exposure of the entire film? Could it be that the first image has more of an impact on his exposure than any other image thereafter?
posted by Rashomon at 1:12 PM on May 11, 2016 [1 favorite]
Of all the ones I've seen, Duel looks most like I expected it to. You can even make out the spectre of the truck because of so many rear-view mirror shots. They're all gorgeous, though. What a great project.
posted by Devils Rancher at 1:18 PM on May 11, 2016 [1 favorite]
LegoForte, maybe including display gamma correction before summation would produce more similar results?
posted by ikalliom at 2:03 PM on May 11, 2016
if the guy is doing long exposures on film, then maybe the key is to simulate reciprocity failure.
posted by joeblough at 3:29 PM on May 11, 2016 [1 favorite]
Rashomon, I remember that early shot of "Under the Skin" as also being one of the brightest and longest fixed-camera shots of the movie, which is an alternative explanation.
posted by Mapes at 5:19 PM on May 11, 2016
Yeah, this is coming from ~15 years ago as a highschool photography student, but if the entire film is dark and muted, while the first scene is bright and light emitting, then you'll get that as a standout image. Haven't seen Under the Skin, and also worked mainly with black and white, as a caveat.
posted by codacorolla at 9:13 PM on May 11, 2016
LegoForte:
You might try converting the input frames to a linear color space, then doing the averaging, then converting the average back to sRGB -- that's roughly what the camera-pointed-at-display is doing.
posted by reventlov at 2:41 PM on May 12, 2016
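reventlov's suggestion (and ikalliom's gamma-correction point) uses the standard sRGB transfer functions from IEC 61966-2-1; a minimal sketch on single-channel samples, assuming values normalized to [0, 1]:

```python
def srgb_to_linear(c):
    """IEC 61966-2-1 sRGB decoding; c in [0, 1]."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Inverse of srgb_to_linear: re-encode linear light for display."""
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def average_in_linear_light(values):
    """Decode sRGB samples to linear light, average, and re-encode,
    which is roughly what the camera-pointed-at-display setup does."""
    linear = [srgb_to_linear(v) for v in values]
    return linear_to_srgb(sum(linear) / len(linear))

# Averaging black and white in linear light yields a noticeably lighter
# gray than the naive average of the encoded values (0.5), because real
# light adds linearly while sRGB encoding does not.
print(average_in_linear_light([0.0, 1.0]))
print((0.0 + 1.0) / 2)
```

This alone would push the naive pixel averages toward the brighter, washed-out look of the actual exposures; per-channel handling falls out for free by running each of R, G, and B through the same pipeline.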
I'm looking at this on my phone and, curses! I can't find out which picture is which film. These are gorgeous but I would love to know which films they are!
posted by WalkerWestridge at 8:29 AM on May 17, 2016
posted by phunniemee at 8:01 AM on May 11, 2016