Half-Life Photos
May 30, 2005 5:18 AM
OK, I didn't expect to be impressed, but that was pretty cool. Think about the Machinima possibilities...
posted by Simon! at 6:03 AM on May 30, 2005
Oh this is so friggin' coooool. And thanks, SM, for the extra links. Particularly the more link, where I had my "Ah-ha!" moment.
posted by Civil_Disobedient at 6:07 AM on May 30, 2005
Cool.
I wonder why the forum members in the thread kept seeking a point to it all. Cool is cool.
posted by slf at 6:11 AM on May 30, 2005
His high dynamic range lighting fighting technique is unstoppable. Great stuff.
posted by gwint at 6:18 AM on May 30, 2005
MetaFilter: Why do dogs lick their dick?
Again... cool is cool.
posted by farishta at 7:01 AM on May 30, 2005
Electricity bills are going up.
posted by gorgor_balabala at 7:19 AM on May 30, 2005
Some of the HL2 images remind me of Charlie White's work. "That's a big robot in the middle of some guy's yard."
Video games are going to look like this before you know it.
posted by Eamon at 7:21 AM on May 30, 2005
I don't get it. I mean, I get the limitations of film and the idea of dynamic range, but I don't get what it has to do with Half-Life (other than that the game is probably rendered in HDR) or why it requires you to photograph things off a reflective sphere. None of those links appear to have been written for the layman. Or maybe I am just missing some key fact when I read the FA over and over.
posted by Eideteker at 7:46 AM on May 30, 2005
Eideteker: It shows 3D and gaming geeks how to insert and render game items in photographs photorealistically.
Cool is cool.
posted by Enron Hubbard at 8:02 AM on May 30, 2005
I spent an hour or two reading about the technique, because it looks pretty cool.
There are 2 techniques here.
One is HDR, which allows for a higher dynamic range (thus the name) of light in a picture. I'm not sure this is necessary for making the CG models look like they belong in the photos.
The second is using a "light probe" to map the light from the photo onto a CG model so that the model looks like it belongs in the photo. The reflective sphere is used to get a map of what the light from the scene that hits the model looks like. The sphere picture is then used to map/project the light from the scene onto the CG model, which is superimposed in the scene.
The use of Half-Life is not necessary.
That's what I gathered anyway.
posted by Bort at 8:07 AM on May 30, 2005
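Bort's light-probe step, sketched in code for the curious: a minimal, illustrative unwrap of a mirror-ball photo into incoming-light directions and colours. It assumes a square crop of the ball and an idealized straight-on camera; the file name is invented, and this isn't the method from the linked tutorial, just the general idea.

```python
# Minimal sketch: unwrap a mirror-ball ("light probe") photo into
# (direction, colour) samples a renderer could treat as incoming light.
# Assumes a square crop of the ball and an idealized orthographic camera
# looking down -Z; the file name is purely illustrative.
import numpy as np
from imageio.v3 import imread  # any image loader that returns an array works

probe = imread("mirror_ball_crop.hdr").astype(np.float32)  # H x W x 3
h, w = probe.shape[:2]

# Normalized pixel coordinates in [-1, 1] across the sphere.
ys, xs = np.mgrid[0:h, 0:w]
u = (xs + 0.5) / w * 2.0 - 1.0
v = 1.0 - (ys + 0.5) / h * 2.0
inside = u**2 + v**2 <= 1.0            # pixels that actually lie on the ball

# Sphere surface normal at each pixel, as seen by the camera.
nz = np.sqrt(np.clip(1.0 - u**2 - v**2, 0.0, 1.0))
normal = np.stack([u, v, nz], axis=-1)

# Reflect the viewing ray (0, 0, -1) about the normal: that is the world
# direction the ball is "seeing" at that pixel, i.e. where the light came from.
view = np.array([0.0, 0.0, -1.0])
directions = view - 2.0 * (normal @ view)[..., None] * normal

light_dirs = directions[inside]        # N x 3 unit vectors
light_rgb = probe[inside]              # N x 3 linear (ideally HDR) values
print(light_dirs.shape, light_rgb.shape)
```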
None of those links appear to have been written for the layman.
I'll give it a shot:
The basic idea of HDR lighting is to recreate natural lightning as accurately as possible.
The approach used is to make a 360 degree, full-spherical panorama photograph of the original scene and place the virtual object you want to illuminate in the center (in a 3d program).
Now the textured sphere casts light at the object at its center in exactly the same way as the original environment would have. It is therefore relatively easy to composite an artificial 3d object into a real scene, since the lightning on the object will look 100% natural.
HDR lightning requires the scene to be rendered using the radiosity method, i.e. light bouncing off objects is calculated in addition to direct light, resulting in very accurate shadows and light.
The game Half Life 2 is not rendered in HDR. There are supposed to be some "HDR maps" soon, but I'm not quite sure what that means. I guess it's just a marketing term for something else, because no CPU or GPU available for today's PCs is even remotely capable of real-time radiosity.
As for why Half Life, you could use any 3d model. I guess someone got the inspiration from the term "HDR maps", and used the HL models because they were sufficiently detailed to save him modelling his own :)
posted by uncle harold at 8:16 AM on May 30, 2005
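And a toy version of the render step described above, where the object sits inside the captured environment and is lit by it: just a direct cosine-weighted gather over the probe samples from the earlier sketch, with no bounce light or shadows, so a crude stand-in for radiosity rather than the real thing. Names and conventions here are assumptions, not anyone's actual pipeline.

```python
# Toy stand-in for lighting a CG surface point from the captured environment:
# a cosine-weighted gather of the probe's radiance over the hemisphere above
# the surface normal. Real renderers add visibility, bounces, speculars, etc.
import numpy as np

def diffuse_from_probe(normal, light_dirs, light_rgb, albedo):
    """Lambertian shading from (direction, colour) light-probe samples.

    normal     : (3,) unit surface normal at the shading point
    light_dirs : (N, 3) unit vectors toward the environment
    light_rgb  : (N, 3) linear RGB radiance seen along each direction
    albedo     : (3,) diffuse reflectance of the surface
    """
    cos_theta = np.clip(light_dirs @ normal, 0.0, None)   # upper hemisphere only
    # Crude estimate of the irradiance integral; assumes roughly uniform solid
    # angle per sample, which is only approximately true for a ball probe.
    irradiance = (light_rgb * cos_theta[:, None]).sum(axis=0) * (2 * np.pi / len(light_dirs))
    return albedo / np.pi * irradiance

# e.g. a mid-grey, upward-facing patch lit by the probe unwrapped earlier:
# colour = diffuse_from_probe(np.array([0.0, 1.0, 0.0]), light_dirs, light_rgb,
#                             np.full(3, 0.5))
```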
Yeah, I was confused at first on what the sphere was used for. It seems, as Bort put it, that it is only used to get a "map" of where the light in the scene is coming from so you can render the Half-Life object with the same lighting orientation and intensity. Still, it is really interesting what you can do with a 3D modeled object, lighting tricks, and a lot of time on your hands...
posted by qwip at 8:16 AM on May 30, 2005
lighting, not lightning, of course. Sorry.
posted by uncle harold at 8:19 AM on May 30, 2005
I don't get it.
Ok, think about a photograph (any photograph). That photo has light sources in it--the sun coming in through venetian blinds, an office with a bunch of fluorescent overhead lights... whatever.
Now, imagine you could capture the way the light interacts with the objects in a scene. A bright sunny day gives you harsh contrasts, a cloudy day gives you saturated colors, a single-point desk lamp creates stark shadows, etc. By placing a metal sphere in the picture, you're capturing 180 degrees of the light and reflection, in reference to the observer of the sphere (you, the photographer).
What they're doing is taking these captured reflections and using them as a reference for the lights in the entire scene. Then you fire up your 3D modeling program, design whatever 3D object you want, but instead of illuminating it with spot-lights or omni-directional lights or whatever other lights that come with the 3D program, you're using this reference you've captured in the picture. This lighting model is then applied to the object you're rendering.
The result is that the rendered object will look like it's illuminated just like the picture you took. The upshot is that you can now place that object in the original scene, and it looks seamless.
Normally 3D modelers spend (waste) tons of time trying to duplicate the lighting in a scene, then composite the rendered object with the original picture. But it rarely comes out just right, so the 3D object stands out like a sore thumb. This technique is a wonderful shortcut to trying to duplicate all the complex lighting in any particular scene, and it looks better to boot.
posted by Civil_Disobedient at 8:26 AM on May 30, 2005
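The final step mentioned above, putting the lit render back into the original photo, is at its simplest an alpha "over" composite. A sketch under that assumption, deliberately leaving out the shadows the object should cast on the real scene (that part needs a differential render and is where the hours go):

```python
# Simplest possible composite of a rendered object over the original photo.
# Assumes the render and the photo are the same size and in linear colour;
# cast shadows and colour grading are deliberately left out.
import numpy as np

def composite_over(render_rgb, render_alpha, photo_rgb):
    """Alpha-'over' the rendered object (H x W x 3 plus H x W alpha) onto the photo."""
    a = render_alpha[..., None]                   # coverage of the CG object
    return render_rgb * a + photo_rgb * (1.0 - a)
```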
a-ha!
posted by gorgor_balabala at 9:31 AM on May 30, 2005
I suppose the opposite could be done, too: Say, in the latest Star Wars where many of the scenes are mostly 3D. Take the actors and put them on a blue stage with lighting to mimic the lighting of the 3D set they will be superimposed onto.
posted by Mr.Encyclopedia at 10:10 AM on May 30, 2005
odinsdream : "that guy in the wrinkly space suit with the helmet, the one sitting on the bed... that's not a person in a wrinkly home-made suit? That's a computer model composited into the scene?"
Exactly. It was the image I found most impressive, as well. Look at the edge of his thigh pads, and it's clear that he's a computer render (the original model didn't spend a lot of polygons on that section).
odinsdream : "Has someone already done this?"
I gather from one of Smart Dalek's (very excellent) links that it's pretty much being done, though I'm not so sure about the details (90 degree angles, sphere during filming, etc.). I suspect they just do a scan of the scene's lighting before or after filming the scene, and use the light at that point to composite in.
posted by Bugbread at 10:18 AM on May 30, 2005
hang on, people are spreading misinformation, here. That's not what HDRI is.
HDRI stands for "High Dynamic Range Image." What is a high dynamic range? Well, that's the property of the range of light in a picture at different exposures. What the hell does that mean? Well...
When you take a picture, you set (or the camera sets for you) the exposure of the image before you click. The exposure is the length of time the camera's shutter opens for, exposing the film or CCD as it does so to the light and thereby generating your picture. Now, the length of the exposure will drastically alter the way the picture looks because light will continually affect the exposed film/ccd until you close that shutter, so that if it's held open too long the image will seem way too washed out and white, and if it's held open for too short a time it will seem too dark and obscure. (This is also why moving objects look blurry in low light photos. The longer the shutter is open, the greater the chance that an object will not be in one precise position for the entire duration of the exposure. If you watch Richard Pryor: Live in Concert, you'll see that he's kind of followed by a red trail the whole movie because they had to use a very long exposure time during filming because of the venue's low light.)
Anyway, HDR is a way to make pictures that show that same image at a variety of different exposures. So you'll have a dark version at very low exposure, then you'll have washed out bright versions at very high exposure, and of course a number of middle-ground versions that look closer to normal. But how can one image show all these different versions, you ask? Well, they can't. We don't have any way to see an image with all that data simultaneously. Your monitor can't display it, and most software can't read it. HDRShop is an exception, and certain 3rd party renderers for 3d applications (like mental ray for Maya, SoftImage and vRay for 3DSMax, I believe) can parse the data and use it to render things with accurate lighting. In HDRShop, if you open an HDRI, it will just show you the median, or middle range exposure for the image by default, but you can then tell the app to show you darker or lighter exposures, depending on what you want to see. So the HDR image has a lot of data that you only see a small part of at one given time.
So how does this apply to these images? Well, imagine that you have a normal, NON-HDR image that you want to put those models in. BUT! in the image you have a window with sunlight coming in and a bed with white sheets on it. How would your 3d program tell the difference between the white of the window and the white of the sheets? If you used the NON-HDR image to light the scene, it would probably think the sheets were a light source and your character sitting on the bed would be lit unrealistically from underneath. Oops. Now, in an HDR image, it would be clear that the bedsheets were not a light source because at lower exposures they would be darker where the window would still be a bright white, just like a light source should be. That's just ONE application of the many that HDRI is capable of. Hope this helps, and if anyone has any questions, feel free to ask them in thread or to email me.
posted by shmegegge at 12:25 PM on May 30, 2005
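The window-versus-sheets point above, in made-up numbers: in an ordinary 8-bit photo both clip to 255, but in a linear HDR image the window is hundreds of times brighter, so stopping the exposure down tells them apart. A tiny sketch with invented radiance values:

```python
# Illustration of why HDR data can tell a light source from a bright surface.
# The radiance values below are invented, purely for illustration.
import numpy as np

hdr = {"white sheets": 1.2, "window (sun)": 900.0}   # linear relative radiance

def displayed_8bit(value, exposure):
    """Simulate a camera/monitor: scale by exposure, clip, quantize to 0-255."""
    return int(np.clip(value * exposure * 255.0, 0, 255))

for label, exposure in [("normal exposure", 1.0), ("-4 stops", 1.0 / 16)]:
    print(label, {k: displayed_8bit(v, exposure) for k, v in hdr.items()})

# normal exposure -> both read 255: the renderer can't tell light from laundry.
# -4 stops        -> the sheets fall to ~19 while the window still clips at 255,
#                    which is how the HDR data "knows" where the real light is.
```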
shmegegge : "We don't have any way to see an image with all that data simultaneously."
Actually, one of Smart Dalek's links mentions the Sunnybrook HDRI monitor. But if you mean "the vast, vast, vast majority of us", then, yeah, you're on the money.
shmegegge : "Hope this helps"
That last paragraph helped absolutely fucking well! I understood from the links what HDRI was, but I wasn't clear on what that had to do with the spherical-light-source-3D-composition bit. Your explanation cleared that up extremely well.
posted by Bugbread at 12:48 PM on May 30, 2005
shmegegge, so the 3D program has a kind of threshold value that can be set to tell the sheets from the window? I guess that more generally (having only set lights as given in maya to simulate a proper setup), i still don't quite get how the 3d program is to know what represents what in the 'sphere'. Maybe i just have to work with it.
also this whole hdr thing got me thinking of when i took multiple pictures of my friend's bedroom: The white balance on my digicamera didn't set right. The camera couldn't capture both the sunlight and the interior light adequately in one shot, so i just layered like four of them and erased sections in photoshop, which was time-consuming but kind of worth it for the final effect.
does hdr also assist in the individual reading of light sources, in a similar way, so that the colors are represented accurately?
posted by gorgor_balabala at 1:07 PM on May 30, 2005
hang on, people are spreading misinformation, here. That's not what HDRI is
Maybe incomplete information, but not misinformation.
Both the mirror ball->sphere->radiosity bit and the high dynamic range bit are essential parts of the process, and one of them alone does nothing as far as the linked images go.
People seemed to have problems with the role of the mirror ball, so that's the part we explained.
posted by uncle harold at 1:55 PM on May 30, 2005
Can HDR techniques be used for other purposes? It seems like some of the "examples" don't have anything added (not the computer ones, more like the atrium pic) and are just using it as a technique to create glow/glare on really bright sources and contrast through the middle range. Seems like an interesting way to get around the tendency of some DigiCams to wash out contrast when there's one really bright light source in a scene.
posted by thedevildancedlightly at 1:59 PM on May 30, 2005
Gorgor: From what I understand, it works like this:
A standard digicam picture assigns each pixel a brightness from 0 to 255. 0 is the blackest your monitor can produce, 255 is the brightest. If, for example, you take a picture with 6 pixels, of brightnesses:
0, 50, 100, 150, 200, 255
0 will be the black point and 255 will be the white point. Now, let's say it's too dark, so you lighten it up in Photoshop by 50 brightnessotrons.
50, 100, 150, 200, 250, 255.
Now you darken it again by 50 points
0, 50, 100, 150, 200, 205
It isn't the same image it started as.
Now, add to that the idea that you're taking a picture of a room with white sheets, a grey carpet, and a black chair. Perhaps your camera decides to parse it like this:
Black chair = 0
Grey carpet = 120
White sheets = 255
Ok, fine. Now, what about if there's a lamp turned on? It should be considerably lighter, but it will also be 255, same as the sheets.
Or, you could get the camera to meter off the average, and you might have:
Black chair = 0
Grey carpet = 50
White sheets = 100
Lamp = 255
You now have much less gradation in the majority of the room (the carpet is closer to the darkness of the chair, for example).
Now add...the sun. That's right, camera pointing right at window, in same room, with sun coming in.
Black chair = 0
Grey carpet = 1
White sheets = 2
Lamp = 50
Sun = 255
Well, now everything's in there, but the whole room, except for the sun, is incredibly dark. If you try to lighten it, you get:
Black chair = 100
Grey carpet = 101
White sheets = 102
Lamp = 150
Sun = 255
Lighter, but now instead of the room being all dark, it's all greyish.
And on and on.
So with HDR, you take a true picture (or, I should say, a closer-to-true picture), and your screen / computer just parses which parts of it to show. So your picture might be:
Black chair = 100
Grey carpet = 220
White sheets = 340
Lamp = 700
Sun = 3500
Your computer parses everything 255 or over as "as white as the screen can display", and everything 0 and under as "as black as the screen can display". If you move the parameters, it might show everything over 500 as "white as possible", and everything under 245 as "black as possible". Adjusting the brightness of the image doesn't change the image, as it does in the 6 pixel color adjustment example above. You can take a picture of your friend's bedroom, and you won't be able to get both the sunlight and your interior light on the screen at the same time, but they will be in the image at the same time. One single photo will contain all the detail, instead of taking multiple photos adjusted for different brightnesses.
So, right there, you have a lot of usefulness as far as image manipulation.
Then add to that the fact that there are HDRI screens (prohibitively expensive, I assume). If you have one and look at the image, the white sheets and the white sunlight won't be the same, like they are on your computer. Instead, the white sheets will be white, and the sun will be WHITE!! Apparently not painfully bright, but really frickin' bright. Not only will the image retain the detail for use in later manipulation, but the screen will actually show the full range of the image, instead of clipping everything 255 or higher to the same shade of white.
As for the sphere...well, I don't know.
posted by Bugbread at 2:31 PM on May 30, 2005
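The brighten-then-darken example above, run both ways, for anyone who wants to see the clipping happen: the 8-bit version loses the 255 pixel for good, while a floating-point (HDR) version can be pushed around and recovered exactly.

```python
# Bugbread's six-pixel example: 8-bit values clip at 255 and the lost
# highlight never comes back, while unbounded HDR values survive the round trip.
import numpy as np

pixels = np.array([0, 50, 100, 150, 200, 255], dtype=np.float32)

# 8-bit workflow: every adjustment re-clips to the displayable 0-255 range.
lifted = np.clip(pixels + 50, 0, 255)
print(np.clip(lifted - 50, 0, 255))   # [  0.  50. 100. 150. 200. 205.] -- 255 became 205

# HDR workflow: store unbounded values, clip only when finally displaying.
lifted_hdr = pixels + 50              # 305 is a perfectly legal HDR value
restored = lifted_hdr - 50
print(np.clip(restored, 0, 255))      # [  0.  50. 100. 150. 200. 255.] -- nothing lost
```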
funny. I thought I saw this from a metafilter link, but it was just from the forums that those previous Half-Life 2 links brought me to.
bugbread, thanks for your explanation.
it trampled my preconceptions, as I thought I kind of understood how it worked, but now think quite otherwise.
posted by Busithoth at 4:44 PM on May 30, 2005
Video games are going to look like this before you know it.
I've seen a couple of sites (sorry, don't know where they are now) that had supposed screenshots of upcoming PlayStation 3 games...and yes, they are absolutely mind-blowing. Not exactly photorealistic, but damn close. Imagine the clarity of Pixar-movie CGI; that seems to be what the PS3 will do. I haven't bought a game system...well, ever. I've just mooched from friends. This PS3, I'll buy, no question.
/rubs hands in glee
posted by zardoz at 4:59 PM on May 30, 2005
Keep in mind that Sony has quite a..."reputation"...for using fakish screenshots (they used a prerendered scene from Final Fantasy VIII as an example of a "real-time-rendered" scene. Once you do that, you may as well just take photos of real actors and call it an example of a "real-time-rendered" scene, since, after all, your PS2 can play DVD video).
Not saying it won't be mind-blowing, just that the screenshots you saw should be totally ignored. Maybe it'll be great, and maybe it'll suck, but either way the screenshots are completely unreliable in forecasting that.
posted by Bugbread at 5:08 PM on May 30, 2005
Also, it seems like this would be the perfect thing for motion pictures.
Hehe, yeah, they've thought of that :) If you watch the making-of for any visual-effects-heavy movie you'll usually see someone in the background with a mirrored sphere; this is what they are for.
Most of the time it's not as easy as these shots, though. If you want a hard shadow to cast across the object, it requires more work. Objects interacting with the environment adds another level of complexity.
posted by phyle at 5:24 PM on May 30, 2005
Seems like an interesting way to get around the tendency of some DigiCams to wash out contrast when there's one really bright light source in a scene.
Not really. Merging multiple exposures of the same image to compensate for dynamic range loss is old, old, old tech that is absolutely, stunningly unimpressive when you take into account that all your photos have to be shot with a tripod, and your subjects can't move at all. So no wind, no living beings, etc., etc.
What's being done here is absolutely amazing.
posted by Civil_Disobedient at 5:57 PM on May 30, 2005
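For the curious, here is roughly what that multi-exposure merge looks like: a heavily simplified sketch (the classic treatment is Debevec and Malik's 1997 paper) that assumes the frames are already aligned and linear, and ignores the camera response curve entirely.

```python
# Heavily simplified multi-exposure merge: several aligned shots at different
# shutter speeds become one floating-point radiance image by dividing each
# frame by its exposure time and averaging, trusting only well-exposed pixels.
# Ignores the camera response curve, alignment, and moving subjects.
import numpy as np

def merge_exposures(frames, exposure_times, low=0.05, high=0.95):
    """frames: list of aligned H x W x 3 arrays scaled to [0, 1]; times in seconds."""
    acc = np.zeros_like(frames[0], dtype=np.float64)
    weight_sum = np.zeros_like(acc)
    for img, t in zip(frames, exposure_times):
        img = img.astype(np.float64)
        # Trust the mid-tones; skip pixels that are nearly black or blown out.
        w = ((img > low) & (img < high)).astype(np.float64)
        acc += w * img / t                     # back to (relative) scene radiance
        weight_sum += w
    return acc / np.maximum(weight_sum, 1e-6)  # H x W x 3 HDR radiance map
```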
bugbread - you know for a fact that the FFVIII footage was pre-rendered? Ken Kutaragi implied during the press conference that it was real time... if we're talking about a processing engine with twice the calculating power of the Xbox 360, it's not unbelievable to me that the demos we saw coming out of Sony were, in fact, real-time (well, maybe not the Killzone one...)
posted by jonson at 7:29 PM on May 30, 2005
I understand HDRI fine, but I don't quite understand how the HDRI photograph is obtained. Do CCDs with that kind of exposure latitude even exist? I've only ever heard of HDRI being generated by CG rendering (where such a thing is easy -- just increase the lighting precision) but not applied to "real-world" photographs (except for simple cases where a few photographs are combined to have different exposures for different parts of the photo).
gorgor_balabala: I don't think HDRI would help with your problem with the bedroom lighting. You were dealing with what's known in the photography world as a "mixed-lighting situation". There's not much that can be done photographically to avoid it (although most color films these days are designed to even out the color balance), so the best solution is simply to change the situation so the light is no longer mixed. One way would be to place a large tinted correcting gel over the window, so the color of the light coming from the window is the same as the indoor lighting. Or you could go the opposite way and gel the indoor lights instead...
posted by neckro23 at 8:50 PM on May 30, 2005
(Oh wait. Nevermind. I suspected it was done like this...)
posted by neckro23 at 8:54 PM on May 30, 2005
Um, that's a very complicated method.
Here's a much easier one.
And it looks like Photoshop CS2 has this feature built-in. I guess I should get around to installing it.
posted by Civil_Disobedient at 8:12 AM on May 31, 2005
Off topic: Every time I see one of these types of forums, it makes me happy that Matt never decided it would be cool for us all to have a signature block appended to our comments. I mean, there's a guy dropping a 650 x 400 px image of his car every time he posts. What, I ask, is the point of that shit? Waste of friggin' bandwidth, visually unappealing, distracting, and even the clever / funny ones get old the zillionth time you see them.
posted by caution live frogs at 10:34 AM on May 31, 2005
all your photos have to be shot with a tripod, and your subjects can't move at all. So no wind, no living beings, etc., etc.
They'll fix that, it's only software. Photoshop already will line up images for you if they're "close enough."
posted by kindall at 10:55 AM on May 31, 2005
Photoshop already will line up images for you if they're "close enough."
There's a big difference between being slightly off-axis and a moving subject. I do believe they'll solve this problem, however. In the short term, camera manufacturers are going to patch their SLRs to enable user-customized bracket definitions. Right now, you can only auto-bracket a stop or so in either direction--soon enough you'll see tweaks where people auto-bracket 3 stops in either direction so they can machine-gun a large exposure latitude without risking movement (too much).
Later, software manufacturers will start using sophisticated algorithms to analyze the alternate pictures and determine how dark and light areas should be adjusted. This will be far more complicated than just "lining them up," however (if it's going to be of any use).
Eventually, camera manufacturers will come to the rescue with more sensitive CCDs.
posted by Civil_Disobedient at 11:18 AM on May 31, 2005
I saw the Sunnybrook HDRI monitor once. Yes, it's freakin' amazing.
I can't believe how many people in the CG world get "HDRI" wrong. The acronym is all you need to remember - high dynamic range. It's not just about environmental lighting.
posted by tomplus2 at 3:50 PM on May 31, 2005
1. If by using the word "misinformation" I implied that anyone was lying or being deliberately deceptive, I apologize. That wasn't my intent. I just thought that people were getting the wrong idea, and misinformation was the word I used. Sorry about that.
2. uncle harold's explanation of the use of a mirror ball was excellent.
3. bugbread's explanation of HDRI was also excellent.
4. I hadn't checked the Sunnybrook HDRI monitor link, thanks for pointing that out.
5. sorry for responding so late.
posted by shmegegge at 5:35 PM on May 31, 2005
From the linked-to page:
"All you need is: ... Big program called 3D Studio Max."
That's like saying "You can make a nuclear weapon in your garage. All you need is: 1) A large amount of pre-processed weapons-grade plutonium, 2) Several dozen people with PhDs in high-energy physics..." 3DS Max is one of the high-end 3D apps out there, costing several thousand dollars and requiring many hours to learn. And that doesn't even include the renderer! (Of course, many people download it illegally, but that still doesn't stop it from taking a long time to learn.)
Shots like this (which would need radiosity, sub-surface scattering and a few other things to be truly photoreal) will only be truly revolutionary when it doesn't take a multi-thousand dollar program and dozens of hours' experience to create them. When that happens, photos will no longer be acceptable in court as evidence...
All of which is not to discount the images on that site. They really are very well done.
posted by jiawen at 4:03 AM on June 1, 2005
when it doesn't take a multi-thousand dollar program and dozens of hours' experience to create them
Well, hours of experience will help in any artistic endeavor. But the free renderer is already here: POVray. Here's an example. Another and another. A couple more of my faves.
Here's the hall of fame gallery.
posted by Civil_Disobedient at 4:26 AM on June 1, 2005
Oh, and from the faves picture: how it was done.
posted by Civil_Disobedient at 4:28 AM on June 1, 2005
POVray is a great program. (I'd already listed it on my list of 3D apps, with a pretty good review.) It takes a long time to learn, though -- and does it do HDRI? I know it can do radiosity, at least through YafRay, but I hadn't heard of it doing HDRI.
And: that's why I was careful to say "... and dozens of hours' experience to create them". I was actually thinking more of Blender, but either way, there's a steep learning curve. In the case of making photoreal renders, dozens of hours' experience doesn't just help; it's a positive necessity.
posted by jiawen at 3:10 PM on June 1, 2005
This thread has been archived and is closed to new comments
posted by Smart Dalek at 5:41 AM on May 30, 2005