Because DRAM doesn't get frostbite.
February 21, 2008 10:12 AM
Whole-disk encryption defeated with canned air. [via.]
Main site, with video.
The security of whole-disk encryption rests on the fact that the secret key used to read the encrypted disk resides in the computer's main memory and thus disappears when the power is removed. It was widely believed that this disappearance occurred quickly enough not to be trivially exploitable, an assumption that has proven incorrect.
Fascinating. I am perpetually impressed when people come up with these completely left-field exploits that no normal user would have ever thought of.
That said, if the person has the physical access to my computers to do this sort of thing, I think there is a fair chance that I have greater concerns.
And in conclusion: this is why you should wire your entire machine with thermite1 if you are really worried about security.
1: This is a joke: please don't do this, it won't end well.
posted by quin at 10:35 AM on February 21, 2008
It's just as well, really. FDE was going to be the answer to lost laptops containing sensitive data, but this just overlooked the poor practices that led to those laptops having data on them they shouldn't have had.
posted by tommasz at 10:35 AM on February 21, 2008
Physical access side-channel attack circumvents security. Film at 11.
Mikey-San: the whole point of disk encryption is that the content of the disks can't be read even with physical access to the drive. That is, you turn off your computer, and the data is lost until you re-enter your password.
If you're confident in the physical security of the drive itself, there is no reason to bother with encryption, you can just use the operating system to lock down access to files.
posted by delmoi at 10:37 AM on February 21, 2008 [2 favorites]
But, Mikey-San, isn't whole-disk encryption specifically designed with physical access in mind?
posted by krinklyfig at 10:37 AM on February 21, 2008
Fluorocarbons foil formerly formidable fortification.
posted by aerotive at 10:40 AM on February 21, 2008 [18 favorites]
This only poses a problem if someone takes a powered up system.
posted by zeoslap at 10:40 AM on February 21, 2008 [1 favorite]
Interesting video, though I'd imagine that disabling booting to the USB port might thwart your lazier thieves -- you'd need some tech skills to crack into a stolen laptop without damaging it and still get to the RAM chips in time to freeze them.
nitpick: I believe this is for any disk encryption, not just full-disk. IIRC FileVault doesn't do full disk, and Truecrypt does both volumes/partitions and full disk.
posted by middleclasstool at 10:41 AM on February 21, 2008
And yes, my dumb ass just realized that some laptops give easy-peasy panel access to the chips.
posted by middleclasstool at 10:44 AM on February 21, 2008
quin writes "Fascinating. I am perpetually impressed when people come up with these completely left-field exploits that no normal user would have ever thought of."
Normal users don't need whole-disk encryption. Well, it might be nice if we all had it, but this isn't really meant for the average household. In the applications where this is used, I can see how being able to exploit any such system at all would be worthwhile, at least for the exploiter.
posted by krinklyfig at 10:45 AM on February 21, 2008
zeoslap: It also works if the system is in sleep or hibernation mode.
The paper discusses some countermeasures, and the physical ones sound best. Virtually the entire attack can be circumvented by having a small piece of memory soldered and epoxied to the motherboard and placing all important cryptographic keys in that memory. The epoxy would provide both greater physical security and prevent the attaching of probe leads to the chip. Additionally, the chip could be programmed to wipe itself when the power is cut, using power from an onboard capacitor.
posted by jedicus at 10:46 AM on February 21, 2008 [3 favorites]
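jedicus's capacitor-backed, self-wiping key store is easy to picture in code. Below is a minimal C sketch of just the wipe, assuming the key lives in a small dedicated buffer; the power-fail interrupt and the onboard capacitor are hardware details that can only appear as comments here, and every name is made up for illustration.

    /* Sketch of a "wipe the keys when power is cut" routine. On real
     * hardware this would live in firmware and run from a power-fail
     * interrupt, drawing on an onboard capacitor; here it is only a
     * host-side illustration of a wipe the compiler cannot optimize away. */
    #include <stdio.h>
    #include <stdint.h>
    #include <stddef.h>

    static uint8_t key_store[16];   /* stand-in for the dedicated key memory */

    /* Volatile-qualified writes keep the compiler from dropping the wipe
     * as a dead store -- the same idea behind explicit_bzero(). */
    static void zeroize(volatile uint8_t *buf, size_t len)
    {
        while (len--)
            *buf++ = 0;
    }

    /* In firmware this would be the body of the power-fail ISR. */
    static void on_power_fail(void)
    {
        zeroize(key_store, sizeof key_store);
    }

    int main(void)
    {
        for (size_t i = 0; i < sizeof key_store; i++)
            key_store[i] = (uint8_t)(0xA5 ^ i);     /* pretend key material */

        on_power_fail();                            /* simulate the power cut */

        for (size_t i = 0; i < sizeof key_store; i++)
            if (key_store[i] != 0)
                return 1;
        puts("key store zeroized");
        return 0;
    }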
Mefites Say "Film at 11" Frequently, film at 11.
posted by cortex at 10:46 AM on February 21, 2008 [3 favorites]
I am perpetually impressed when people come up with these completely left-field exploits that no normal user would have ever thought of.
OK, here's a left-field defence, then: glue the chips in place using some kind of superglue. This prevents them from being removed & taken offsite in a liquid nitrogen thermos. This means the attacker has to work quickly, wherever the machine is located.
Next, ensure that a loss of power causes data in DRAM to be overwritten. This should be part of the shutdown process, and in cases of a catastrophic loss of power, a reserve battery could handle the task.
Then, set up a timer that prevents the machine from being restarted until after the "seconds to minutes" that it takes for DRAM data to fade. This covers the cases in which for some reason the data isn't overwritten during shutdown.
Finally, prevent people from messing about with the DRAM-eraser reserve battery & startup delay timer, by wiring them up as integral components of a small plastic explosive bomb.
Quite straightforward, really.
posted by UbuRoivas at 10:51 AM on February 21, 2008 [2 favorites]
Though this brings up an interesting question: could you perform this same attack on the SRAM cache of a CPU? Presumably after a failed log-in attempt the cryptographic key would be in the cache, whereupon you could chill the CPU, pop it out, and dump the cache, no? SRAM is still volatile, but is it significantly more volatile than DRAM?
posted by jedicus at 10:53 AM on February 21, 2008
cortex: "Mefites Say "Film at 11" Frequently, film at 11."
Which seems like a lot seeing as I've never once actually heard a newscaster say that.
posted by octothorpe at 10:53 AM on February 21, 2008
Well thank god they posted this on the internets for everyone to know.
posted by ZaneJ. at 10:54 AM on February 21, 2008
isn't whole-disk encryption specifically designed with physical access in mind?
I guess I'm not shitting my shorts because it seems like an inevitability: if the drive is encrypted, and the encryption key is stored in memory, someone will find a way to get to it somehow.
It is pretty impressive to get to it at all, and especially to do it with compressed air, so I guess I snarked too soon.
Mefites Say "Film at 11" Frequently, film at 11.
Hey, no fair using the new search against me! :P
posted by secret about box at 10:54 AM on February 21, 2008
Thermite is a more traditional self-destruct mechanism than plastic explosive, UbuRoivas.
posted by hattifattener at 10:54 AM on February 21, 2008
Well thank god they posted this on the internets for everyone to know.
Security matters should be open.
posted by secret about box at 10:55 AM on February 21, 2008
Thermite is a more traditional self-destruct mechanism than plastic explosive, UbuRoivas.
I just keep a tiny man inside my PC with a tiny can of gasoline and a tiny can of frozen orange juice concentrate. His name is Tiny Robert Paulson.
His name is Tiny Robert Paulson.
posted by middleclasstool at 10:59 AM on February 21, 2008 [6 favorites]
Very interesting (also note that Gene Spafford is one of the commenters on the blog post), and it shows the danger of making assumptions like "the data disappears from RAM once the power is cut."
In the short run, I think you could beef up security by programming the FDE product to overwrite the key in memory whenever the computer locks itself -- i.e. screensaver, sleep, hibernate, powerdown, etc. That would keep someone from grabbing the computer after you'd turned it off or 'made it secure' and extracting the key.
However, that would still not eliminate the risk of someone getting the computer while it was running, with the key in memory, and then just unplugging it (or pulling the RAM out of the running machine), preventing the wipe from taking place, and then recovering the key. But I think that's much less of a security risk.
The major vulnerability is that someone could shut their computer off, think that they've made it safe and that the data on the drive is encrypted, and then walk away from their desk. That an attacker could access the data if the computer is running and unsecured isn't as big a deal -- you just train users not to walk away from their machines without at least screenlocking them (and thus purging the key from memory).
An attack that works only against a running, logged-in, unlocked machine isn't much of an attack. One that works against a machine that's been locked or powered down is much more severe.
It's really a design flaw on the part of FDE products if they rely on the nature of the RAM chips to purge the key, rather than wiping them purposefully.
posted by Kadin2048 at 10:59 AM on February 21, 2008
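To make Kadin2048's suggestion concrete, here is a minimal C sketch of a purge-on-lock policy: the driver throws the key away on lock, sleep, hibernate or shutdown and re-derives it from the passphrase on unlock. The event names, the derive_key() stub and the overall driver shape are hypothetical, not any real FDE product's API.

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>
    #include <stdbool.h>

    enum power_event { EV_LOCK, EV_SLEEP, EV_HIBERNATE, EV_SHUTDOWN, EV_UNLOCK };

    static uint8_t disk_key[32];
    static bool key_present = false;

    static void purge_key(void)
    {
        volatile uint8_t *p = disk_key;     /* volatile: wipe can't be elided */
        for (size_t i = 0; i < sizeof disk_key; i++)
            p[i] = 0;
        key_present = false;
    }

    /* Placeholder only: a real driver would run the passphrase through its KDF. */
    static void derive_key(const char *passphrase)
    {
        for (size_t i = 0; i < sizeof disk_key; i++)
            disk_key[i] = (uint8_t)(passphrase[i % strlen(passphrase)] + i);
        key_present = true;
    }

    static void on_event(enum power_event ev, const char *passphrase)
    {
        switch (ev) {
        case EV_LOCK: case EV_SLEEP: case EV_HIBERNATE: case EV_SHUTDOWN:
            purge_key();                /* nothing left in DRAM to freeze */
            break;
        case EV_UNLOCK:
            derive_key(passphrase);     /* key exists only while unlocked */
            break;
        }
    }

    int main(void)
    {
        on_event(EV_UNLOCK, "correct horse battery staple");
        printf("unlocked, key in RAM: %s\n", key_present ? "yes" : "no");
        on_event(EV_LOCK, NULL);
        printf("locked,   key in RAM: %s\n", key_present ? "yes" : "no");
        return 0;
    }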
An excellent elaboration at news.com of all places.
posted by Skorgu at 10:59 AM on February 21, 2008
Mefites Say "Film at 11" Frequently, film at 11. - cortex
What is this film of which you speak of?
posted by Mister_A at 11:00 AM on February 21, 2008
Hey, no fair using the new search against me! :P
Heh. Not just you; I only went and searched because I've noticed a little run of them lately and got to wondering. I wasn't expecting to see quite so many instances (though there's some noise in those results), but I wasn't shocked either.
posted by cortex at 11:02 AM on February 21, 2008
Well thank god they posted this on the internets for everyone to know.
Would you rather they didn't say anything and you remained blissfully unaware and unable to respond to this problem while the "bad guys" went about exploiting it? And to assume the "bad guys" wouldn't know about attacks like this unless it was posted on the internets would be naive.
Honestly, this shouldn't come as a surprise to anyone who works closely with computers. Part of the reason behind whole-disk encryption (and not just per-file encryption) is that the contents of swap files can be parsed for information just like the RAM, except it's on disk and much more permanent. Now, rather than attacking the "ram" on disk, you attack the ram hardware.
One thing we'll probably see in the next year or two: new memory products which include on-chip capacitors to zap the contents of memory as soon as power to the chip is lost.
posted by ruthsarian at 11:08 AM on February 21, 2008
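ruthsarian's point that swap files and RAM images can be parsed for secrets is easy to demonstrate. The toy C scanner below flags high-entropy windows in a raw dump, which is roughly where random key bytes stand out from ordinary data; it is a crude stand-in for the key-schedule search the researchers actually describe, it will also flag compressed or encrypted regions, and the window size and threshold are arbitrary. Compile with -lm.

    #include <stdio.h>
    #include <math.h>

    #define WINDOW 256

    /* Shannon entropy of a buffer, in bits per byte (maximum 8.0). */
    static double shannon_entropy(const unsigned char *buf, size_t len)
    {
        size_t counts[256] = {0};
        double h = 0.0;
        for (size_t i = 0; i < len; i++)
            counts[buf[i]]++;
        for (int b = 0; b < 256; b++) {
            if (!counts[b]) continue;
            double p = (double)counts[b] / (double)len;
            h -= p * log2(p);
        }
        return h;
    }

    int main(int argc, char **argv)
    {
        if (argc != 2) { fprintf(stderr, "usage: %s dumpfile\n", argv[0]); return 1; }
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }

        unsigned char win[WINDOW];
        long offset = 0;
        while (fread(win, 1, WINDOW, f) == WINDOW) {
            double h = shannon_entropy(win, WINDOW);
            if (h > 7.5)   /* arbitrary "looks like key/crypto material" cutoff */
                printf("candidate at offset 0x%lx (entropy %.2f bits/byte)\n",
                       offset, h);
            offset += WINDOW;
        }
        fclose(f);
        return 0;
    }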
>>Insert Theme from Mission Impossible
/theme
posted by The Light Fantastic at 11:23 AM on February 21, 2008
So you're saying that the MacBook Air having the RAM chips soldered on to the motherboard is actually a FEATURE?
posted by blue_beetle at 11:23 AM on February 21, 2008
"Mefites Say "Film at 11" Frequently, film at 11."
Which seems like a lot seeing as I've never once actually heard a newscaster say that.
News programs haven't used film in decades, film at 11.
posted by Sys Rq at 11:28 AM on February 21, 2008
I hope that all the Film at 11 "huh?" stuff is all snark, but since it keeps coming up..:
Before local news stations went to ENG -- electronic news gathering, so, you know, videotape -- news footage was shot on film cameras. Network stuff was 16mm but you even saw the occasional super 16 as well as (gasp) 8mm for the really cheap local stations. Anyway, so it was captured on film, which means that a) the film had to be processed, b) edited, c) put together with titles or whatever for broadcast. Couldn't just bring the tape back in and put it on the air.
So you had teasers on the afternoon/early evening shows saying "Holy shit! A bear just mauled an entire park full of people! Film at 11", and they were letting you know that perhaps yeah, maybe there was a tiny cub disturbing a campfire, but they don't have the footage ready yet so they're going to show this to you at 11pm.
Since my snark meter is probably broke, I'll run away now. And GTFO my lawn.
posted by cavalier at 11:33 AM on February 21, 2008 [3 favorites]
Though this brings up an interesting question: could you perform this same attack on the SRAM cache of a CPU? Presumably after a failed log-in attempt the cryptographic key would be in the cache, whereupon you could chill the CPU, pop it out, and dump the cache, no? SRAM is still volatile, but is it significantly more volatile than DRAM?
In the comments of the first link the SRAM issue is addressed. One of the authors of the paper says that there is some demonstrated retention of memory in SRAM, but significantly less.
(Please note that I have no idea what I'm talking about, merely parroting what I read.)
posted by OmieWise at 11:37 AM on February 21, 2008
In particular, judging from your link, cortex, Mefites say "Mefites say "film at 11" frequently, film at 11." frequently, film at 11.
posted by Anything at 11:38 AM on February 21, 2008
Heh, no, I dig it, cavalier. I gave a (less thorough) explanation about a month ago to someone who was genuinely confused by it. Which, come to think of it, may be what primed me to notice the recent little uptick.
Also, I didn't like your lawn anyway, man. I'm gonna go get a slurpee. Wevs.
posted by cortex at 11:38 AM on February 21, 2008
"Film at 11" also doesn't apply here in the flyover states. Our newscasts are typically at 10pm and over by 10:30pm. By 11, we're usually hitting the REM cycle, watching Leno or HBO.
posted by KevinSkomsvold at 11:41 AM on February 21, 2008
Apparently, some people believe that any attack which has a possible countermeasure is a useless attack, as if all systems implement the countermeasure and there are no counters to the countermeasure.
Yes, so this attack has countermeasures. What is your point? It is still a good attack, applicable in some instances and damn interesting in and of itself.
posted by Bovine Love at 11:43 AM on February 21, 2008 [1 favorite]
So, if I left my laptop running, and then allowed it to be stolen, the subsequent loss/theft of data is somehow a disk encryption deficiency? No boss/client in the world is going to buy that excuse; unless of course your boss is anything like Tommy Chong's character on That 70's Show.
posted by Brocktoon at 11:46 AM on February 21, 2008
Well thank god they posted this on the internets for everyone to know.
"The unscrupulous have the command of much of this kind of knowledge without our aid; and there is moral and commercial justice in placing on their guard those who might possibly suffer therefrom."
From Locks and Safes: The Construction of Locks by A. C. Hobbs, 1853.
posted by Kid Charlemagne at 11:47 AM on February 21, 2008 [21 favorites]
"The unscrupulous have the command of much of this kind of knowledge without our aid; and there is moral and commercial justice in placing on their guard those who might possibly suffer therefrom."
From Locks and Safes: The Construction of Locks by A. C. Hobbs, 1853.
posted by Kid Charlemagne at 11:47 AM on February 21, 2008 [21 favorites]
Heh. Truth be told, my lawn is actually looking like crap these days. Some kind of Day of the Dead meets Lost World thing going on out there. But that's an AskMefi I'm planning for another day.
Ahem, on topic, this was pretty neat to read. Yeah, my gut reaction too was "Hey! Physical access != Security", but then I remembered an encrypted file was supposed to defeat that. Cool read.
posted by cavalier at 11:48 AM on February 21, 2008
blue_beetle:
Yes. It's a security feature that Apple accidentally built into that machine. They are cutting edge, yo.
posted by daq at 11:53 AM on February 21, 2008
DRAM basically stores the bit as a charge on a tiny capacitor. And -- like all capacitors-- they leak, so periodically the system needs to refresh the charge on the cells. (Hence "dynamic" RAM.) This leak depends (like just about everything) on temperature -- the colder, the slower. It's really cool that they managed to take advantage of that!
SRAM is likely to be much harder, because the information is usually stored as a current in a certain part of the circuit. The current dies really quickly when power is removed. To attack SRAM some side-effect will have to be exploited, and it's likely to depend strongly on the specific kind of chips used. So if you're truly paranoid, just switch over to 100% SRAM! (Actually I don't know if they even use SRAM for main memory these days since it's much more expensive.)
posted by phliar at 11:59 AM on February 21, 2008
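phliar's charge-leak explanation can be turned into a back-of-the-envelope model: treat the chance that a cell still holds its charge after t seconds as exp(-t / tau), with tau growing as the chip gets colder. The tau values below are invented for illustration and are not measurements from the paper; only the trend (colder means slower decay) is the point. Compile with -lm.

    #include <stdio.h>
    #include <math.h>

    struct scenario { const char *label; double tau_seconds; };

    int main(void)
    {
        /* Hypothetical retention time constants -- illustrative only. */
        struct scenario scenarios[] = {
            { "operating temperature",        2.0 },
            { "canned-air chill",            60.0 },
            { "liquid nitrogen",           3600.0 },
        };
        double times[] = { 1, 5, 30, 60, 300 };

        for (size_t s = 0; s < sizeof scenarios / sizeof scenarios[0]; s++) {
            printf("%s\n", scenarios[s].label);
            for (size_t i = 0; i < sizeof times / sizeof times[0]; i++) {
                double retained = exp(-times[i] / scenarios[s].tau_seconds);
                printf("  after %4.0f s: ~%.1f%% of cells still intact\n",
                       times[i], 100.0 * retained);
            }
        }
        return 0;
    }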
I had a teacher in college tell me he worked on a chip that did encryption. They had to continually cycle the key through different memory locations to prevent any image of the key from being left in memory.
posted by jeblis at 12:10 PM on February 21, 2008 [1 favorite]
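A rough C sketch of that key-cycling trick: the key never sits in one buffer, or in one bit pattern, long enough to leave a lasting image. The two-slot layout and the timer hook are guesses for illustration, not the actual chip design.

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>
    #include <stdbool.h>

    #define KEYLEN 16

    static uint8_t slot_a[KEYLEN], slot_b[KEYLEN];
    static bool in_slot_a = true;     /* where the key currently lives */
    static bool inverted  = false;    /* whether it is stored bit-flipped */

    static void wipe(volatile uint8_t *p, size_t n) { while (n--) *p++ = 0; }

    /* Would be called from a periodic timer on the hypothetical chip. */
    static void rotate_key(void)
    {
        uint8_t *src = in_slot_a ? slot_a : slot_b;
        uint8_t *dst = in_slot_a ? slot_b : slot_a;
        for (size_t i = 0; i < KEYLEN; i++)
            dst[i] = src[i] ^ 0xFF;   /* move and bit-invert in one pass */
        wipe(src, KEYLEN);
        in_slot_a = !in_slot_a;
        inverted  = !inverted;
    }

    /* Recover the real key bytes whenever they are actually needed. */
    static void read_key(uint8_t out[KEYLEN])
    {
        const uint8_t *src = in_slot_a ? slot_a : slot_b;
        for (size_t i = 0; i < KEYLEN; i++)
            out[i] = inverted ? (uint8_t)(src[i] ^ 0xFF) : src[i];
    }

    int main(void)
    {
        for (size_t i = 0; i < KEYLEN; i++) slot_a[i] = (uint8_t)(i * 7 + 3);
        uint8_t before[KEYLEN], after[KEYLEN];
        read_key(before);
        for (int tick = 0; tick < 5; tick++) rotate_key();
        read_key(after);
        puts(memcmp(before, after, KEYLEN) == 0 ? "key survives rotation"
                                                : "BUG: key corrupted");
        return 0;
    }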
Mefites Say "Film at 11" Frequently, film at 11. - cortex
What is this film of which you speak of?
Ocean's Eleven.
posted by UbuRoivas at 12:24 PM on February 21, 2008
This should not really be a concern, we just need to establish a National Canned Air Registry and ensure that, much like purchasing Sudafed, only people who provide personal information and a photo ID can ever get their hands on this terrorist tool. And of course, this needs to be added to the TSA list of banned substances.
posted by donovan at 1:02 PM on February 21, 2008 [7 favorites]
This should not really be a concern, we just need to establish a National Canned Air Registry and ensure that, much like purchasing Sudafed, only people who provide personal information and a photo ID can ever get their hands on this terrorist tool. And of course, this needs to be added to the TSA list of banned substances.
When we outlaw canned air, only the outlaws will have canned air.
posted by secret about box at 1:42 PM on February 21, 2008 [1 favorite]
Fact is, once someone has physical access to the machine, the data is at risk. If the machine is taken out of its home environment (a data center, the bag of the owner, etc.), it is, theoretically, only a matter of time before I get the data if I really want it (brute force on one hand, though something more elegant may come into play).
My guess, though, is that the average stolen laptop is taken not for the data, but the value of the machine itself. The encryption prevents some random bozo from getting a bonus from the theft.
Personally, I like the thermite idea.
posted by MrGuilt at 1:49 PM on February 21, 2008
"Film at 11" is the new "I'm an asshole."
posted by shmegegge at 1:50 PM on February 21, 2008 [1 favorite]
If anyone is interested, here is what one of the paper authors had to say about applying the attack to the SRAM cache of a CPU:
"We haven't looked into remanence in the CPU cache, though it's known that SRAM does retain data for significant periods after power loss. As you say, it would probably be harder to extract the data, since the caches are flushed on boot and are not easy to transfer to other systems."
For my part, I believe it would require a custom BIOS or special purpose hardware to extract the cache data, but it does show that merely locking down the RAM may not be enough.
posted by jedicus at 1:51 PM on February 21, 2008
When we outlaw canned air, only the outlaws will have canned air.
Outlaws like Mel Brooks?
posted by Sys Rq at 1:56 PM on February 21, 2008
MrGuilt: Fact is, once physical access to the machine, the data is at risk. If the machine is taken out of its home environment ...
Yes, yes, but there is a continuum here; it isn't all or nothing. Real life isn't nearly so black and white. The cold boot methodology can -- at least on some machines-- be done quickly and is non-invasive. The only trace is that the machine has been rebooted, hardly a completely unexpected event (though zero trace certainly would be better). If one was to be clever, they would leave it in BSOD state. The attack is relatively benign looking; you can stand there looking like a tech or the like. So the user, leaving -- for example -- a locked machine might think they have a reasonable level of security (certainly not bullet proof, but not bad expectation assuming the laptop is still there when they return). This can bypass that, and furthermore can retrieve keys and other info that otherwise we would believe are only accessible via a trojan/virus, with possibly admin access. Most people would think locking their screen and using disk encryption to be a fairly reasonable level of security barring password theft; this makes that pretty weak.
It is pretty damn clever, giving full forensic-style memory snooping ability without installation of software and completely bypassing normal levels of defense.
posted by Bovine Love at 2:04 PM on February 21, 2008
Outlaws like Mel Brooks?
There's a joke about encryption keys and luggage combinations here, I just know it.
posted by secret about box at 2:05 PM on February 21, 2008 [2 favorites]
OK, here's a left-field defence, then: glue the chips in place using some kind of superglue. This prevents them from being removed & taken offsite in a liquid nitrogen thermos.
Take the whole motherboard then. Or simply use tin snips to snip away the tiny section of the board containing the chips.
Your strategy also means that if one memory chip goes bad, you need to throw out the whole board. 'Course, we do that already a lot but then this sort of attitude is why our descendants, if there are any in the long-term, will hate us.
Near as I can see, only the "special tiny memory chip for protected data" strategy is at all a good counter to this.
posted by lupus_yonderboy at 2:51 PM on February 21, 2008
Well, my first thought on reading it was to have some data written over the DRAM on power off or power loss. Seems that several other people thought of the same thing. If that were implemented, how likely would accidentally-triggered overwrites be in normal usage?
posted by caution live frogs at 2:51 PM on February 21, 2008
shmegegge: "Film at 11" is the new "I'm an asshole."
That may be a little harsh, but I tend to agree.
I think it's the arrogant know-it-all attitude that is projected by those "film at 11" comments that I dislike. Every time I see a "film at 11" comment, I have to picture a pimply teenager who can't get laid and tries to compensate by acting all cool on the internets.
posted by sour cream at 3:06 PM on February 21, 2008
Man, that's cold.
posted by Roger Dodger at 3:30 PM on February 21, 2008
I like how the imagined citizenship of the Internet consists almost entirely of pimply, mommabasementing virgins. There's a certain hopeless charm about the endurance and stasis of that caricature, pimply virgins all of them for a decade or two already now: like The Picture of Dorian Gray but multiplied by a million and with more Mountain Dew.
posted by cortex at 3:40 PM on February 21, 2008 [3 favorites]
Heh, so, what does happen if you flag cortex for a derail? :)
posted by Bovine Love at 4:02 PM on February 21, 2008
This is really old news with regard to secure computing and the exploit has been well known for some time. If you have physical access to the machine there are all sorts of easier and more reliable methods of snooping encryption methods. For example it would be much easier to just plug a logic analyzer into a spare memory slot and examine the system at your leisure.
There is an entire science devoted to computing security and a well established testing and certification process for commercial applications. Take for example the microcomputers embedded in point of sale terminals where you swipe your credit card. The processor is enclosed in a ball grid array package so that the pins can't be easily probed. All of the critical pins are in the center of the ball array so that they can't be probed from the side. The circuits on the chip are covered with metal layers so that you can't pop the cover off for probing. The metal layers are connected to on-chip tamper detection switches to detect if someone tries to remove the metal layer. There are on-chip temperature detectors to deter the exploit in the OP article. There are on-chip voltage detectors to determine if someone tries to use under or over voltage to trip up the system. There are on-chip clock detectors to detect underclocking, overclocking, or glitch injection. All of the encryption keys and algorithms are stored in on-chip SRAM that cannot be probed. If any of the tamper detectors are tripped, the entire SRAM is zeroed out in a few microseconds.
Designers have to submit their systems to an independent laboratory for certification testing. The well-equipped laboratory examines the system for vulnerabilities. A passing grade is based on the cost of the equipment required to breach the system and the number of highly skilled engineering hours required. For example it might require $100,000 of test equipment and four weeks of engineering time.
A standard desktop computer could never approach the level of security in a simple point of sale terminal.
posted by JackFlash at 4:22 PM on February 21, 2008 [5 favorites]
Heh, so, what does happen if you flag cortex for a derail? :)
I bet that he'd probably just look at the flag and chuckle and consider it a point taken.
And then look up whoever submitted it and add them to The List.
posted by cortex at 4:26 PM on February 21, 2008 [1 favorite]
For example it would be much easier to just plug a logic analyzer into a spare memory slot and examine the system at your leisure.
Again, someone missing good portions of the point. For starters, you'd have to have a logic analyzer. The method described here can be done with only an external USB drive. Secondly, you would have to pop open the machine (with it running); this has additional risks, not to mention being kind of obvious. Not to mention walking around with a logic analyzer.
And again we get the disdainful "big deal" without apparently considering the details of actually swiping the data from a real, live, used machine not owned by wildly paranoid hacker/security types. Oh yes, and again we get the exhaustive list of countermeasures which are not employed on 99% of the world's computers, the odd one of which just might be of interest to someone.
posted by Bovine Love at 4:45 PM on February 21, 2008
Thank you all but I'll stick to the method of waterboarding the guy until he gives you the password.
I had a platoon of these nerds helping me with the hardware stuff, but I killed them all because they were too snarky.
posted by uandt at 4:50 PM on February 21, 2008
Today canned air scored a decisive victory in the long term struggle with rivals canned heat and canned laughter.
posted by Tube at 4:55 PM on February 21, 2008
And yet the criminals have defeated those point of sale terminals, or else they wouldn't be swapping them out with their own when the cashiers are not looking.
posted by Iax at 5:05 PM on February 21, 2008
Here is a more practical method. It doesn't require you to have a place where you can read the ram later.
Note, not from a research group:
http://www.wiebetech.com/videos/HPLT_Demo.php
The canned air paper is crap.
posted by about_time at 7:19 PM on February 21, 2008
I get the feeling you didn't read the paper. The link you show is great stuff, if you want to take the computer away (which sometimes you do) and if it isn't already locked with a password you don't know, which can genuinely create some serious hassle. Again, if you want to leave it in place and scan the memory, non-intrusively, the "canned air paper" is pretty damn clever. Please provide a scintilla of evidence that it is "crap".
posted by Bovine Love at 8:16 PM on February 21, 2008
pimply virgins all of them for a decade or two already now
pimples already at age ten? must be all that mountain dew.
posted by UbuRoivas at 10:07 PM on February 21, 2008
I'm one of the coauthors of the paper. I'm pretty happy that we've sparked some interesting conversation. I'm also happy to answer any specific questions that people may have if they've read the paper (or at least watched the video).
posted by ioerror at 11:03 PM on February 21, 2008
Nice to see you here, ioerror. Very nice exploitation of the property; the cold boot method in particular was some nice out-of-the-box thinking.
posted by Bovine Love at 5:18 AM on February 22, 2008
Well, there goes my hidden kiddie porn collection.
posted by wastelands at 6:52 AM on February 22, 2008
Heh. I just realized in the video that though they're cracking a Vista Ultimate laptop, the successful h4xx0r screen is actually Ubuntu. Script supervisor to the set, please.
posted by middleclasstool at 7:17 AM on February 22, 2008
They reboot Vista, copy the RAM and crack it in Linux.
posted by Bovine Love at 7:41 AM on February 22, 2008
middleclasstool the decryption key lets them decrypt and mount the drive (under linux), but it doesn't give them the actual password they'd need to unlock it via windows.
Also, the secret files were brilliantly named, if anyone else caught that in the video.
ioerror I actually do have a question: for ECC ram my understanding is that the BIOS wipes the entire chip on bootup, so you'd need a special-purpose device (or patched BIOS) to recover the data. Am I right in assuming that such a special purpose device would consist of an ARM/FPGA chip, a bunch of RAM sockets and a couple of bored EEs for an hour?
Awesome work btw, I'm such a crypto fanboy.
posted by Skorgu at 7:43 AM on February 22, 2008
*forehead slap*
posted by middleclasstool at 8:55 AM on February 22, 2008
I am sorry for calling the paper crap. It's not crap. The result is clever and it's a good contribution. The paper is written with care and the experiments are done well. It is unfortunate that I posted something without careful thought. Here is a more careful take on the paper.
With physical access to a machine, the ram is available. As the paper notes, many OSes swap ram out to disk unencrypted and keys can be found there as well, and RAM can also be dumped from a firewire port using DMA without access to the OS. So, I hope that this becomes a standard technique for forensic investigators, but it's not a significant advance over the Firewire/DMA dump technique, and I don't think it changes things more than the video I posted above from a small company trying to address a similar aspect of the problem. So the paper is a contribution, but it is an incremental contribution.
In practice, people don't lock machines, especially on their home computers. Anyone who cares enough to lock their screen might now be aware of this attack. They can set their machine to hibernate to an encrypted portion of disk with a key that is entered by the user upon return. In fact, this can be done quite carefully. Some techniques for doing this are mentioned in the paper but they are dismissed as "not often the default behavior" or for other reasons. That's not a strong argument, in my view.
To truly evaluate the strength of this attack, the paper should be concerned with the strongest adversary. I'd say the strongest adversary would use something like this Nicholson et al paper which swaps everything out when the user leaves the room and uses encrypted memory. See section 4.2 on physical memory specifically. Yet, I don't see this defense referenced even though it first came out in 2002.
posted by about_time at 6:27 AM on February 23, 2008
That's a thought.
It's probably easier to pull a dongle from a drawer than to remove a chip from a motherboard though.
posted by Sys Rq at 1:50 PM on February 23, 2008
Thanks, about_time. That's mighty nice of you.
I do think this changes things a great deal. A firewire attack can be prevented; as an example, the firewire security mode can be enabled on a Mac OS X machine, and this has been documented. Furthermore, our attacks can be performed in software only, using off-the-shelf hardware or the hardware of the target if the attacker is feeling lazy.
To call this an "incremental contribution" leads me to believe that you don't fully understand the problem set. This is pretty common. I've heard people call the attack impractical. I've heard people call the attack impossible. I've even heard people say that it's outright lies. These statements are uncreative and incorrect.
I have personally compromised a number of people who were totally sure they were secure. I personally printed their AES keys out on the screen and mounted their drives. That's just for laptops.
You're incorrect when you say people don't lock machines. I see many people locking their machines and I have performed this attack against just that type of machine as well as others.
You state:
"They can set their machine to hibernate to an encrypted portion of disk with a key that is entered by the user upon return. In fact, this can be done quite carefully. Some techniques for doing this are mentioned in the paper but they are dismissed as "not often the default behavior" or other reasons. That's not a strong argument, in my view."
Can they? How is that? With Mac OS X, that certainly isn't the case. Safe Sleep can encrypt it with 128bit AES but the key is obfuscated and not actually protecting things as a user would probably like.
Regarding Nicholson et al, you're correct in saying that they address issues of memory capture attacks. I think this paper is pretty nice from what I've read. You point out section 4.2 and I think it has a nice suggestion but not in the face of our attacks. It's totally impractical but it's a nice idea. If you have no network, it may be possible to use it. It would break many running processes on the system. I might personally use this on my laptop with my crypto disk setup if it was workable. However, I think it's worth mentioning that this proposed system doesn't exist.
Upon thinking about their solution, they propose wireless keying. Using what? Bluetooth? The broken and totally insecure protocol that everyone uses? And what prevents sniffing of the link? And where are the link keys? In memory on the system..? And what prevents you from a well timed seizure?
The key to breaking their system lies right here:
"When the user returns, the device will detect the return of the token's heartbeat. At this point, it needs to unencrypt the main memory, so the frozen processes can resume. To do so, it ships the encrypted key Kk(Kmem) to the token, which uses the key-encrypting-key to return Kmem to the device. After unencrypting RAM and restarting the frozen processes, execution resumes normally."
Do you see the problem with this solution? It seems clear to me that they're shifting the risk around but they don't actually solve it. It seems that you could thwart this by dumping all of the system memory, encrypted and not. The next step would simply be to sniff the bluetooth link or to acquire the token.
All of this is pretty silly though - I want to see it deployed in the wild. You show me a workable system and I'll tell you if I think we could break it. Currently, systems in the world that people use are all in bad shape.
posted by ioerror at 6:46 PM on February 23, 2008 [1 favorite]
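For readers following the Nicholson et al. exchange, here is a bare-bones C rendering of the key-release flow quoted above, with XOR standing in for the real key-wrapping cipher (a placeholder, not the scheme's actual primitive). What it makes visible is the weak point ioerror describes: after the heartbeat exchange, Kmem is back in the laptop's RAM, and in this naive rendering it also crosses the wireless link.

    #include <stdio.h>
    #include <stdint.h>

    #define KLEN 16

    static void xor_buf(uint8_t *dst, const uint8_t *a, const uint8_t *b)
    {
        for (int i = 0; i < KLEN; i++) dst[i] = a[i] ^ b[i];
    }

    int main(void)
    {
        uint8_t Kmem[KLEN], Kk[KLEN], wrapped[KLEN];
        for (int i = 0; i < KLEN; i++) { Kmem[i] = (uint8_t)(i + 1); Kk[i] = (uint8_t)(0x40 + i); }

        /* Setup: the laptop wraps Kmem under Kk, hands Kk to the token,
         * and keeps only the wrapped blob Kk(Kmem). */
        xor_buf(wrapped, Kmem, Kk);

        /* User walks away: the scheme erases Kmem from laptop RAM and
         * freezes processes; only `wrapped` remains on the machine. */

        /* User returns, heartbeat detected: laptop -> token: wrapped blob. */
        uint8_t released[KLEN];
        xor_buf(released, wrapped, Kk);   /* token unwraps with Kk ... */
        /* ... token -> laptop: Kmem travels back over the wireless link
         * (sniffable unless that link is itself protected), and ... */

        /* ... Kmem once again sits in laptop RAM -- exactly the material
         * a well-timed memory grab would recover. */
        int ok = 1;
        for (int i = 0; i < KLEN; i++) if (released[i] != Kmem[i]) ok = 0;
        puts(ok ? "Kmem restored to laptop RAM" : "unwrap failed");
        return 0;
    }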
Afroblanco, you asked: "What if the encryption key was stored on a hardware dongle that was read directly by the HDD controller? It seems like you could work it out so that the key was never actually stored in RAM at any time."
I'm not sure that I understand. Are you suggesting that AES be done in the drive controller? And then memory map the key into the drive controller somehow? Is that correct?
posted by ioerror at 6:48 PM on February 23, 2008
Skorgu: "ioerror I actually do have a question: for ECC ram my understanding is that the BIOS wipes the entire chip on bootup, so you'd need a special-purpose device (or patched BIOS) to recover the data. Am I right in assuming that such a special purpose device would consist of an ARM/FPGA chip, a bunch of RAM sockets and a couple of bored EEs for an hour?"
ECC memory is supposed to be wiped by the memory controller at startup so that the check bits start out consistent and errors can be detected cleanly afterwards. It's up to whoever programs the memory controller to ensure this happens.
A recovery device could be built from a normal motherboard that is electrically and physically compatible with the memory of the target system. Install LinuxBIOS and modify it slightly to record, rather than zero, memory. Easy as pie. Because you control the BIOS entirely, these attacks should be possible without losing _any_ memory to software issues. Decay would be the main enemy of such a recovery effort.
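As an aside, measuring that decay is trivial once you have a known fill pattern and the image you read back after the power cut -- a quick toy sketch (my own code, not from our tools):

    def decay_rate(expected: bytes, recovered: bytes) -> float:
        # Fraction of bits that flipped between the pattern written before
        # power-off and the image recovered afterwards.
        assert len(expected) == len(recovered)
        flipped = sum(bin(a ^ b).count("1") for a, b in zip(expected, recovered))
        return flipped / (8 * len(expected))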
Another device could be a small socket that is compatible with the memory, as I mentioned above. It seems reasonable that any way you'd like to read it out, including with an FPGA, would be possible. I'll ask David Hulton if he wants to help build one ;-)
If you're concerned about data loss on the chips when choosing between those two (or other options), you could easily build a DRAM charging device. Though we haven't built one, we mention it: a socket with a battery or other power source that keeps whatever chips are placed in it powered. Then you have all the time in the world.
"Awesome work btw, I'm such a crypto fanboy."
Thanks!
posted by ioerror at 6:56 PM on February 23, 2008
Also, about_time, I forgot something.
How do you propose applying the token auth suggestions from section 4.2 of the paper for a server farm?
posted by ioerror at 7:10 PM on February 23, 2008
shmegegge writes "'Film at 11' is the new 'I'm an asshole.'"
Well, "Film at 11" is hardly new; I'd be surprised if the first Captain Obvious post on Usenet didn't include the phrase.
posted by Mitheral at 8:11 PM on February 23, 2008
To call this an "incremental contribution" leads me to believe that you don't fully understand the problem set.
That is precisely the problem, I think. People seem to think fairly consistently in a problem space that is convenient, but not always realistic. In particular, they often ignore aspects of the problem that are not directly technical, like exposure, complexity, hardware requirements, deniability, and traceability. This method has many very nice attributes; its one serious flaw is that having their machine rebooted may make someone suspicious, although I still feel that leaving a machine in a BSOD would raise little suspicion on Windows. I particularly like the low hardware requirements and the banal, deniable aspect of the hardware; in general, getting caught with it would raise little or no suspicion. The exposure is bad, as it is with all direct physical attacks, but it has a much lower profile than many others. Even if employed in a sneak-and-peek in a home, there is little chance of disturbing things excessively. Of course, there may be other keys latent in memory that are of interest as well.
People also tend to think in terms of security expert vs. attacker, but of course that isn't normally the case; one sizes up the target and chooses the attack that suits the target with the least risk, and against a security expert you might choose differently. Maybe.
In any case, theoretical defenses are interesting from an academic or game point of view, but in the real world of deployed systems, this can be pretty serious fun given some physical access (emphasis on some; it isn't total).
posted by Bovine Love at 8:45 PM on February 23, 2008
Thanks for the response. Without responding point by point, let me first say that, yes, I am confident that I understand the problem set very well. I fully believe you can do the attack, and I never said otherwise. I don't know why you are lumping me in with people who called you a liar.
Respectfully, I don't agree with your evaluation and characterization of the significance of the attack. The paper states that "no simple remedy would eliminate" the risk the attack poses, and I think that is overstating things quite a bit.
The strength of this attack can be measured by the strength of the defense it can overcome. The significance of the result can be measured by the amount of new research it would take to defeat it. So let's measure what it would take to defend against the attack: I think you would agree that anyone who can encrypt the RAM and overwrite the key afterwards can defeat the attack. What new research is required to add such a routine to any current OS? None. How simple is that defense? Pretty simple.
The key used to encrypt RAM can be provided by the user when needed. In fact, a public key can be stored and encryption can be triggered by a timeout, or the encryption could be triggered by an internal temperature sensor when things get too cold (initiating a race). Whether symmetric or asymmetric, the decryption key need never be stored ahead of time, and it can be received securely via keyboard input. We don't have to worry about Bluetooth security or acoustic attacks that listen to key presses, etc.; those issues are orthogonal to the issue at hand.
Now, this routine will take some time, so the user is vulnerable until he executes the encryption routine. The canned-air attack relies on physical presence: the computer owner is either removed with physical force, or you wait for him to leave the room. So in either case the defender needs physical security that can hold off physical force for as long as it takes to encrypt all the memory; I'd say that is tens of seconds with some optimization. In other words, the defender runs the encryption routine any time he senses a threat and any time he wants to leave the physically secured room.
To put it simply, we can alter "screen lock" -- which you say people run all the time -- to lock not just the screen but the entire memory. The extra effort here is not a problem for the user: they are already locking their screen and using an encrypted drive. Someone just needs to alter the screen lock program.
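A minimal user-space sketch of what I mean, in Python (toy code under my own assumptions: a real implementation would live in the kernel and cover key schedules, page tables, and the page cache, and every name here is made up):

    import os, hashlib
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def derive_key(passphrase: bytes, salt: bytes) -> bytes:
        # 32 bytes from PBKDF2-HMAC-SHA256, used as an AES-256-GCM key.
        return hashlib.pbkdf2_hmac("sha256", passphrase, salt, 200_000)

    def lock(sensitive: bytearray, passphrase: bytes):
        # Seal the sensitive buffer and wipe the plaintext; only ciphertext
        # (plus salt and nonce) stays resident while the screen is locked.
        salt, nonce = os.urandom(16), os.urandom(12)
        key = derive_key(passphrase, salt)
        sealed = AESGCM(key).encrypt(nonce, bytes(sensitive), None)
        for i in range(len(sensitive)):           # best-effort overwrite
            sensitive[i] = 0
        return salt, nonce, sealed

    def unlock(salt: bytes, nonce: bytes, sealed: bytes, passphrase: bytes) -> bytes:
        key = derive_key(passphrase, salt)        # re-derived from the user, not from RAM
        return AESGCM(key).decrypt(nonce, sealed, None)

As a rough sanity check on "tens of seconds": assuming something like 100-200 MB/s of software AES throughput on a 2008-era core, 4 GB of RAM takes on the order of 20-40 seconds single-threaded, and less with multiple cores.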
Yes, a server farm is harder, but I think it would be pretty hard to open each blade in a server farm and blow compressed air into it before the RAM-encryption routine, running in parallel across the farm, completes. A broadcast command on the network could initiate the routine when a physical break-in is detected.
While perhaps no current OS exists that has such a nifty RAM-encrypting screen lock, it's clear that it is possible to create one with little effort. Moreover, no new techniques are needed to design or code such a piece of software. In other words, known techniques defeat your attack.
Too often, security research is more focused on defeating commodity hardware, software, and products than on developing new techniques that push the state of the art. It's not a convincing argument in other research communities in computer science to ask which product does this now, or to say "I want to see it deployed." How long until a Linux patch is out that encrypts memory upon screen lock? Will it be before the conference presentation? What then?
Like I said, I believe the result of the paper will be a new technique for forensic investigators that might end up as common practice. That's fantastic. But those who have something to hide and some smarts won't be affected.
On preview, @bovine, encrypting ram upon screen lock is not a theoretical defense. It's quite practical and not an academic ideal.
posted by about_time at 9:47 PM on February 23, 2008
On preview, @bovine, encrypting ram upon screen lock is not a theoretical defense. It's quite practical and not an academic ideal....
I look forward to seeing it in wide availability soon. No? It remains academic.
The strength of this attack can be measured by the strength of the defense it can overcome. The significance of the result can be measured by the amount of new research it would take to defeat it.
Again, you do not understand the problem set. Not all security problems are the big game of security guy vs. attacker. Not all attacks have to be "strong"; sometimes things like needing no custom hardware make them much better than other attacks, even if they are "weak". You apparently judge it from a purely academic, gaming point of view. Actual attacks have a host of problems, and -- in fact -- strong defenses are not a common problem, since they are very rarely deployed. Very routine, poor defenses like locking your screen turn out to be surprisingly difficult problems, particularly when coupled with disk encryption. Sure, every guy who reads a lot of security will tell me of 100 ways to defeat that, yet almost none of those ways are really practical in the real world. And for every attack, every guy who reads security will pooh-pooh it and tell me of 1000 defenses, none of which are deployed and probably never will be. Meanwhile, people continue using unpatched IE.
One of the curiosities I find is that a lot of "security" people seem to believe that someone actually cares about their data. I'll bet the intersection of people with interesting data and people with strong, knowledgeable defenses is a very, very small set indeed outside of military/intel (and one could even have some arguments there). As much as it makes a great game, high-capability attackers vs. high-capability defenders is largely an academic exercise.
Incidentally, you say "home users rarely lock their machines". Why do you restrict your problem set to home users?
posted by Bovine Love at 9:57 AM on February 24, 2008
Oh, the thought I forgot: I think you are much too caught up on the term "significant". It isn't really useful; almost anything can be spun back to "incremental", and quite a few things can be spun to be "significant". In security, pretty much everything has a defense and is also built upon previous research, so by that standard nothing is significant. I don't see the term bandied about much concerning this work either, though I would consider it fairly significant from a deployability point of view. If it doesn't get deployed, it will probably be due to too much uncertainty about the bit-rot rate of a given machine; although if one were hitting a bunch of machines trying to get any one (or more) of them, rather than targeting a specific machine, it becomes more practical -- but that is pretty rare for physical attacks, since the exposure is generally much too high.
posted by Bovine Love at 10:20 AM on February 24, 2008
Bovine Love writes "One of the curiosities I find is that a lot of 'security' people seem to believe that someone actually cares about their data."
It's not that I think anyone in particular cares about my data. It's more that someone might care. That recent Hong Kong celebrity nude photo flap is an example of what I'm talking about.
posted by Mitheral at 11:18 AM on February 24, 2008
Bovine, this is an academic paper published in an academic conference. High-quality academic research falls under certain standards that are higher than alt.2600 and defcon. The attack described in this paper is trivially defeated with known methods. You can choose to evaluate it under non-academic standards, but I suspect the authors from Princeton submitted their paper to an academic conference for a reason.
posted by about_time at 11:50 AM on February 24, 2008
High-quality academic research falls under certain standards that are higher than alt.2600 and defcon.
Ahh, snark. I think you are seriously misinformed about standards of evaluation, and you clearly have little respect for others' standards.
posted by Bovine Love at 12:11 PM on February 24, 2008
We are done here I think.
posted by about_time at 12:33 PM on February 24, 2008
It is too bad, really; it could have been interesting. I'll end my thoughts by saying that, even academically, the standards of evaluation extend beyond defensibility. There are other factors that are interesting and relevant.
posted by Bovine Love at 1:10 PM on February 24, 2008
On reflection I believe I misunderstood you. I apologize for my response.
posted by Bovine Love at 9:12 PM on February 24, 2008