The ++ operator is now illegal
November 21, 2012 10:00 PM
What does proper authorization to access a computer system mean? Robert Graham of Errata Security writes about the recent conviction of Andrew Auernheimer (aka weev) for “hacking” AT&T. Two years ago, weev discovered a bug in AT&T's website that exposed the email addresses of customers with iPads. According to weev, the flaw was handled as per responsible disclosure practices: AT&T was informed first, before the issue was made public. However, the FBI investigated and arrested him under the Computer Fraud and Abuse Act (CFAA). On 20 November 2012, he was found guilty of identity fraud and conspiracy to access a computer without authorization.
The group revealed the security flaw to Gawker Media after AT&T had been notified, as well as exposing the data of 114,000 iPad users, including those of celebrities, the government and the military.
So, he notified AT&T, then released the data to the public? I would think that was why he was convicted. What an asshole.
posted by JujuB at 10:16 PM on November 21, 2012 [4 favorites]
So, he notified AT&T, then released the data to the public? I would think that was why he was convicted. What an asshole.
This is one of the things that most discussions about responsible disclosure revolve around. Sure it was an asshole move, but given the nature of the flaw, you could very well say the data was already public anyway; anybody else could extract the data too. Is it illegal to repost publicly available information? Take the example in the article; you can hardly call incrementing a post ID a form of illegal access, and the real person to blame in such a situation is the person who built the website. If what we disagree with is not the access of the data, but the publicity of it, then shouldn't the law be targeting that, and not the "unauthorized entry"?
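(For anyone unfamiliar with how trivial the "access" in question is, here is a minimal sketch of what incrementing an ID amounts to; the site, URL scheme, and ID range are all made up for illustration.)

import urllib.request, urllib.error

# Walk a few sequential post IDs on a hypothetical site and see what the
# server hands back. Nothing here is specific to AT&T; the URL is invented.
for post_id in range(31337, 31340):
    url = "https://example.com/posts/%d" % post_id
    try:
        with urllib.request.urlopen(url) as resp:
            print(post_id, resp.status)   # 200: the server served the page on request
    except urllib.error.HTTPError as err:
        print(post_id, err.code)          # e.g. 404 if no such post exists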
Some people also argue that if they don't publicise stuff like this, the vendors just won't bother to fix the bugs. But of course his main motivation for exposing the data was self-interest; fame and publicity.
posted by destrius at 10:34 PM on November 21, 2012 [4 favorites]
Clearly I shouldn't be using the deleted posts greasemonkey script.
posted by wierdo at 10:37 PM on November 21, 2012 [23 favorites]
Someone rents storage space to people, and someone else discovers that the electronic gate system reveals the personal information of the renters on demand with a simple double key press. He notifies the guy running the storage facility, but the guy does nothing to fix it. So:
1. he could move on with his life.
2. he could pull that personal information, and use it to contact the exposed people anonymously so that they can follow up with the storage facility guy.
3. he could pull that personal information, and give it to a newspaper to prove there's an issue, so that the newspaper can publicize it so that interested parties can follow up/so the storage facility guy will be shamed into fixing the problem.
4. he could pull that personal information, and post it somewhere so that it can be accessed by folks who don't even know what the double key press is, or where the storage facility is.
1 isn't the kindest thing to do, but it is the easiest. 2 and 3 are effective ways to do the arguably "right" thing without being an asshole. 4 is being an asshole. Apparently this guy went for door number four, and got bitten for it.
posted by davejay at 10:43 PM on November 21, 2012 [13 favorites]
you could very well say the data was already public anyway; anybody else could extract the data too.
If I leave a stack of cash sitting on my windowsill, I haven't given it to the public, even if the absence of preventing the public from taking it is manifest. Failing to completely secure a server is a failure to completely secure it, not an implicit grant of permission to access it.
posted by fatbird at 10:45 PM on November 21, 2012 [4 favorites]
Are you reading this blog? If so, you are committing a crime under 18 USC 1030(a)
I wasn't trying to do that. Most of us here, most of my family, we were just trying to have a good Thanksgiving. We were sitting around, laughing about old times, enjoying each other's company. But Nana snuck off and started surfing the Web. I'm pretty sure she was the one who read that blog. Now we're all going to jail.
Fuck you, Grandma! You ruined Thanksgiving!
posted by twoleftfeet at 10:48 PM on November 21, 2012 [3 favorites]
The ++ operator is now illegal
This is not a factual statement about the consequences of this person's actions.
posted by Blazecock Pileon at 10:49 PM on November 21, 2012 [3 favorites]
If I leave a stack of cash sitting on my windowsill, I haven't given it to the public, even if the absence of preventing the public from taking it is manifest.
This metaphor breaks down for data on the internet. If it's sitting on your windowsill, it's sitting on your windowsill. There are legal demarcations saying where your property begins and ends.
There is no explicit difference between typing in one URL and getting data someone intends to be public (http://www.metafilter.com), and typing in another URL and getting something that wasn't meant to be public (http://www.metafilter.com/some_directory_mathowie_forgot_to_delete_containing_a_list_of_everyones_email_addresses).
I agree with davejay, however. The means of publishing was an asshole move.
posted by Jimbob at 10:49 PM on November 21, 2012
But money you find on the street isn't your money either.
posted by Authorized User at 10:56 PM on November 21, 2012 [2 favorites]
If I leave a stack of cash sitting on my windowsill, I haven't given it to the public, even if the absence of preventing the public from taking it is manifest. Failing to completely secure a server is a failure to completely secure it, not an implicit grant of permission to access it.
That's not really an appropriate metaphor for this case though... it's more like if you leave a stack of cash sitting on your windowsill, and somebody comes along and takes a photograph of it, then posts the photo on twitter saying "hey this guy left a stack of cash sitting on his windowsill!". That's obviously a shitty thing to do, but again, is it illegal? And which part is illegal, taking the photo or publicising it on the internet? In weev's case, he's being charged for taking the photo.
The ++ operator is now illegal
This is not a factual statement about the consequences of this person's actions.
Yeah... It was a quote from a snarky comment by a computer security guy on twitter. I kind of regret using it as a title now.
posted by destrius at 10:59 PM on November 21, 2012
There is no explicit difference between typing in one URL and getting data someone intends to be public (http://www.metafilter.com), and typing in another URL and getting something that wasn't meant to be public
This is a good point, but I'm not sure I'm ready to accept that there's no such demarcation on the Internet, if only because we can sensibly talk about this, not simply as weev collecting data, but as a failure to secure something that was supposed to be secured. Intuitively we understand that weev wasn't supposed to do what he did, and that under normal circumstances it would have been prevented.
Put another way, a successful SQL injection attack that spits out the user table is likewise a demonstration that the information was de facto publicly available, and yet we have no problem understanding that as crossing a virtual property line. In fact, by your definition, any security flaw at all becomes, for practical purposes, a public offering.
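(For concreteness, here is the classic shape of such an attack, sketched with an invented table and form field; the point is only that the server does exactly what it is asked.)

# A deliberately vulnerable query built by pasting user input straight into SQL.
# The table and column names are invented; this only shows the shape of the attack.
username = "alice' OR '1'='1"                                  # attacker-controlled form input
query = "SELECT * FROM users WHERE name = '" + username + "'"
print(query)
# -> SELECT * FROM users WHERE name = 'alice' OR '1'='1'
# The WHERE clause is now always true, so the query returns the whole user table.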
posted by fatbird at 11:01 PM on November 21, 2012 [2 favorites]
and somebody comes along and takes a photograph of it, then posts the photo on twitter
I don't accept this is an appropriate modification of the metaphor because the point you're making, that I'm not being deprived of the money from my windowsill, isn't germane here. What was lost by the taking of the data was the relative privacy of the people whose data was publicized.
posted by fatbird at 11:04 PM on November 21, 2012
Digging around a little, Wired has some more details about the exploit and series of events. It's a little more involved than guessing a URL. The mechanism isn't specified there exactly, but iPads visiting AT&T's site provide an identifying number, the ICC-ID. Could be a script call, or headers, or something else I dunno, but that's not terribly important. What is important in this case is that this was not a simple accessing of data at a given URL, but rather an active supplying of extra data from weev to AT&T's website, in addition to accessing the publicly available URL. I suspect that's the technicality that allowed him to be convicted of access without authorization.
posted by vibratory manner of working at 11:11 PM on November 21, 2012 [1 favorite]
Put another way, a successful SQL injection attack that spits out the user table is likewise a demonstration that the information was de facto publicly available, and yet we have no problem understanding that as crossing a virtual property line. In fact, by your definition, any security flaw at all becomes, for practical purposes, a public offering.
Exactly; I think this is the most interesting question to ask here... how do we draw the line? At what point does something become "hacking"?
I don't accept this is an appropriate modification of the metaphor because the point you're making, that I'm not being deprived of the money from my windowsill, isn't germane here. What was lost by the taking of the data was the relative privacy of the people whose data was publicized.
Hmm, that's true. But loss of privacy is still a bit more complex than simple theft of money.
withdrawing from the thread for now, don't want to control the conversation.
posted by destrius at 11:14 PM on November 21, 2012
There has been a shift in recent years, I feel, regarding the onus for security.
It's commonly repeated that one shouldn't rely on "security through obscurity". The first reason was that it didn't work. The second reason was that when it didn't work, you done messed up and you had no-one to blame but yourself. People were therefore encouraged to implement strong security, as strong as possible, in order to create both a technological and legal barrier against intrusion.
But we've seen a number of cases in recent years where entities who relied on "security through obscurity", when compromised, avoided all responsibility and managed to get the attackers prosecuted. The lines have been blurred because of these prosecutions - accidental discovery that you can access something unintended suddenly becomes an unauthorised security breach - when there was no security. You see something you're not supposed to, suddenly you're the one who done messed up. The same goes for bugs - there is no pressure to fix bugs, because people who expose the bugs are ignored - might as well save some money and keep quiet about the bug, don't bother fixing it, I'm sure no one will find it! So those who discover the bugs have little choice but to go public.
I kind of feel the same way about SQL injection attacks, since they've been mentioned. SQL injection attacks are old news. We know how to prevent them; sanitise your data, inwards and outwards. The SQL servers are just doing what they've been designed to do - process the data they are given as input. If your site is vulnerable to an SQL injection attack at this late stage, you are responsible as far as I can see, in the same way that a car manufacturer should be responsible for a stuck accelerator cable, not the person who experimented with putting their foot to the floor.
posted by Jimbob at 11:28 PM on November 21, 2012 [18 favorites]
On June 5, 2010, Daniel Spitler, aka "JacksonBrown", began discussing this vulnerability and possible ways to exploit it, including phishing, on an IRC channel.[8][31][32] Goatse Security constructed a PHP-based brute force script that would send HTTP requests with random ICC-IDs to the AT&T website until a legitimate ICC-ID is entered, which would return the email address corresponding to the ICC-ID.[27][30] This script was dubbed the "iPad 3G Account Slurper.
Apparently he is a member of Goatse Security. He wasn't just surfing the web and came across a page that was not intended for public viewing. They wrote a brute force script.
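(In spirit, the "Account Slurper" was probably not much more than a loop like the sketch below; the endpoint, parameter name, and ID range here are invented, and the real script was reportedly PHP, not Python.)

import urllib.request, urllib.error

# Hypothetical reconstruction of an ICC-ID enumeration loop: guess IDs against
# an invented endpoint, pose as an iPad, and keep any email address returned.
BASE = "https://example.com/ipad/lookup?icc_id=%d"
found = {}
for icc_id in range(89014100000000000, 89014100000000050):     # made-up ID range
    req = urllib.request.Request(BASE % icc_id,
                                 headers={"User-Agent": "Mozilla/5.0 (iPad)"})
    try:
        with urllib.request.urlopen(req) as resp:
            body = resp.read().decode("utf-8", "replace")
            if "@" in body:                    # crude sign that an address came back
                found[icc_id] = body.strip()
    except urllib.error.HTTPError:
        pass                                   # wrong guess; try the next ID
print(len(found), "addresses recovered")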
Metaphor:
While driving down a residential street, I aim my remote garage door opener at random garage doors to see if I can get one to open. One of the garage doors opens, I laugh at the idiot who didn't change the default security setting on his garage door. I leave his garage door wide open and drive off. Now everybody passing by can see into this garage, or enter it if they wish. I post this information on an IRC channel, for anyone else that may want to open this garage door. But I didn't do anything wrong! I didn't take anything out of the garage. The homeowner should have been more vigilant.
posted by JujuB at 11:33 PM on November 21, 2012 [7 favorites]
Could be a script call, or headers, or something else I dunno, but that's not terribly important. What is important in this case is that this was not a simple accessing of data at a given URL, but rather an active supplying of extra data from weev to AT&T's website, in addition it accessing the publicly available URL.
Still seems like a very close call to me. Would modifying part of a GET request count as a URL or not? What about a POST request? It sounds to me like they were spoofing headers - again, not a difficult thing, just part of the data your browser sends to a server every time you hit a page.
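(To illustrate how thin that line is, a minimal sketch: the same parameter can travel in the URL of a GET or in the body of a POST, and a header is just another string the client chooses to send. The URL and parameter name are placeholders; nothing is actually sent.)

import urllib.request, urllib.parse

params = {"icc_id": "89014100000000001"}       # placeholder parameter and value

# In a GET, the data rides in the URL itself...
get_url = "https://example.com/lookup?" + urllib.parse.urlencode(params)

# ...in a POST, the same data rides in the request body, out of the address bar.
post_req = urllib.request.Request("https://example.com/lookup",
                                  data=urllib.parse.urlencode(params).encode())

# And a "spoofed" header is just another field the client fills in.
post_req.add_header("User-Agent", "Mozilla/5.0 (iPad)")

print(get_url)          # neither request is sent; this only shows the two framings
print(post_req.data)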
posted by Jimbob at 11:36 PM on November 21, 2012
If your site is vulnerable to an SQL injection attack at this late stage, you are responsible as far as I can see
Agreed, but responsibility isn't zero-sum. My failure to secure my website from SQL injection attacks counts as a sort of malpractice or negligence; I don't see that immunizing someone using an SQL injection to steal the user data. If I leave my car unlocked and the keys inside, it doesn't immunize you from charges of car theft if you get in and drive off. My responsibility for failing to practice elementary car security causes the insurance company to refuse to cover my loss.
posted by fatbird at 11:40 PM on November 21, 2012 [3 favorites]
That's not really an appropriate metaphor for this case though... it's more like if you leave a stack of cash sitting on your windowsill, and somebody comes along and takes a photograph of it, then posts the photo on twitter saying "hey this guy left a stack of cash sitting on his windowsill!". That's obviously a shitty thing to do, but again, is it illegal? And which part is illegal, taking the photo or publicising it on the internet? In weev's case, he's being charged for taking the photo.
Or rather, it's your bank leaving a stack of peoples' cash on the windowsill, and ignoring emails saying "this is bad" until publicly shamed into doing its job.
3. he could pull that personal information, and give it to a newspaper to prove there's an issue, so that the newspaper can publicize it so that interested parties can follow up/so the storage facility guy will be shamed into fixing the problem.
3a. he could show a newspaper there's an issue without invading people's privacy, beyond a few investigatory test cases.
posted by sebastienbailard at 11:45 PM on November 21, 2012 [1 favorite]
While driving down a residential street, I aim my remote garage door opener at random garage doors to see if I can get one to open.
These metaphors are hard, aren't they? What I see is, private information (email address) was being broadcast based on unverified, unencrypted input data (a simple number that anyone could generate). If a website is sending out private information about me, it had better be because (a) I'd explicitly given it permission to, or (b) I'd logged in using secret, identifying information telling it who I was - a password, a key.
What if I could log into your Metafilter account without typing in your password, just by entering your user name? Who's to blame, me or mathowie?
It's more like being able to open any garage door with any garage door opener because the garage door manufacturer hadn't actually bothered implementing an actual security signal. But, again, evidence suggests the main aim was lulz, and getting the problem fixed a secondary concern, at best.
posted by Jimbob at 11:46 PM on November 21, 2012 [1 favorite]
Nothing quite like seeing Microsoft acknowledge Goatse.
posted by zippy at 11:47 PM on November 21, 2012 [2 favorites]
If we are going to continue the money = private information analogy, this is like your bank forgetting to lock their vault door. This guy points out to the bank that the vault door is unlocked, but they ignore him. Then this guy takes the money. He certainly still committed a crime, even if the security was lax. The bank should also be held responsible for the security flaw and should probably be sued by its customers for negligence. But the guy taking the money would still be a thief.
posted by Authorized User at 12:00 AM on November 22, 2012 [1 favorite]
If we are going to continue the money = private information analogy, this is like your bank forgetting to lock their vault door. This guy points out to the bank that the vault door is unlocked, but they ignore him. Then this guy takes the money. He certainly still committed a crime, even if the security was lax. The bank should also be held responsible for the security flaw and should probably be sued by its customers for negligence. But the guy taking the money would still be a thief.
But is his crime taking the money, or walking into the vault?
posted by destrius at 12:06 AM on November 22, 2012
Jimbob: It sounds to me like they were spoofing headers - again, not a difficult thing, just part of the data your browser sends to a server every time you hit a page.
Yeah, I agree, but it is a little muddier than the article in the main link makes it sound. It mentioned the context of the CFAA:
you had to have an explicit user account and password. It was therefore easy to tell whether access was authorized or not.
In this case, there is an explicit user account linked to the ICC-ID, used as a user ID by AT&T. The script identified itself as a given user ID, and got access to the linked account. That doesn't feel that off from the context mentioned in the linked article.
On the other hand, I do agree with the points about uneven enforcement, and I've got no doubt that the CFAA really is vague in a way that enables and exacerbates this uneven enforcement. We do need better support for security researchers, including clearer guidelines for staying on the right side of the law, and safeguards that keep that line in place even if you embarrass a powerful organization in the process. In best form, security researchers are basically a form of whistleblower.
This particular case I'm not really worked up by one way or the other.
posted by vibratory manner of working at 12:07 AM on November 22, 2012
What if I could log into your Metafilter account without typing in your password, just by entering your user name? Who's to blame, me or mathowie?
Well, both of you. You, for trying to access my account by typing in my username, which is clearly a conscious act of yours to impersonate me; and mathowie for having such lax security. Doesn't mean the consequences should be the same, or that mathowie's inattention to security exonerates your clearly probing intent.
As a practical matter, if you do it once, find it's possible, and notify mathowie, causing him to fix it, then your transgression is very small in the balance. If you proceed to log into everyone's account to modify their profile to say "I love jimbob", then your transgression is cumulatively large.
posted by fatbird at 12:09 AM on November 22, 2012 [1 favorite]
Or even better, let us assume that the bank (AT&T) is very naive and just gives out money (private data) to anyone stating an account number (ICC-ID). It would still be illegal to deliberately use this massive security flaw to get money that is not yours.
posted by Authorized User at 12:17 AM on November 22, 2012
ok ok you guys, metaphors are fun, let me try one:
It's like AT&T is a large corporation selling a telecommunications device, and each device has a unique serial number. They set things up so that you go to a website at an address containing a serial number for one such device, and the website just assumes you must be the person who bought that device and sends back a bunch of personal information about the real buyer.
In this scenario, weev is like a guy who was just goofing around on the internet one day and noticed this odd situation, and like any curious young engineer might, he fiddled around with the website until it sent him a whole bunch of peoples' personal info. He found it hilarious and egregious that such an important telecom company would do such a thing, so he told them so and then went to the press when they didn't seem to be interested in doing anything about it.
It's not like someone leaving a pile of money on their windowsill, or a bank leaving its vault door open. Those situations suggest that weev went someplace that was culturally or explicitly off limits. The email address == cash equivalence is pretty tenuous to begin with, but if we're going to equate a list of email addresses with a stack of cash it's more like the bank left the stack of cash out on the table with the deposit slips in open boxes marked with account numbers, or like the homeowner left it by the sidewalk next to the cardboard box full of free garage sale remnants. The information was on a public-facing web server, all he did was guess URLs in a way that most people who understand filesystems and web servers even a little bit have done. I certainly have.
posted by contraption at 12:18 AM on November 22, 2012 [7 favorites]
What was the intent?
posted by roboton666 at 12:18 AM on November 22, 2012
But is his crime taking the money, or walking into the vault?
In this case what has been lost from the victims is privacy, so the simple act of access is already an issue.
posted by Authorized User at 12:25 AM on November 22, 2012
or like the homeowner left it by the sidewalk next to the cardboard box full of free garage sale remnants.
Still not your money to keep. And this guy deliberately and systematically asked for e-mail addresses with an icc-id not his own, so it's a bit more involved than just stumbling on things.
posted by Authorized User at 12:32 AM on November 22, 2012
Personally, I think it should be a legal requirement for anyone handling personal data to have a policy to respond to people finding leaks and holes, alongside all the other Data Protection Act stuff (or your country's equivalent). I also think that going public with this sort of data should be prosecuted to the full extent of the law - if the organisation who has the hole/leak haven't been informed first. If they have, and they're doing nothing (as seems to me was the case) then I think you have a moral obligation to go public (though not necessarily in the way that the fellow in this instance did), and I think the fact that you informed the organisation and they did nothing should be a legal defence against prosecution.
posted by Dysk at 12:43 AM on November 22, 2012 [2 favorites]
AT&T maintained that the two did not contact it directly about the vulnerability and learned about the problem only from a “business customer.”
From the Wired article.
posted by Authorized User at 12:54 AM on November 22, 2012
Discussion about this by infosec researchers on DailyDave.
That weev is kind of an asshole is pretty much a given already; the question is more to do with whether the laws under which he was arrested are appropriate, and the consequences of the rulings in this case.
posted by destrius at 1:01 AM on November 22, 2012
Jimbob: "We know how to prevent them; sanitise your data, inwards and outwards."
Apparently we don't, if you're chastising us to "sanitise" data. Per bobby-tables.com:
There is only one way to avoid Bobby Tables attacks:
- Use parameterized SQL calls.
That's it. Don't try to escape invalid characters. Don't try to do it yourself. Learn how to use parameterized statements. Always, every single time.
The strip gets one thing crucially wrong. The answer is not to "sanitize your database inputs" yourself. It is prone to error.
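(What "parameterized" means in practice, as a minimal sketch using Python's built-in sqlite3 module; the table, column, and hostile input are invented.)

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

name = "alice' OR '1'='1"      # hostile input that would break a concatenated query

# Parameterized call: the driver treats the input strictly as a value, never as SQL.
rows = conn.execute("SELECT email FROM users WHERE name = ?", (name,)).fetchall()
print(rows)                    # [] -- the injection attempt matches nothing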
posted by pwnguin at 1:28 AM on November 22, 2012 [4 favorites]
That weev is kind of an asshole is pretty much a given already
Well, YEAH. Goatse Security.
And bonus points to their logo designer for remembering the wedding ring.
posted by radwolf76 at 1:30 AM on November 22, 2012 [1 favorite]
Kill all the whitehats! That'll scare them blackhats straight!
posted by CautionToTheWind at 1:33 AM on November 22, 2012 [1 favorite]
And bonus points to their logo designer for remembering the wedding ring.
I am not going to fact-check this statement.
posted by destrius at 1:34 AM on November 22, 2012 [3 favorites]
fatbird: Failing to completely secure a server is a failure to completely secure it, not an implicit grant of permission to access it.
But the problem is, what defines an implicit grant of permission? Is it okay to read a MeFi post? Almost certainly. But, as the article is pointing out, if article 31337 is something open to the public, but typing 31338 results in an article that's not yet linked from the home page, is that implicit permission? As wierdo says, is the MeFi Deleted Posts script unauthorized access? By this argument, it might be.
Myself, I think that if a computer lets you do something through the normal interfaces, that's implicit permission. If you're sending your name as a SQL injection attack, then that's not a normal interface, but if you're just typing in normal stuff, even if it's normal stuff that the company didn't think you'd type in, then that's implicit permission.
What's really bothersome about this verdict isn't that he was jailed, because releasing all that info publicly was clearly damaging, and of no merit whatsoever. Some jail time for that would be entirely appropriate. But it's really scary that he's been jailed under a law that's so nebulous that it can mean anything.
Laws are supposed to mean things. You're supposed to be able to know whether an action will be illegal before you take it, or the law is poorly written, or shouldn't exist at all. Conspiracy and identity fraud charges clearly don't apply here, and it is a very real wrong that he was convicted under these nebulous statutes, even if he otherwise deserved jail time. Because, the next time someone is convicted in the same way, they may not.
We shouldn't be able to throw people in jail just because we don't like them, or because they did something we don't care for. If an action wasn't clearly and obviously illegal at the time it was taken, no punishment should ensue, no matter how pissed off that makes us. If they couldn't put the guy in jail for the real harm he did, releasing the names, then they shouldn't be able to just make shit up to throw him in jail. Again, the next time they abuse those powers, it may be for someone far less deserving -- an entirely nonviolent Occupy protestor, say.
We have a specific Constitutional prohibition against ex post facto laws, and having statutes that are so nebulous that you can't tell whether or not you're breaking them ahead of time is more or less the same thing.
posted by Malor at 1:36 AM on November 22, 2012 [1 favorite]
I'd be really interested to hear people's opinions on one aspect of this that hasn't had much discussion. As mentioned, an option for the ethical hacker, once a vuln has been found and not fixed after reporting, is to go to a publication and pass the information on.
Here's a scenario that happens, exactly as I'm going to describe, on a semi-regular basis.
The publication receives this information with what looks like proof - say, a pastebin file apparently full of live user IDs. The next move is obvious - contact the company responsible for those user IDs. But it's a weekend, and nobody's answering their phones or emails. That's assuming you can find the contact information, which can be surprisingly hard even for journalists.
The publication is online and can publish at will, 24/7. What does it do now? Each step is difficult to call.
Test the information to ascertain whether it is what it seems. But this involves using the user IDs to access the service - a crime. Perhaps it's possible to find other ways to contact the people in the pastebin file, perhaps it isn't. If there are many IDs, it'll be impracticable to contact them all.
If the information looks good, then what? Publish a story saying there's been a leak, knowing that this will publicise the existence of the hacked information even if its exact location isn't revealed? Not publish until contact has been made with the company, knowing that the info is out there and someone else probably will publish anyway?
You be the editor. What do you do?
posted by Devonian at 1:48 AM on November 22, 2012
If you're sending your name as a SQL injection attack, then that's not a normal interface, but if you're just typing in normal stuff, even if it's normal stuff that the company didn't think you'd type in, then that's implicit permission.
Is someone else's unique identifier normal stuff?
If someone is convicted of trespassing after he walked through an open gate, it does not make walking through open gates illegal. Similarly, that this guy was convicted for unauthorized access does not make all access illegal. When using computers, just as in real life, you have to make judgment calls on whether the thing you are doing is right or not, and it's definitely not right "just because the computer makes it possible". This guy was clearly fully cognizant that he was not supposed to be accessing this information and yet he did.
Is there a standard way for computer security people to deal with finding exploits, and did Goatse security deal with this in this way?
posted by Authorized User at 2:03 AM on November 22, 2012
wait wait wait
so this was just a bunch of email addresses being leaked? not even the passwords, just the addresses? like, there was no money being moved around?
if so, isn't the whole "theft" analogy a bit much
i'd bring up the drugs but then someone might start talking about how they're illegal for a reason etc. etc. and i'd have to be depressed
posted by This, of course, alludes to you at 2:13 AM on November 22, 2012
fatbird, here's where I explain how computers and networks work, again. (Last time it was 802.11)
- You enter an address into your browser
- Your browser contacts the server and says "may I have this page please"
- The server may respond in one of several ways:
- Yes, here it is
- No
- No, but if you give me a password I might let you
If I explicitly set up a Web server, explicitly place documents or resources on it, and explicitly give those files out when asked for, it's hard to simply call this "a failure to completely secure" let alone how it isn't exactly "an implicit grant of permission to access it". You are right on one thing though, it isn't an implicit grant; it's an explicit one.
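(The same conversation in code, a minimal sketch; example.com stands in for any server, and the status code is its explicit answer.)

import urllib.request, urllib.error

try:
    with urllib.request.urlopen("https://example.com/") as resp:
        print(resp.status)     # 200: "Yes, here it is"
except urllib.error.HTTPError as err:
    print(err.code)            # 403/404: "No"; 401: "No, but with a password I might"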
posted by vsync at 2:22 AM on November 22, 2012 [2 favorites]
No. You have legal responsibilities beyond just doing whatever the computer lets you do. This data was obviously not meant to be public and only meant to be accessed by devices with a certain id. And this weev guy clearly knew both of these things. The fact that it was accidentally (and idiotically) made ridiculously easy to access is not a defense for accessing this information for the purpose of shaming AT&T.
posted by Authorized User at 3:19 AM on November 22, 2012
1. You walk up to a door and try the handle.
2. Your door contacts the lock (well, you know, artistic licence) and says "could you open please?"
3. The door may respond in one of several ways:
- Yes, I'm unlocked - you can open the door
- No, I'm locked
- No, but if you have a key, I might be persuaded to open
When you find my door unlocked, you need to make more than a token effort to let me know. If I'm not answering my phone, why would telling random strangers that my door is unlocked be the next logical step?
posted by pipeski at 3:28 AM on November 22, 2012 [1 favorite]
Goatse Security.
Goatse-curity: We're All About Spotting Wide-Open Holes.
posted by Mr. Bad Example at 3:32 AM on November 22, 2012
If I'm not answering my phone, why would telling random strangers that my door is unlocked be the next logical step?
Because you might be a behemoth telco who's incapable of sorting out even basic customer billing enquiries without hanging up on them twice before finally connecting them to the subcontinent. You ever tried finding an actual real number for someone useful in a company like this? Call AT&T now, see if you can get their phone tree to connect you with whoever's responsible for this exploit. It seems they made their customers' email addresses easier to find than their own!
Unfortunately, recent history has shown stunts are required to alert companies of security problems. Subtle blackmail tends to get security holes fixed quicker than politely sending emails. But, as I said above, this clearly wasn't weev's primary aim. He screwed up. But his wasn't the biggest screw up.
posted by Jimbob at 3:40 AM on November 22, 2012 [3 favorites]
I think that weev is kind of an asshole is pretty given already
But not just for his hacking behavior. He's also supposedly a bit of a racist loon. (I haven't listened to the podcasts those accusations are based on.)
posted by pracowity at 4:25 AM on November 22, 2012
This isn't a hard question at all. weev reported to both AT&T and Gawker that he knowingly gained unauthorized access to information on AT&T's servers. He confessed to the crime. The only logical way out of that is to somehow argue that he was wrong: he thought he had found a security issue, but the system is actually designed to leak email addresses.
posted by AlsoMike at 4:47 AM on November 22, 2012
To my mind, the new security flaw disclosure protocol should be as follows:
1) Create a succinct, well-commented proof-of-concept version of the "hack" (at whatever level this is a hack) which works.
2) Include #1 in a text document, along with some suggestions for fixes, if possible.
3) Append an addendum that, as reporting security flaws to business owners gets you sued and/or put in jail, security flaws will be reported to 4chan until a sort of "Good Samaritan" law is passed wherein people pointing out the flaws to the company cannot be sued or prosecuted and the company has thirty days to implement some kind of work-around or fix.
4) Disclose security flaw on 4chan.
posted by adipocere at 5:30 AM on November 22, 2012 [1 favorite]
Isn't the failure to secure their customers' personal data also a crime?
posted by srboisvert at 6:03 AM on November 22, 2012
can we all just agree that inconveniencing or embarrassing large companies is a crime
posted by This, of course, alludes to you at 7:17 AM on November 22, 2012 [5 favorites]
pipeski the fault with your analogy is that the door controls an explicitly public place. To stretch your analogy, say the door of a mall. This isn't exact of course, because each of the 100 million files stored on this web server in this business park has its own URL door, but AT&T has made each of those doors explicitly public. If I send mailings to all my subscribers advertising my home address as a bookstore/coffee shop, I shouldn't be surprised when people a) keep walking up and trying my front door and b) wander in and start reading books in my foyer if the door isn't locked. And I shouldn't be surprised even if I really meant to advertise my business across town and it's the printer who transposed the addresses. And I shouldn't be surprised just because I only wanted 100 million of my closest friends to know I was running a bookstore/coffee shop out of my home.
posted by Mitheral at 7:28 AM on November 22, 2012 [1 favorite]
How do I think you should act if you discover a security flaw?
First: have you perchance already committed a crime while discovering this security flaw, or do you feel it is likely that you will be accused of one even though you feel you haven't? Either contact a lawyer and come up with a strategy to report yourself to the police and contact whomever you hacked to perhaps reach an agreement,
or just lay low, stay quiet, do nothing, get rid of evidence and hope for the best.
If you feel this is not necessary:
Contact the flawed party (or parties, I guess) with information about the security flaw. Escalate until you receive a response of some kind. If all else fails, send a notarized letter or similar.
If you feel that the security flaw poses a great risk to other people such as customers, and the flawed party is not responding appropriately, you can either contact the relevant authorities (police, medical board, consumer protection agency, privacy ombudsman, etc.) or, if you feel that is overkill, publicize the existence of the flaw as best you can without exposing any details, being careful not to be defamatory or to reveal things you don't want to reveal. If nobody gives a shit, then that's hardly your problem.
That's it. The only way to involve 4chan is to browse it for lulz while waiting to see what happens.
Should the security flaw later lead to a massive catastrophe because the company failed to heed your advice, hope that you will be hired as an expert witness in the civil suit brought against the company by the people who suffered damages. If nothing as dramatic as this happens, you might not be remembered as an elite hacker, but if you were doing the whole thing for the fame, you should probably try to sell some kind of hacking reality show to the Discovery Channel.
posted by Authorized User at 7:45 AM on November 22, 2012 [1 favorite]
pipeski the fault with your analogy is that the door controls an explicitly public place. To stretch your analogy, say the door of a mall. This isn't exact of course, because each of the 100 million files stored on this web server in this business park has its own URL door, but AT&T has made each of those doors explicitly public. If I send mailings to all my subscribers advertising my home address as a bookstore/coffee shop, I shouldn't be surprised when people a) keep walking up and trying my front door and b) wander in and start reading books in my foyer if the door isn't locked. And I shouldn't be surprised even if I really meant to advertise my business across town and it's the printer who transposed the addresses. And I shouldn't be surprised just because I only wanted 100 million of my closest friends to know I was running a bookstore/coffee shop out of my home.
A mall has many different kinds of doors. Some lead to public areas, some lead to public areas that are only open some of the time, some lead to bathrooms which are kind of public but also quite private, some lead to offices where you might either be expected or be trespassing. Some malls may even have doors that lead to private residences. One has to judge which door is which from the context, such as possible signs, what the door looks like and what kind of an area it seems to lead into. One should not assume that any door that is unlocked, accidentally or deliberately, leads into an area where one is welcome.
And definitely, if a bunch of doors seem to lead to areas that you know are private, one should not then go through 112,000 of them to see what's behind them.
posted by Authorized User at 7:53 AM on November 22, 2012
I'm pretty sure it was here on mefi that someone posted a link to a reddit (?) thread where people were posting links to directories where yet other people were storing their movies and digital books and mp3 files. These other people had not intended these places to be public, or necessarily even open, but they were careless or ignorant. Did whoever posted the link to the reddit thread violate the law? Did I, by clicking on the link to the reddit thread? By clicking the links in the reddit thread? I don't know the answers to these questions.
posted by rtha at 8:13 AM on November 22, 2012
And definitely, if a bunch of doors seem to lead to areas that you know are private, one should not then go through 112,000 of them to see what's behind them.
What should the law be then? It's illegal to go through an open door when you know you're not supposed to? How do you prove whether or not the defendant knew he was not supposed to? How do you prove there was sufficient information about which doors are public and which are private? It might be more clear-cut in the case we're discussing, but one can easily imagine lots of scenarios where things might be a lot more vague.
Did whoever posted the link to the reddit thread violate the law? Did I, by clicking on the link to the reddit thread?
This is a good example of a vague situation. How would you know whether those directories were meant to be private or not? If AT&T had accidentally left a file containing all the emails in a "private" directory like this, would I be committing a crime by accessing the file? The way the law was used in this case seems to suggest I would be.
posted by destrius at 8:27 AM on November 22, 2012
fatbird, here's where I explain how computers and networks work, again. (Last time it was 802.11)
Okay, vsync, so here's the URL I try on this site: http://www.metafilter.com/../../../etc/shadow
We have computers to automate these processes and they follow our instructions just as an employee at my front desk might follow my instructions on what information to give out to different visitors.
- You enter an address into your browser
- Your browser contacts the server and says "may I have this page please"
- The server may respond in one of several ways:
- Yes, here it is
- No
- No, but if you give me a password I might let you
If I explicitly set up a Web server, explicitly place documents or resources on it, and explicitly give those files out when asked for, it's hard to simply call this "a failure to completely secure" let alone how it isn't exactly "an implicit grant of permission to access it". You are right on one thing though, it isn't an implicit grant; it's an explicit one.
If it works because they're using an older version of Apache that hasn't patched all its directory traversal holes, is that an explicit grant of permission to retrieve the shadow password file?
posted by fatbird at 8:30 AM on November 22, 2012 [1 favorite]
And have we done this before, vsync? Because your response was really patronizing.
posted by fatbird at 8:30 AM on November 22, 2012
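For readers who want to see the request/response exchange vsync and fatbird are arguing about in concrete form, here is a minimal sketch using Python's standard library. The URL is only a placeholder; nothing here is specific to AT&T or MetaFilter, and real browsers and servers do much more than this.

```python
# Illustrative sketch of the exchange described above: every fetch is just
# "may I have this page please?", and the server answers with a status code.
from urllib.request import urlopen
from urllib.error import HTTPError

def ask_for(url):
    try:
        with urlopen(url) as resp:
            return resp.status, resp.read()   # 200: "yes, here it is"
    except HTTPError as e:
        # 401: "no, but if you give me a password I might let you"
        # 403: "no"
        # 404: "I don't have that"
        return e.code, None

status, body = ask_for("http://example.com/1.html")   # placeholder URL
print(status)
```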
My main conclusion from this thread: computer security metaphors involving the physical world don't really work
posted by destrius at 8:34 AM on November 22, 2012 [3 favorites]
> Still not your money to keep.
See, that's where the analogy kinda breaks down. weev didn't take something away from the people whose account information he viewed, he learned something about them. Much as the big copyright concerns might want us to think otherwise, stealing someone's money and looking at their email address are two very different classes of infraction.
Let's try this: you walk into a public bank lobby and find a large book on a dictionary stand off in a quiet corner of the waiting area. You open it up and find that it contains a list of all the bank's customers by account number, showing their name and home address and other identifying information. "That's weird," you think, "I wonder if these people know they're being exposed in this way." So, you flip through the book and write down a bunch of the info, then go to the press with it after you can't find anybody at the bank branch who is willing to do anything about this apparent breach of privacy. Should you be prosecuted and sent to jail for "stealing" the information that was in the book because you flipped it open to pages corresponding to account numbers that were not your own? Because you wandered around a public area and stumbled across something that shouldn't have been made public, and that you should've guessed wasn't left out on purpose?
posted by contraption at 9:00 AM on November 22, 2012 [1 favorite]
Seems like the worst case scenario is omitted from davejay's list:
1. he could move on with his life.
2. he could pull that personal information, and use it to contact the exposed people anonymously so that they can follow up with the storage facility guy.
3. he could pull that personal information, and give it to a newspaper to prove there's an issue, so that the newspaper can publicize it so that interested parties can follow up/so the storage facility guy will be shamed into fixing the problem.
4. he could pull that personal information, and post it somewhere so that it can be accessed by folks who don't even know what the double key press is, or where the storage facility is.
5. He could quietly sell the exploit on the black market.
posted by fartknocker at 9:00 AM on November 22, 2012
What should the law be then? Its illegal to go through an open door when you know you're not supposed to? How do you prove whether or not the defendant knew he was not supposed to? How do you prove there was sufficient information about which doors are public and which are private?
Well, that's a matter for the courts and of course varies between jurisdictions. But trespassing is hardly an alien concept and definitely does not require the front door to be locked.
posted by Authorized User at 9:01 AM on November 22, 2012 [2 favorites]
Let's try this: you walk into a public bank lobby and find a large book on a dictionary stand off in a quiet corner of the waiting area. You open it up and find that it contains a list of all the bank's customers by account number, showing their name and home address and other identifying information. "That's weird," you think, "I wonder if these people know they're being exposed in this way." So, you flip through the book and write down a bunch of the info, then go to the press with it after you can't find anybody at the bank branch who is willing to do anything about this apparent breach of privacy. Should you be prosecuted and sent to jail for "stealing" the information that was in the book because you flipped it open to pages corresponding to account numbers that were not your own? Because you wandered around a public area and stumbled across something that shouldn't have been made public, and that you should've guessed wasn't left out on purpose?
Ah yes. Given enough variables, the case would become muddy indeed. But let's add some relevant information to the analogy.
Let's try this: you walk into a public bank lobby and find a large book on a dictionary stand off in a quiet corner of the waiting area. You open it up and find that it contains a list of all the bank's customers by account number, showing their name and home address and other identifying information. "That's weird," you think, "I wonder if these people know they're being exposed in this way." So, you flip through the book and write down 112,000 customers' info, then go to the press with it in order to cause damage to the bank and gain fame for yourself after you allegedly can't find anybody at the bank branch who is willing to do anything about this apparent breach of privacy. Should you be prosecuted and sent to jail for "stealing" the information that was in the book because you flipped it open to pages corresponding to account numbers that were not your own? Because you wandered around a public area and stumbled across something that shouldn't have been made public, and that you KNEW wasn't left out on purpose?
Because these are the facts in this case. And knowledge and intent are specific things mentioned in the law under which this guy was convicted. So they are very much relevant to the case.
posted by Authorized User at 9:09 AM on November 22, 2012
I just have to say that Authorized User is eponysterical. And completely wrong, but eponysterical.
There is no way to know whether a given URL is public or not except by the web server telling you after the fact. Take the common google 'hack' of certain webcams, for example. Many are in fact intended for public consumption, while others are not, yet have no password protection. Did I commit a crime when I accessed one whose public/nonpublic status was not made explicit?
How about when I used to download MP3s from public SMB shares back in the late 90s?
The Internet is not a street and a web server is not your window sill. Don't pretend it is and we'll all be better off.
posted by wierdo at 9:17 AM on November 22, 2012 [1 favorite]
AU, he wasn't convicted for disclosing the data (that's not illegal in the US); he was convicted for accessing a website that wasn't secured.
stupid gprs making me miss comments
posted by wierdo at 9:21 AM on November 22, 2012
AU, he wasn't convicted for disclosing the data (that's not illegal in the US); he was convicted for accessing a website that wasn't secured.
Ah that is correct. The leaking and personal gain is only relevant in regards to sentencing.
There is no way to know whether a given URL is public or not except by the web server telling you after the fact.
The URLs in question comprise a unique identifier that these guys knew was tied to specific iPads. Furthermore they also knew that this information was supposed to be private, and they intentionally tried to access lots more of them.
Security through an easily guessable string is still security in the eyes of the law.
Here is the complaint against Spitler and Auernheimer
Later that day, defendants Spitler and Auernheimer and other Goatse Security members discussed who in the press had disclosed the data breach to AT&T, since, contrary to the Gawker Article, neither defendant nor anyone from Goatse Security had. Indeed, defendant Auernheimer admitted as much to "Nstyr":
Nstyr: you DID call tech support right?
Auernheimer: totally but not really
Nstyr: lol
Auernheimer: i dont fuckin care i hope they sue me
So there goes any pretense of white hattery.
posted by Authorized User at 9:53 AM on November 22, 2012 [4 favorites]
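To make concrete what "trying out a different ICC-ID" means in practice, here is a rough, hypothetical sketch of that kind of ID enumeration in Python. The endpoint, parameter name, and ID range are invented for illustration; this is not the actual script described in the complaint.

```python
# Hypothetical sketch of ID enumeration against a web endpoint. The URL and
# parameter are placeholders, NOT the real ones at issue in the case.
import urllib.request
import urllib.error

BASE = "https://example.com/lookup?id={}"   # placeholder endpoint

def harvest(start_id, count):
    found = []
    for n in range(start_id, start_id + count):
        try:
            with urllib.request.urlopen(BASE.format(n)) as resp:
                found.append((n, resp.read().decode(errors="replace")))
        except urllib.error.HTTPError:
            continue   # the server said no (or had nothing) for this ID
    return found

# Incrementing the number is the whole "technique"; the argument in the
# thread is about whether doing it knowingly, at scale, against IDs that
# identify other people amounts to unauthorized access.
```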
Whether or not they were white hatting is irrelevant, IMO. And all web addresses are unique and guessable strings by design. Sometimes they're even seeming nonsense. There is no way to tell a priori.
AT&T chose to make this information publicly accessible. Why again should someone else take the fall for them? Oh, right, because people don't understand how the Internet works. And apparently people should be subject to criminal penalties because we don't like them.
There is a mechanism to inform people that particular content is only available to authorized users. at&t chose not to use it. They could have even done it without requiring a password or temporary secret, but they didn't. To me it seems like they're upset that they accidentally donated a book to the library and had the first person to check it out prosecuted.
posted by wierdo at 10:27 AM on November 22, 2012 [2 favorites]
If these guys were truly black hats, we would never have heard about this at all. All those iPad owners would simply have had their identities stolen, credit cards maxed out or whatever else folks with genuinely bad intentions could do to them, and then they'd have been lumped into the growing pool of unsolved cyber crime victims.
The only reason we're talking about this is because AT&T's feelings were hurt.
posted by fartknocker at 10:29 AM on November 22, 2012 [2 favorites]
The group revealed the security flaw to Gawker Media after AT&T had been notified
The above quote is incorrect information from Auernheimer's Wikipedia page. Here's the truth, according to Ars Technica in January 2011:
Auernheimer later admitted that he did not contact AT&T as he had told Gawker Media, but said, "i dont f**kin care i hope they sue me."
Auernheimer never contacted AT&T before spreading news of the bug in IRC and then contacting the press. That seems an important thing to keep clear.
posted by mediareport at 10:29 AM on November 22, 2012 [3 favorites]
Seems like the worst case scenario is omitted from davejay's list:
Agreed, I was just trying to collect good faith potential responses starting with "do nothing" up to what this guy did. There are certainly other bad faith options.
posted by davejay at 10:34 AM on November 22, 2012
Thankfully, mediareport, being a jackass isn't illegal. FWIW, I believe in full disclosure, but only after the company has had time to respond. Notification should not be necessary to avoid prosecution for accessing a URL, though.
posted by wierdo at 10:40 AM on November 22, 2012
I'm not arguing that point, wierdo; I was just clarifying some outdated info posted in the 2nd comment here that seems to have taken root in some folks' minds. They did not bother to contact AT&T before going public. (I went ahead and changed Auernheimer's Wikipedia page to include a reference documenting that.)
posted by mediareport at 11:00 AM on November 22, 2012
Whether or not they were white hatting is irrelevant, IMO.
Yes, I was just correcting a misconception prevalent in the thread.
And all web addresses are unique and guessable strings by design. Sometimes they're even seeming nonsense. There is no way to tell a priori.
These guys very specifically knew that they were accessing URLs created by appending the unique identifiers of iPad devices. Are you really saying that after testing their script a couple of times they didn't know they were accessing private information? Remember, they're not on trial for trying out a different ICC-ID and figuring out that there is a gaping security hole, but rather for using that security hole to gain access to private information over 9000 times.
To me it seems like they're upset that they accidentally donated a book to the library and had the first person to check it out prosecuted.
Well the first person to check it out intentionally checked it out knowing it contained private information and that the book was obviously in the library by accident. Knowledge and intent. This case is not about some innocent guy accidentally stumbling upon some proprietary information; it's about a guy who deliberately went out to find private information on AT&T's system, found it, and then knowingly accessed it a lot. And knowingly exploiting someone else's mistake, be it an accidental money transfer, an accidental release of information or whatever, is not legal and should not be.
posted by Authorized User at 11:20 AM on November 22, 2012 [1 favorite]
I disagree with Authorized User on the prosecution. It's absurd to charge these guys with a crime here, and I hope the case goes all the way to the Supreme Court which then restricts the idiotic application of a ridiculously outdated 1986 law.
posted by mediareport at 11:29 AM on November 22, 2012
The number of pages that were accessed is irrelevant. If it's illegal, one access is illegal in and of itself. weev is in fact being prosecuted for accessing a webpage. I've already explained why this is problematic: there is no way to know whether a particular resource is public before requesting it. If weev's actions are criminal, most anyone who has ever typed a URL manually is guilty of the same thing.
posted by wierdo at 11:29 AM on November 22, 2012 [2 favorites]
The number of pages that were accessed is irrelevant. If it's illegal, one access is illegal in and of itself.
It's only illegal if done with knowledge and intent. Accessing thousands of pages systematically shows intent.
there is no way to know whether a particular resource is public before requesting it.
And like I've said a couple of times already, after a couple of hundred testruns, weev definitely knew that the resource he was requesting was private. In fact the only reason he was requesting it was that it was private. We are not talking about accessing any given webpage here, we're talking about accessing this one specific system.
If weev's actions are criminal, most anyone who has ever typed a URL manually is guilty of the same thing.
You yourself said that there is no way to know whether a particular resource is public before requesting it, so no. Knowledge is key. Weev did know what he was accessing, while a person typing a manual URL did not.
The fact that these guys used a laughably simple technique (typing URLs) to intentionally access things they knew they were unauthorized to access does not mean that they were, or that someone else will be, guilty of the same crime just for typing URLs.
posted by Authorized User at 11:49 AM on November 22, 2012
And like I've said a couple of times already, after a couple of hundred testruns, weev definitely knew that the resource he was requesting was private.
I don't think your definitions of "private" and "authorized" are the same as a lot of people's here, Authorized User. To me "private" means hidden, inaccessible. The data weev accessed wasn't, by that definition, private or inaccessible. It was blatantly open. "Authorized" access suggests to me the need to provide some kind of private identification, password, key. This wasn't required either. I can see how you can say they knew they were accessing something they weren't supposed to, but the information wasn't veiled in a shroud of privacy, nor did any kind of authorization challenge have to be tricked in order to access it.
posted by Jimbob at 11:58 AM on November 22, 2012 [1 favorite]
Yeah and I'm saying that the knowledge they had matters very much indeed.
posted by Authorized User at 12:09 PM on November 22, 2012 [1 favorite]
Also the actual federal case very much stands on them providing false identification, namely the ICC-IDs of other iPads.
posted by Authorized User at 12:20 PM on November 22, 2012
"Authorized" access suggests to me the need to provide some kind of private identification, password, key.
What is the difference between these examples and the ICC-IDs that were used to gain access? Even if a password or key is weak, you aren't authorized to use it to access data, are you? If you use a brute force attack to guess passwords and gain access, that would qualify as unauthorized access, right? In what way is this different? I don't see how it is.
posted by orme at 12:21 PM on November 22, 2012 [1 favorite]
I can see your point - there is a gradient between changing ID numbers in a GET request to see what comes up (I do this all the time, and as others have said, there are popular scripts out there that do this to Metafilter to find deleted posts) and brute-forcing passwords, and I admit it's very hard to tell where on that line this falls. My main concern is that a greater share of the responsibility, as far as I can see, lies with AT&T for their weak security.
There was an issue with Facebook - possibly even ongoing - where when people delete photos on their Facebook account, the actual image isn't deleted from their servers, and if you still have the URL recorded, you can still access the photo. Now, people going around and scraping these photos are acting immorally, probably. But the fault lies with Facebook. They're the ones allowing people no privacy.
My wife's business website was attacked last week - a site I put together for her. I'm not sure how they did it; it looks like some kind of PHP weakness that allowed the attacker to concatenate extra data onto the end of files in the web directory, so that every PHP script now contained a link to some "Make Money At Home!" site at the end of it. I haven't spent any time trying to track down the attacker. I've spent my time reading up on PHP and Unix permissions, because clearly I fucked up by leaving the site insecure.
posted by Jimbob at 2:06 PM on November 22, 2012
My wife's business website was attacked last week - a site I put together for her. I'm not sure how they did it; it looks like some kind of PHP weakness that allowed the attacker to concatenate extra data onto the end of files in the web directory, so that every PHP script now contained a link to some "Make Money At Home!" site at the end of it. I haven't spent any time trying to track down the attacker. I've spent my time reading up on PHP and Unix permissions, because clearly I fucked up by leaving the site insecure.
I'm no expert in this area, but the vast majority of exploits like this that I've seen have taken advantage of SQL injection vulnerabilities. Assuming you're running your database on the same machine as your web server, this is often pretty easy. I'd recommend you do the following: don't let your database user or PHP user have write access to the web root, and use bound parameters as pwnguin described above - don't try to sanitize database input yourself.
posted by me & my monkey at 2:25 PM on November 22, 2012 [1 favorite]
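A minimal sketch of the "bound parameters" advice, using Python's built-in sqlite3 module as a stand-in for whatever database and driver the site actually uses; the table and the hostile input are made up for illustration. In PHP the equivalent would be a PDO or mysqli prepared statement; the principle is the same.

```python
# Minimal sketch of bound parameters: user input is handed to the driver as
# data, never spliced into the SQL string.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("INSERT INTO posts (body) VALUES ('hello')")

user_supplied = "1 OR 1=1"   # hostile input

# Vulnerable pattern (don't do this): the input becomes part of the SQL.
#   conn.execute("SELECT body FROM posts WHERE id = " + user_supplied)

# Bound parameter: the driver treats the whole string as a single value, so
# the injection attempt simply matches nothing.
rows = conn.execute("SELECT body FROM posts WHERE id = ?",
                    (user_supplied,)).fetchall()
print(rows)   # []
```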
orme writes "If you use a brute force attack to guess passwords and gain access, that would qualify as unauthorized access, right? In what way is this different? I don't see how it is."
Do you believe that the deleted posts script used here by a significant percentage of the user base is illegal? If not, how does it differ from what went down in the AT&T case, in your view?
Jimbob writes "There was an issue with Facebook - possibly even ongoing - where when people delete photos on their Facebook account, the actual image isn't deleted from their servers, and if you still have the URL recorded, you can still access the photo. Now, people going around and scraping these photos are acting immorally, probably. But the fault lies with Facebook. They're the ones allowing people no privacy."
And this is actually a feature of the web and web servers. I have an image directory at http://mitheral.ca/images. I dump stuff in there that I want to hot link to either on web sites or in email. Those files for the most part aren't indexed on my site or anywhere else. But I can link to them when I want and so can anyone else who has the link. Once I put those images up they stay up (barring a lack of funds in my hosting account) until I stop hosting them.
One of the fundamental problems here is that both the people making laws and the general public have very little idea how the internet works. They access Facebook and Twitter from their phones and look up movie times and transit maps and wikipedia, but they don't know how that information gets to them besides some vague awareness of something the nerds call the Internet. And they certainly don't grok that putting stuff on port 80 of a server facing the internet is the same as publishing it. A publishing venue that has essentially zero marginal cost. Sadly I don't see the ratio of the knowledgeable to the ignorant tipping the other way.
posted by Mitheral at 3:29 PM on November 22, 2012 [1 favorite]
I'm no expert in this area, but the vast majority of exploits like this that I've seen have taken advantage of SQL injection vulnerabilities.
It's also entirely possible a drupal/wordpress theme was downloaded and used to accomplish this. At one point wordpress theme hacks got so bad that the top few Google results for "wordpress theme" were galleries consisting only of hacked themes.
You'd think it'd be pretty easy to spot these things, but the list of "potentially bad" things in PHP is quite large, and there are some annoying things that can be done to mask them, like base64-encoding the worst bits so simple pattern matching won't find it. I have an idea to use Hidden Markov models to ferret out "randomized" data buried in PHP, but not the time or practice to implement it. Certainly, anything that doesn't match javascript, HTML or PHP is suspect.
What I'm saying is, you should probably archive the site; not because it's gonna be used as evidence against some non-punishable fiend in Ukraine, but because you might learn a thing or two figuring out how they do it.
posted by pwnguin at 3:29 PM on November 22, 2012 [2 favorites]
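As a rough illustration of the kind of scan pwnguin describes (flagging eval/base64_decode calls and long base64-looking literals in PHP files), here is a crude heuristic sketch in Python. It is nowhere near the Hidden Markov approach he mentions, and the webroot path is only a placeholder.

```python
# Crude heuristic: flag eval()/base64_decode()/gzinflate() calls and long
# base64-looking string literals in PHP files under a given webroot.
import re
from pathlib import Path

SUSPECT = re.compile(
    r"eval\s*\(|base64_decode\s*\(|gzinflate\s*\("
    r"|['\"][A-Za-z0-9+/=]{200,}['\"]"
)

def scan(webroot):
    for path in Path(webroot).rglob("*.php"):
        text = path.read_text(errors="replace")
        for match in SUSPECT.finditer(text):
            line_no = text.count("\n", 0, match.start()) + 1
            print(f"{path}:{line_no}: {match.group(0)[:60]}")

# scan("/var/www/example")   # point it at an archived copy of the site
```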
the difference between this and the deleted posts script is that weev provided AT&T with a string intended to identify a person (well, a person's device). That's of a different type than guessing or inferring a string referring to a page not otherwise accessible. Like Authorized User said, it's providing false identification, that's the legal difference here.
posted by vibratory manner of working at 4:32 PM on November 22, 2012
I have a lot of sympathy with the argument that incrementing a number in a public URL is not hacking, even though the author may not have intended to "publish" the information in the new URL. But conceptually, every web server is a computer designed to respond with information after being supplied with the proper input.
So is there any clear distinction between someone who accesses http://example.com/2.html because he knows that http://example.com/1.html is a valid address; and someone who accesses http://example.com/lucien.snorgasbottom?mypassw0rd because he knows that http://example.com/john.smith?1234 is a valid address and he has access to a list of users and a list of common passwords? Surely a URL is a URL. But I think it's clear that the second person is breaching access controls, weak as they are. I don't know where to draw the line between the two cases, but it doesn't have anything to do with the URL. I'd say it depends on the mindset of the person accessing the information, and proving that would be a matter for the court.
posted by Joe in Australia at 5:38 PM on November 22, 2012
fatbird:
Okay, vsync, so here's the URL I try on this site: http://www.metafilter.com/../../../etc/shadow
I would say yes, honestly. Maybe they didn't mean to but they installed software with instructions to serve a response to that request. I think much of this is a consequence of people expecting more and more things from technology faster and faster without always thinking them through, and especially without wanting to spend the time or money to assure what they think they want ensured.
If it works because they're using an older version of Apache that hasn't patched all its directory traversal holes, is that an explicit grant of permission to retrieve the shadow password file?
(Where I would make a distinction is that using credentials obtained from that file might be an issue. But we're not talking about providing any credentials, legally or illegally obtained, or even spoofed.)
When I described the sequence of events, I wasn't making an analogy. It's literally what happens and the protocols are explicitly designed that every step of the process is a request for something, and it's explicitly designed so that the server can say "no".
If you want to use technology for your business there are many ways you can go about it even without putting the server on the public Internet. But if you do, you should make sure you don't happily say "sure" to anyone that asks for any document on any of your servers. AT&T did a sloppy job and sadly the typical reaction when something like that is pointed out is to blame the person that pointed it out.
Now I wouldn't necessarily choose the path that weev chose, but do think for a moment about what would have happened if he had chosen the "responsible" path. AT&T would have fixed (clumsily patched, most likely) the specific thing that he found. Customers would have no idea their accounts might be better served by another vendor with more attention to detail. And it leads to a spiraling worsening of the entire industry, where it's hard to tell clients what's required to build a service and do it right, "because look how quickly and cheaply everyone else just throws stuff up." Yet the second a bug is found, it's the fault of the person who built it, no matter the constraints they were under.
I'm against glossing over what actually happens in situations like these: #1 because you can't just charge people with crimes for making you look bad, and #2 because lessening the responsibility of implementors to do their job right makes it harder for me, as a professional, to stand by my ethical obligations when doing this work for clients.
posted by vsync at 5:58 PM on November 22, 2012
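To make vsync's description concrete, here is a minimal sketch, in Python, of a server for which every fetch is a request it is free to refuse. It is not anything AT&T actually ran; the whitelist, port, and paths are invented for illustration.

from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical whitelist of the documents this server actually offers.
PUBLIC_PATHS = {"/", "/index.html"}

class PickyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path in PUBLIC_PATHS:
            body = b"hello\n"
            self.send_response(200)  # the server saying "sure, here it is"
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(403)  # the server saying "no"
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), PickyHandler).serve_forever()

The point of the sketch is only that refusal is built into the exchange; whether a given deployment ever answers 403 is up to whoever configures it.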
I would say yes, honestly. Maybe they didn't mean to
Where you're talking about explicitly granting permission, these two sentences contradict each other. You can't explicitly do something that you don't intend to do.
When I described the sequence of events, I wasn't making an analogy. It's literally what happens and the protocols are explicitly designed that every step of the process is a request for something, and it's explicitly designed so that the server can say "no".
It's worth pointing out that directory traversal holes are not holes in the web server itself. There's no a priori reason to disallow directory traversal, and if you've properly limited the access of the web server process, no danger in it either. Where it results in a security breach, the failure is virtually always the sysadmin failing to properly isolate the web server process.
The reason web servers prevent directory traversal is that a best practice has grown up around web serving: nothing outside a clearly demarcated root should be served, because it's conceptually easier to secure a single zone. It's convenient for the web server to say 'no' to directory traversals, but not necessary.
I'm not disagreeing that anyone making anything available on the Internet should go to great lengths to lock it down so that it does exactly what it's supposed to do, and no more. But all the high-minded technologists here pointing the finger at AT&T are, essentially, blaming the victim. When I looked at directory traversal holes again to refresh my memory, I found many fairly recent examples of PHP code allowing the same thing; pwnguin mentioned Wordpress theme hacks. Not only do you need to trust the Apache guys, the PHP guys, and the Wordpress guys that they've written secure software, but the guy from whom you bought a theme as well.
There are a lot of links in that chain, and saying "if it's available by URL, then it's your fault for not securing it properly" creates an absurdly high standard of strict liability for security breaches, such that no one who hacked your server would ever be found guilty.
Again, I don't mean to exonerate AT&T, but I don't want to exonerate Weev either. He was clearly exploiting a failure in security to harvest the personal information of others. No matter how incompetent AT&T's web guys were, Weev was clearly doing something wrong, something he knew was wrong, and it was easily demonstrated in court that he knew it was wrong, which is why he was convicted. Responsibility isn't zero sum.
posted by fatbird at 6:48 PM on November 22, 2012 [1 favorite]
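As a rough sketch of the document-root rule fatbird describes (the DOCROOT below is hypothetical), this is approximately the check that makes a server refuse ../../../etc/shadow. As noted above, the sturdier protection is running the server process as a user that cannot read that file in the first place.

import os

DOCROOT = "/var/www/html"  # hypothetical document root

def resolve(url_path):
    # Join the requested path onto the docroot and collapse any ".." segments.
    candidate = os.path.realpath(os.path.join(DOCROOT, url_path.lstrip("/")))
    # Refuse anything that escapes the docroot after normalization.
    if os.path.commonpath([DOCROOT, candidate]) != DOCROOT:
        return None  # the caller should answer 403 or 404
    return candidate

print(resolve("index.html"))           # /var/www/html/index.html
print(resolve("../../../etc/shadow"))  # None, i.e. refused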
The problem with saying that a part of the URL can be considered an access control is that there's no way for someone to know ahead of time whether or not they're doing something illegal if they generate a URL and try it.
For example, is http://www.example.com/foobar2000 a URL referencing software that one can freely download, or is foobar2000 actually a password intended to secure access to example.com? Without examining the configuration of the server and any scripts involved, you can't tell. That's why many of us are saying this is an untenable situation.
posted by wierdo at 7:40 PM on November 22, 2012 [1 favorite]
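One way to see the ambiguity wierdo describes is to put the two kinds of URL side by side. Everything below (domain, paths, token length) is hypothetical; nothing about the strings themselves tells an outsider which scheme a site intends.

import secrets

def public_style_url(post_id):
    # Guessable by construction: anyone can increment the number.
    return f"https://www.example.com/posts/{post_id}"

def capability_style_url():
    # Unguessable token: knowing one URL tells you nothing about the others.
    return f"https://www.example.com/docs/{secrets.token_urlsafe(32)}"

print(public_style_url(123456))
print(capability_style_url())

If a site wants a path segment to act as a credential, the second form at least makes that intent enforceable; the first leaves it entirely to the visitor's imagination.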
wierdo: Are you claiming that the defendants were ignorant as to the nature of the ICC-ID as a personal identification string and its role as a ridiculously weak authorisation scheme? Because otherwise I just don't see how it's relevant that, without context, figuring this out would be impossible.
posted by Authorized User at 8:00 PM on November 22, 2012
whether or not they're doing something illegal if they generate a URL and try it
Given that the general use of URLs is that they're provided to be clicked on, if you're generating URLs, then you're already across a line as to normal usage. In no sense can serially generating URLs be considered an intended access method for the target of the URL. If a URL is offered on a web page, that constitutes an explicit offer of the target of the URL. If finding the target requires groping around a path-segment-space, then you're already outside the zone of explicitly offered targets, and it's on you if you break the law.
posted by fatbird at 8:08 PM on November 22, 2012
What's the use case for generating URLs that you're thinking of, wierdo?
posted by fatbird at 8:09 PM on November 22, 2012
No. Generating URLs is not abnormal usage. Definitely not. And this case does not make it illegal, in general or even in this specific instance. The key act was knowingly and intentionally using other people's identification codes. Context matters: whoever is writing the deleted posts script for Metafilter knows what the numbers refer to, and that they do not represent an identification scheme of any kind.
posted by Authorized User at 8:16 PM on November 22, 2012
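For what it's worth, the kind of script being discussed is, mechanically, just a loop. The sketch below (made-up URL, standard library only, ordinary post numbers rather than anyone's identification code) is roughly everything that "generating URLs" amounts to.

import urllib.error
import urllib.request

def probe(post_id):
    # Fetch a numbered page and report the HTTP status the server answers with.
    url = f"https://www.example.com/{post_id}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status  # e.g. 200: the page exists
    except urllib.error.HTTPError as e:
        return e.code           # e.g. 404: deleted or never existed

for post_id in range(100, 110):
    print(post_id, probe(post_id))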
fatbird:
Where you're talking about explicitly granting permission, these two sentences contradict each other. You can't explicitly do something that you don't intend to do.
As someone who tends at times toward social awkwardness I can assure you that it is entirely possible, and becoming aware of and avoiding that behavior can make the world a better place.
It's worth pointing out that directory traversal holes[...irrelevant tangent elided]
Not trying to be snarky here but I really don't see where that side-note was going. I agree with you that proper permissions on the server are essential; otherwise you merely have a brittle crunchy shell around gooey nougat. But whether directory traversals are a security problem per se or merely a risk seems entirely irrelevant to the question at hand.
But all the high-minded technologists here pointing the finger at AT&T are, essentially, blaming the victim.
Oh please. This isn't rape. The rape of the lock, maybe. There's not even a victim here except AT&T's pride.
I found many fairly recent examples of PHP code allowing the same thing
The vast majority of PHP code is quite bad. The language makes it easy to write insecure code, and in fact hard to write secure code.
Not only do you need to trust the Apache guys, the PHP guys, and the Wordpress guys that they've written secure software, but the guy from whom you bought a theme as well.
Yes you do. Pretending otherwise is merely a delusion, hurts the progress of the industry, and generates a false sense of security which leads to innocents trusting untrustworthy third parties with far more than they should, and getting hurt.
There are a lot of links in that chain, and saying "if it's available by URL, then it's your fault for not securing it properly" creates an absurdly high standard of strict liability for security breaches, such that no one who hacked your server would ever be found guilty.
Untrue. If you follow the easy-to-use mechanisms for granting and denying access which the protocol already helpfully provides, and someone uses a set of credentials that aren't theirs, that's very clear-cut. I'd even generously allow that triggering an exploit might be considered in the same way. But asking for something and being given it is by no means the same thing.
posted by vsync at 11:53 PM on November 22, 2012 [1 favorite]
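The credentials distinction vsync keeps drawing can be shown directly. The first request below claims no identity at all; the second actively asserts one. The endpoint and the account are invented, and neither request is actually sent; the point is only what each one claims.

import base64
import urllib.request

URL = "https://www.example.com/account"  # hypothetical endpoint

# Asking and being given: no identity is claimed anywhere in this request.
plain = urllib.request.Request(URL)

# Presenting credentials: this request actively claims to be "alice".
token = base64.b64encode(b"alice:hunter2").decode("ascii")
as_alice = urllib.request.Request(URL, headers={"Authorization": f"Basic {token}"})

for req in (plain, as_alice):
    print(req.get_full_url(), dict(req.header_items()))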
Given that the general use of URLs is that they're provided to be clicked on, if you're generating URLs, then you're already across a line as to normal usage.
This is a blatant untruth.
posted by vsync at 11:54 PM on November 22, 2012
Oh please. This isn't rape. The rape of the lock, maybe.
What on earth are you talking about?
posted by Authorized User at 12:28 AM on November 23, 2012 [1 favorite]
You know, Pope's poem? It's a joke, with a play on the word "lock" (hair or security mechanism).
posted by pracowity at 1:10 AM on November 23, 2012 [1 favorite]
Yeah ok. Still don't understand why this is being compared to rape all of a sudden, but maybe that's poetry as well.
posted by Authorized User at 1:24 AM on November 23, 2012 [1 favorite]
Rape got brought in because 'victim blaming' is a thing that happens most often in rape cases and similar situations, so there's a strong association there.
posted by vibratory manner of working at 1:30 AM on November 23, 2012 [1 favorite]
Authorized User: I think most people here actually do agree with what you're saying, that in this case weev's actions are probably illegal and should be prosecuted. What I think is being argued here is that many times, people who stumble upon or otherwise find similar security issues find themselves arrested and charged under such laws even if they follow all the proper disclosure practices. Even if weev hadn't actually harvested all the addresses but had just reported the flaw to AT&T and then the media, I wouldn't be surprised if he had been arrested anyway, so long as he caused AT&T any embarrassment in the process.
And that's the issue here, that the law shouldn't be such that things like that could happen.
posted by destrius at 3:02 AM on November 23, 2012
An ICCID is not an identification number in any usual sense of the term. It has never been a secret code. It's more like a serial number or phone number, not like your voicemail PIN. Where can a line be drawn here? I see no reasonable point that doesn't involve the site owner having to have some level of actual security such that the server says no.
If, after the server says no, a person then continues to attempt access when they are not an authorized user, I can be reasonably sure there was intent. The server said no and they went after the information anyway. That's not what happened here. AT&T could have taken steps to secure access but did not.
If the volume of requests alone (short of a DoS) can make something illegal, we're all fucked anyway. Rudeness should not be criminalized.
posted by wierdo at 8:53 AM on November 23, 2012 [1 favorite]
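In very rough outline, the missing check wierdo is pointing at might look like the sketch below. The data model, session tokens, and ICC-IDs are invented; AT&T's real endpoint is not public, so this is only the shape of "the server says no."

# session token -> ICC-IDs that account actually owns (all values invented)
ACCOUNTS = {
    "sess-alice": {"8901410321111111110"},
    "sess-bob": {"8901410321111111127"},
}
EMAILS = {
    "8901410321111111110": "alice@example.com",
    "8901410321111111127": "bob@example.com",
}

def lookup_email(session_token, iccid):
    owned = ACCOUNTS.get(session_token, set())
    if iccid not in owned:
        return None  # the server saying "no" instead of handing over someone else's email
    return EMAILS.get(iccid)

print(lookup_email("sess-alice", "8901410321111111110"))  # alice@example.com
print(lookup_email("sess-alice", "8901410321111111127"))  # None, i.e. refused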
An ICCID is not an identification number in any usual sense of the term. It has never been a secret code. It's more like a serial number or phone number, not like your voicemail PIN.
Yeah. Fraudulently spoofing your caller ID is also a crime. As is pretending to be someone else by giving their social security number. An ICCID is a serial number identifying a single device. Also, it's extremely silly to use someone's SSN as a security feature. (To the point where banks and similar organizations should be held liable if they do.)
If the volume of requests alone (short of a DoS) can make something illegal, we're all fucked anyway. Rudeness should not be criminalized.
I agree. That's why the volume of requests alone would not be enough to make accessing these a crime. People want to keep reducing this case to a single simple rule, with the extreme example being the quote in the title of this thread. That's not how the law works.
And as a sort of closing statement as this thread winds down: these guys will probably end up spending time in jail and paying a large amount of money in damages. I think this is insane. An appropriate punishment, in my opinion, would be a modest fine and a nominal sum of damages, say 1,200 dollars, which works out to roughly one cent per e-mail harvested. They definitely should not be made to pay for the costs of fixing the security hole. Furthermore, I think there should be a system where companies such as AT&T are held responsible for lax IT security practices.
posted by Authorized User at 9:30 AM on November 23, 2012
And also, sorry for dominating the thread. I had a quiet day and greatly enjoy playing devil's advocate.
posted by Authorized User at 10:02 AM on November 23, 2012
Spoofing my ANI is indeed a crime, but only if I'm doing it in furtherance of another crime. Otherwise, lock my ass up. Very rarely is the ANI sent to the person receiving a call from me the ANI of the particular device I'm using at the time.
And no, the IMEI identifies the device, the ICCID identifies the SIM, the IMSI identifies the user, and Ki authenticates the user.
Yet another example of how this result is stupid: on many auto manufacturers' websites, you can view the service history of a vehicle with nothing more than the VIN. Very handy if you're thinking of buying a car and want to know what warranty service has been necessary. At no time in the process do you have to claim to be the owner of the car. By providing the VIN of a car I don't own, I'm apparently breaking the law. That makes no sense to me.
posted by wierdo at 10:16 AM on November 23, 2012
So was your intent, when accessing those VIN records, to access data you were not authorized to see, and did you know that by providing that VIN you would gain access to this data?
posted by Authorized User at 10:35 AM on November 23, 2012
And if the answer to either of these questions is no, then apparently you are not breaking the law.
posted by Authorized User at 10:44 AM on November 23, 2012
dasein, it's the same guy. (I've met him.)
posted by madcaptenor at 2:43 PM on November 23, 2012
Authorized User writes "Yeah. Fraudulently spoofing your caller ID is also a crime."
Anyone got a cite for this? As far as I know caller ID is like the sex field on our profile and anything goes. It's specifically designed to allow owners of private PBXs to set the origination to anything they please.
Now spoofing ANI is probably a crime because it is used for billing.
posted by Mitheral at 5:22 PM on November 23, 2012
Fraud is illegal, and doing something in furtherance of fraud is illegal, so spoofing your Caller ID or ANI in furtherance of fraud is illegal.
AU, the manufacturer's site I'm thinking of does not explicitly state that you can register with a VIN only if you are the owner, nor does it explicitly prohibit registering with someone else's VIN. It's not a great analogy, though, because the functionality requires you to be a registered user. AT&T has much less to stand on, IMO, since the pages were accessible to any old Joe with the URL.
posted by wierdo at 7:21 PM on November 23, 2012
You Are Committing A Crime Right Now
posted by the man of twists and turns at 2:48 AM on November 26, 2012
Forget Disclosure — Hackers Should Keep Security Holes to Themselves, by Andrew Auernheimer, aka weev.
posted by the man of twists and turns at 10:10 AM on November 30, 2012
The Rise and Fall of Jeremy Hammond: Enemy of the State. As a devastating series of cyberattacks struck the heart of the national-security establishment, the Feds set out to destroy the legendary hacker and radical anarchist by any means necessary.
posted by homunculus at 11:44 AM on December 9, 2012
This thread has been archived and is closed to new comments