15 bits of crypto should be enough for anybody
May 16, 2008 10:01 PM
On May 13, security advisories published by Debian and Ubuntu revealed that, for over a year, their OpenSSL libraries have had a major flaw in their CSPRNG, which is used by the key generation functions of many widely-used applications; the flaw caused the "random" numbers produced to be extremely predictable. [lolcat summary]
How bad is it? It's pretty bad. Understand that these keys are used not only for encryption, but also for authentication. The keyspace has been reduced to a mere 32,768 possibilities, and you can already download them all, along with tools to use them. Worse still, in the days before the issue became publicly known, there was a noticeable spike in the number of brute-force attacks on SSH servers, indicating that there has already been significant exploitation of this vulnerability.
Partial timeline of events: In May 2006, a bug led to a question which led to the fateful patch being applied to md_rand.c (in Debian's "unstable" development branch). In April 2007, Debian 4.0 "etch" and Ubuntu 7.04 were both released, which was the beginning of the inclusion of the buggy version of OpenSSL in officially-released distributions. The bug remained unfixed through the releases of Ubuntu 7.10 and 8.04. On May 7, 2008, the patch to fix the problem was committed to Debian's source repository, and on May 13 the issue was officially disclosed and updated packages were made available to users. (The patch's availability days before public disclosure of the bug appears to be a violation of Debian's policy.)
Here are some responses from Debian blogs, and two from an OpenSSL developer.
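To make the "32,768 possibilities" figure concrete: with the entropy-mixing call gone, essentially the only thing that still varied between runs of the key generator was the process ID, and Linux PIDs top out at 32,768. A toy Python sketch of what that means for an attacker (the derivation function here is made up; real keys are RSA/DSA keys, not hashes):

```python
import hashlib

def toy_keygen(pid: int) -> bytes:
    # Hypothetical stand-in for a key generator whose only "entropy" is the
    # 15-bit process ID: the output is a pure function of pid.
    return hashlib.sha1(b"toy-keygen" + pid.to_bytes(2, "big")).digest()

# The attacker precomputes every possible key once -- this is essentially
# what the downloadable weak-key sets are...
candidates = {toy_keygen(pid): pid for pid in range(1, 32769)}

# ...and can then instantly identify any key observed in the wild.
observed = toy_keygen(4242)   # pretend this was scraped from some server
print("generated under pid:", candidates[observed])
```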
This sucks.
posted by IronLizard at 10:32 PM on May 16, 2008
"Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin."
-John von Neumann
posted by mullingitover at 10:35 PM on May 16, 2008 [16 favorites]
more posts should have lolcat summaries. even without it, this would still be an excellent post. thanks.
posted by Dillonlikescookies at 10:44 PM on May 16, 2008 [2 favorites]
Lots of crow to eat all around. Debian devs are chiefly at fault, but OpenSSL devs need to choke down some crow as well.
posted by Slithy_Tove at 11:01 PM on May 16, 2008
From the last link in the FPP:
"... The question is: what should we do to avoid this happening again? Firstly, if package maintainers think they are fixing a bug, then they should try to get it fixed upstream, not fix it locally. Had that been done in this case, there is no doubt none of this would have happened. Secondly, it seems clear that we (the OpenSSL team) need to find a way that people can reliably communicate with us in these kinds of cases.
The problem with the second is that there are a lot of people who think we should assist them, and OpenSSL is spectacularly underfunded compared to most other open source projects of its importance. No-one that I am aware of is paid by their employer to work full-time on it. Despite the widespread use of OpenSSL, almost no-one funds development on it. And, indeed, many commercial companies who absolutely depend on it refuse to even acknowledge publicly that they use it, despite the requirements of the licence, let alone contribute towards it in any way.
I welcome any suggestions to improve this situation. ..."
That's a salient point, for those championing OSS. Many of the core technology modules of any of the major free OSS projects come down to being the "products" and "responsibility" of a handful of volunteers, and when a cockup of this magnitude gets out, people whose jobs demand an accountability trail in the money economy don't forget them. Windows may suck, and i5/OS may be closed and inscrutable, but, by God, there is somebody to sue at the bottom of those responsibility chains.
Short version: "You get what you pay for."
posted by paulsc at 11:09 PM on May 16, 2008
Lots of crow to eat all around. Debian devs are chiefly at fault, but OpenSSL devs need to choke down some crow as well.
Indeed. If you look at the "a question" link, the deb developer asks about commenting out the two lines of code that caused the problem (well, one was okay, the other caused the problem) and states "But I have no idea what effect this really has on the RNG. The only effect I see is that the pool might receive less entropy." The OpenSSL dev replies with "If it helps with debugging, I'm in favor of removing them."
So although the Debian guys have to take some heat, he was pretty open about the fact he wasn't certain about what he was doing, and the OpenSSL guys said to go right ahead.
posted by markr at 11:13 PM on May 16, 2008 [3 favorites]
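For anyone who hasn't clicked through: the patch commented out two similar-looking calls that mix data into the PRNG pool, and only one of them was safe to lose. A heavily simplified Python sketch of the structure (an illustration only, not the real md_rand.c):

```python
import hashlib
import os
import time

pool = hashlib.sha1()   # stand-in for the PRNG's internal state

def rand_add(data: bytes) -> None:
    # The mixing call on the "add entropy" path: time, PID, /dev/urandom
    # bytes, etc. enter the pool here. Commenting this one out is what left
    # the output depending on almost nothing.
    pool.update(data)

def rand_bytes(n: int) -> bytes:
    buf = bytearray(n)   # caller-style output buffer, possibly "uninitialized"
    # The mixing call on the "get bytes" path folded buf's current contents
    # into the pool -- a free scrap of extra noise, and the source of the
    # valgrind warning. Removing only this line would have been harmless:
    # pool.update(bytes(buf))
    out = bytearray()
    while len(out) < n:
        pool.update(os.getpid().to_bytes(4, "little"))  # the PID stays mixed in
        out.extend(pool.digest())
    buf[:] = out[:n]
    return bytes(buf)

rand_add(os.urandom(32) + time.time_ns().to_bytes(8, "little"))
print(rand_bytes(16).hex())
```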
Windows may suck, and i5/OS may be closed and inscrutable, but, by God, there is somebody to sue at the bottom of those responsibility chains.
Have you read the Windows EULA? Or any software license, for that matter? They have fun things in them like:
EXCLUSION OF INCIDENTAL, CONSEQUENTIAL AND CERTAIN OTHER DAMAGES. TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, IN NO EVENT SHALL MICROSOFT OR ITS SUPPLIERS BE LIABLE FOR ANY SPECIAL, INCIDENTAL, PUNITIVE, INDIRECT, OR CONSEQUENTIAL DAMAGES WHATSOEVER (INCLUDING, BUT NOT LIMITED TO, DAMAGES FOR LOSS OF PROFITS OR CONFIDENTIAL OR OTHER INFORMATION, FOR BUSINESS INTERRUPTION, FOR PERSONAL INJURY, FOR LOSS OF PRIVACY, FOR FAILURE TO MEET ANY DUTY INCLUDING OF GOOD FAITH OR OF REASONABLE CARE, FOR NEGLIGENCE, AND FOR ANY OTHER PECUNIARY OR OTHER LOSS WHATSOEVER) ARISING OUT OF OR IN ANY WAY RELATED TO THE USE OF OR INABILITY TO USE THE SOFTWARE, THE PROVISION OF OR FAILURE TO PROVIDE SUPPORT OR OTHER SERVICES, INFORMATON, SOFTWARE, AND RELATED CONTENT THROUGH THE SOFTWARE OR OTHERWISE ARISING OUT OF THE USE OF THE SOFTWARE, OR OTHERWISE UNDER OR IN CONNECTION WITH ANY PROVISION OF THIS EULA, EVEN IN THE EVENT OF THE FAULT, TORT (INCLUDING NEGLIGENCE), MISREPRESENTATION, STRICT LIABILITY, BREACH OF CONTRACT OR BREACH OF WARRANTY OF MICROSOFT OR ANY SUPPLIER, AND EVEN IF MICROSOFT OR ANY SUPPLIER HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
Microsoft would certainly be embarrassed by a mistake of this magnitude and they might lose future sales because of it, but if you chose to sue them over it, you wouldn't see a dime.
posted by nmiell at 11:16 PM on May 16, 2008 [8 favorites]
Short version: "You get what you pay for."
Aw foo. This problem took a year to be discovered even though the source code is available to public inspection. How much longer would it have taken to discover if it were part of a closed-source system, in which it could only be found through close observation or through the efforts of privileged developers?
posted by JHarris at 11:18 PM on May 16, 2008 [2 favorites]
Windows may suck, and i5/OS may be closed and inscrutable, but, by God, there is somebody to sue at the bottom of those responsibility chains.
Have you ever actually read a software license? You aren't going to get JACK from Microsoft or anyone if there's a bug in their OS that causes you to lose critical data. This is pure hysterical handwaving with no basis in reality whatsoever.
Here's the raw truth: there is NOBODY at the bottom you can sue. Nobody. Period. At least, with open source, you can take responsibility for the code and fix it yourself. Alternately, you can also easily change vendors to someone else who can support your chosen code base. In the case of Linux, this would be moving to, say, Red Hat or Novell or Mandriva.
posted by Malor at 11:20 PM on May 16, 2008 [6 favorites]
Dammit, hit post too soon.
If you don't like how Microsoft is handling your bugs, well, too bad for you. There is no alternate vendor for Windows.
posted by Malor at 11:22 PM on May 16, 2008
Wouldn't be entirely surprised if security problems this bad have happened in the commercial world and got silently patched (if at all) with no public disclosure.
What I'm saying, I guess, is that I can't say whether the commercial world is more accountable.
posted by zippy at 11:23 PM on May 16, 2008
Debian Random Cat, by the way, is awesome, both as a joke, and for being a picture full of GRAY TABBY KITTENS AW SNUGGLEWUGGINZ--
Ahem. I like it.
posted by JHarris at 11:24 PM on May 16, 2008 [1 favorite]
While John von Neumann was the baddest ass of them all, using that quote is somewhat disingenuous in this context — there are many terrific sources of entropy available on your computer. Lots of them are time-related: the precise timing between CPU interrupts being an awesome one (every time you hit a key, wiggle your mouse, use USB/serial/parallel, make a syscall, etc.), and clock instability is also golden. Electrical noise is also pretty good: the minute static in the connections and A/D converters on your sound card; the shittiest webcam with epoxy over the sensor will generate lots of sensor noise too.
OpenSSL uses a whole bunch of such tools available to generate seed entropy for random number generation. At some point the program is going to have to put the entropy into a region of memory, and the developers thought: "it would be wasteful if we wrote over this memory with zeroes before putting intentionally random data in it", and they were right.
That the debian 'maintainers' thought it would be a good idea to get rid of the USING UNINITIALIZED MEMORY warning itself wasn't all that bad, it's pretty consistent with their mission to wank as hard as is possible (see IceWeasel, debian-legal, etc.).
The fact that they managed to remove any and all use of entropy, so that everyone started from the same entropy seed value, is totally inexcusable.
It is in no way OpenSSL's fault that debian is full of rules-lawyering asshats.
posted by blasdelf at 11:43 PM on May 16, 2008 [8 favorites]
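To illustrate the "timing is golden" point: even in pure Python you can harvest usable-looking noise from the low bits of high-resolution timings. This toy is not cryptographically vetted and is not how OpenSSL or the kernel actually gather entropy, but it shows how much jitter is lying around:

```python
import hashlib
import time

def jitter_seed(n_samples: int = 4096) -> bytes:
    samples = bytearray()
    for _ in range(n_samples):
        t0 = time.perf_counter_ns()
        sum(range(500))                 # any operation with variable latency
        dt = time.perf_counter_ns() - t0
        samples.append(dt & 0xFF)       # keep only the noisy low byte
    # hash the biased raw samples down to "whiten" them
    return hashlib.sha256(bytes(samples)).digest()

# two back-to-back runs should essentially never agree
print(jitter_seed().hex())
print(jitter_seed().hex())
```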
"... Have you ever actually read a software license? You aren't going to get JACK from Microsoft or anyone if thre's a bug in their OS that causes you to lose critical data. This is pure hysterical handwaving with no basis in reality whatsoever. ..."
posted by Malor at 2:20 AM on May 17
Um, are you aware that Microsoft, among other software vendors, sells under terms other than the EULA you mentioned?
Like this one, for students of an educational institution in Texas:
"LIMITED WARRANTY
LIMITED WARRANTY. Microsoft warrants that (a) the SOFTWARE PRODUCT will perform substantially in accordance with the applicable user documentation published by Microsoft for a period of ninety (90) days from the date you first acquired the SOFTWARE PRODUCT, and (b) any Support Services provided by Microsoft shall be substantially as described in the applicable user documentation published by Microsoft, and Microsoft support engineers will make commercially reasonable efforts to solve any problem issues. Some states and jurisdictions do not allow limitations on duration of an implied warranty, so the above limitation may not apply to you. To the extent allowed by applicable law, implied warranties on the SOFTWARE PRODUCT, if any, are limited to ninety (90) days. Notwithstanding the foregoing, Microsoft under no circumstances warrants the media on which the SOFTWARE PRODUCT has been distributed to you. ..."
There are several corporate licenses and support agreements that go well beyond the Microsoft shrink wrap EULA, too. And of course, if some monetary payments have been made by any software vendor, in settlement of warranty claims, those payments are often conditional on an NDA covering the settlement.
The fact that dollars routinely change hands after software defects are discovered in commercial software, or that warranty work is done for large customers by major vendors without additional charge, without much public fanfare, ought not be too surprising for anyone who has ever negotiated a few hundred seats of a software purchase, and support contracts.
posted by paulsc at 11:56 PM on May 16, 2008
So, it's pretty clear to me how the problem occurred. What I'm curious about is, why did it get fixed? The fix was committed by the same maintainer, and the commit log indicates he understood the nature of the goof he was fixing. What brought it to light, and how can it (whatever it is) be made to happen more often?
zippy: You bet they do. And the fix gets put out in a point release or a patch with, at best, a vague mention of "... and some reliability and security improvements". Black-hats, on the other hand, have taken to analyzing patch releases in order to get specific hints as to how to attack.
posted by hattifattener at 12:01 AM on May 17, 2008
The internet is full of people who are not qualified to answer your question, but are happy to help anyway. Sometimes they don't even know that they are not qualified. When these are the people doing peer review of security software, then that is the root cause. Not the patch itself.
How ... ?
Well it's kind of like people were spoofing the openssl developers. The openssl team published their contact information as a public mailing list, both in the source package and their web site, then apparently didn't read or maintain the list. And when someone actually needed to talk to them, his question actually got answered by "somebody on the internet," and he didn't know it. The person answering probably didn't realize that he wasn't qualified to answer either.
The fact that eventually people on the list would assume a position of expertise and leadership, and that a network of trust would develop internal to the list, is completely normal. The fact that a question will then never be answered with an "I don't know, I'll have to ask someone who knows more than me" should be entirely predictable to anyone familiar with the internet post-1995.
posted by cotterpin at 12:44 AM on May 17, 2008 [2 favorites]
i have to agree with this guy-
Debian shouldn't be messing with this code and "never fix a bug you don’t understand"
posted by bhnyc at 1:01 AM on May 17, 2008
At first I thought this was limited to bad keys. But this quote from this message has me scared: "Since the nature of the crypto used in ssh cannot ensure confidentiality if either side uses weak random numbers[5] we have also randomized all user passwords in LDAP."
If, like me, you were feeling pleased with yourself for being too lazy to generate SSH client certs (instead using your shell account password each time), NOT SO FAST. The lack of entropy means your password could have been revealed each time you remotely logged into a Debian SSH server. And if you've logged into a server you yourself don't run, you can't be sure it suffers from this vulnerability, so you have to assume your password has been compromised.
And that means changing your password on that machine -- unless you used that same password elsewhere, in which case you have to change it everywhere. That sucks. I'd like to think I'm wrong and missing something here and over-reacting, but the Debian infrastructure guys have scrambled all the passwords in their directory!
I realize this isn't the best place to ask this question, but: Are any web servers vulnerable to this (when operated in a password-form-POST-over-HTTPS mode)? That would *really* suck.
> their mission to wank as hard as is possible
I laughed out loud.
Oh, and fantastic post, finite. Thank you.
posted by sdodd at 1:04 AM on May 17, 2008
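sdodd's password worry is the right one, and the weak link is the key exchange rather than the password itself. A toy sketch of why a purely passive eavesdropper wins when the server's ephemeral Diffie-Hellman secret can only take 32,768 values (made-up group parameters and a made-up stand-in for the broken PRNG; nothing here is the actual SSH protocol):

```python
import hashlib

p, g = 0xFFFFFFFB, 5                      # toy group parameters, NOT SSH's

def broken_rng(pid: int) -> int:
    # stand-in for the crippled PRNG: output depends only on the PID
    return int.from_bytes(hashlib.sha1(pid.to_bytes(2, "big")).digest(), "big") % p

client_secret = 123456789                 # the client's randomness may be fine
server_secret = broken_rng(31337)         # the server's is one of 32,768 values

A = pow(g, client_secret, p)              # both of these cross the wire in clear
B = pow(g, server_secret, p)
session_key = pow(A, server_secret, p)    # this is what protects the password

# Passive eavesdropper: try every PID, find the one that reproduces B.
for pid in range(1, 32769):
    if pow(g, broken_rng(pid), p) == B:
        print("session key recovered:", pow(A, broken_rng(pid), p) == session_key)
        break
```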
For a trip down memory lane, here's another famous RNG-related crypto bug:
NYT: Software Security Flaw Puts Shoppers on Internet at Risk
By JOHN MARKOFF
Published: September 19, 1995
posted by sdodd at 1:22 AM on May 17, 2008
I am not smart enough to grok the entire issue, but isn't it a bit mind-boggling to learn that if you want to contact the developer team for the openssl stuff you apparently need a decoder ring--the public information available to the great unwashed out there points to resources that aren't monitored by anyone on that team.
It's ironic that the openssl project is actually not very open at all, if you want to find them.
And while it is human nature to want to hide shameful things, checking this in a week before announcing that folks need to update their software, therefore giving the bad guys time to exploit this, deserves special mention. Doesn't it? Everyone is going to find out anyway, just "man up" and get it over with rather than making the problem even worse.
posted by maxwelton at 1:53 AM on May 17, 2008
paulsc: There are several corporate licenses and support agreements that go well beyond the Microsoft shrink wrap EULA, too.
Isn't that what the commercial Linux vendors are selling to corporate clients?
posted by ghost of a past number at 2:05 AM on May 17, 2008
"Isn't that what the commercial Linux vendors are selling to corporate clients?"
posted by ghost of a past number at 5:05 AM on May 17
Sure. And that may be the source of some bitterness for the OpenSSL volunteers, who, in saying "I welcome any suggestions to improve this situation," may perhaps be saying "Pay us if you expect us to read every post to our dev mailing list. Otherwise, we're best-effort volunteers."
I don't disagree with them, and I think OpenSSL is probably incorporated into a number of commercial distros, too.
posted by paulsc at 2:27 AM on May 17, 2008
It's ironic that the openssl project is actual not very open at all, if you want to find them.
This is the second significant OpenSSL bug this year that seems to boil down to the OpenSSL devs being challenging to work with, if you believe the (first) RedHat and (now) Debian dev accounts of things.
posted by rodgerd at 2:45 AM on May 17, 2008 [1 favorite]
OpenSSL is used in tons of prominent commercial software (often in violation of the advertising clause in its license), and is linked against by nearly every Linux/BSD/OSX program that needs encryption. Hundreds of millions of installations, including every modern HTTPS webserver that's not IIS. I wouldn't trust someone that would use any other implementation, including patched ones like debian (with an exception for the wankery that is GnuTLS).
Yet I don't think anyone has been explicitly paid to work on it for 5 years — a prematurely terminated DARPA grant gave funds to both OpenBSD/OpenSSH and OpenSSL (they are separate projects). It's dormant because the project hasn't needed any constructive work in years — it's effectively complete. Every once in a while someone clever working for a company using it on a large scale (or a researcher) will notice a bug and fix it on the clock. Their patches get accepted and notices get sent out to update, especially if the bug exposes even a minor vulnerability.
That their patch was not greeted with the five-alarm BUG siren should have told the Debian developers that their 'fix' wasn't. The OpenSSL developers only let us down in that they weren't paying attention to the dormant list enough to flame the shit out of the Debian fucklers.
posted by blasdelf at 3:25 AM on May 17, 2008
Yeah, it got noticed because there's been a week of heavy SSH scans going on and people went WTF? If you have a couple of /16 network ranges and logging, you were doing the facepalm right about three days ago.
posted by zengargoyle at 3:47 AM on May 17, 2008
blasdelf: Of course, the fact that this is a supposedly dormant list is well documented by the openssl guys, yes?
Like most security failures, this is a *process* failure more than anything else. It's mistaken to heap all the blame on the Debian developers, although they fully deserve their large share of it.
It's inevitable that distributions will need to patch upstream code. Debian should have proper code review procedures in place for *all* changes to core security-related code. No code review, no change. At the same time, the OpenSSL team has a responsibility to make clear that for their part either there is no process & distributors are on their own, or that there is one which is clearly documented. (Either is fine: clarity is more important than anything else.)
Having your documentation claim that a particular mailing list is the right place to ask questions about patches to openssl, then disclaiming any responsibility for questions asked and answered on that list by developers with openssl.org addresses, is not really acceptable.
posted by pharm at 4:21 AM on May 17, 2008
cotterpin: if you read the thread in question, you'll note that the Debian developer's question was answered by people with openssl.org addresses, not exactly "somebody on the internet".
Personally I feel that Debian holds the majority of the blame in this instance. Not the specific Debian developer in question, but Debian as a whole. The OpenSSL team also bears some responsibility for their woeful lack of clarity of process for dealing with this kind of issue. This should have been caught before it ever reached released code, and it wasn't.
posted by pharm at 4:29 AM on May 17, 2008
Voice of UniBlab: "SHAVE YOUR NECK! *tock* NECK NECK NECK!"
posted by quonsar at 4:31 AM on May 17, 2008 [1 favorite]
...but, by God, there is somebody to sue at the bottom of those responsibility chains.
The fact that nobody has ever done this is ironclad proof that all Microsoft products, ever, are completely and totally bug-free. Take that, you Linux weenies.
posted by swell at 6:25 AM on May 17, 2008 [1 favorite]
Given that there was a change to RNG code, why didn't someone run the code and verify the entropy change's effect on the size of the keyspace? It would probably have taken less time than it took to write this sentence to see such a drastic reduction. I can see how the Debian developer might not have thought of this, but OpenSSL folks should have.
posted by tommasz at 6:53 AM on May 17, 2008
neustile, a good starting point for fixing the mess is on this livejournal post. Follow links to more info. #1 thing I did was remove all my suspect authorized_keys files. Unfortunately Debian testing still hasn't gotten all the fixes that were pushed to stable, so I'm still in limbo.
What I find most interesting about this is how big a failure of the open source process it is. We're told open source is better because many eyes review the code. But here was a one-line broken patch that went unnoticed for two years. No one caught it (at least, no one wearing a white hat).
Also, tommasz, interesting idea to have a test to measure the entropy. I'm not sure that's really easy to do in a practical way, though. It takes a lot of time to generate a statistical sample. And some of the entropy sources only give you a few bits of entropy every second. Plenty when you only need 1024 bits to generate a key, not nearly enough when you need a million bits to test the key generation.
This is a pretty big clusterfuck. Two years from now, systems will still be compromised because they have bad old keys.
posted by Nelson at 7:23 AM on May 17, 2008
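On the "why didn't anyone notice" question: a proper statistical entropy test is indeed slow, but this particular failure is so gross that a crude census would have caught it. With only 2^15 possible keys of a given type and size, the birthday bound says duplicates should show up after collecting keys from just a few hundred affected machines. A quick back-of-envelope:

```python
import math

keyspace = 2 ** 15   # 32,768 possible keys per type/size under the bug

for n in (50, 200, 500, 2000):
    # birthday approximation: P(at least one repeat among n uniform draws)
    p = 1 - math.exp(-n * (n - 1) / (2 * keyspace))
    print(f"{n:5d} keys collected -> P(duplicate) ~ {p:.2f}")
```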
Saved by having generated all my SSH keys on Ubuntu installs older than this bug.
posted by Skorgu at 7:24 AM on May 17, 2008
Remember "many eyes make bugs shallow?" Yeah, me too. It's just another one of those open source mantras that has about as much grounding in reality as any other religious chant. Another case in point, that lovely twenty-five year old BSD bug. Or that guy who backdoored a C compiler and it stayed that way for fourteen years.
Just because the source is there to look at does not mean that, somehow, everyone who downloads a piece of open source software magically becomes a developer, any more than possession of a paintbrush transforms you into a great artist.
This nicely illustrates a lot of open source issues - the whole WHEE, SHINY! mentality that gives a lot of attention to glamorous interfaces and not much love to the people who produce the packages around which all of that glittery Christmas paper is wrapped.
Finally, how hard would it be to develop a standardized chip for a motherboard, to serve as a hardware RNG, maybe through a combination of thermal noise and a small wad of radioactive material? I know you're going, "Radioactive material, OMG!" but just about every house in America has at least two radioactive devices - they're called smoke alarms. I know true RNGs cost an awful lot right now (unless you're someone who has whipped them together with a webcam and a lava lamp), but I think the cost would approach reasonability after some real research dollars were spent on it on the commercial side (rather than some military-industrial bit of "Let's make a $10,000 Random-o-tron, we can sell them to the Army for a fortune!").
posted by adipocere at 8:11 AM on May 17, 2008
Really excellent post.
Buffet of explanatory and expansive links, well-written summary, and great convo inside so far (even the Windows vs. Open Source chain rattling).
Strangely enough, this whole thing has made me more interested in Open Source operating software rather than less.
posted by batmonkey at 8:34 AM on May 17, 2008
And that means changing your password on that machine -- unless you used that same password elsewhere, in which case you have to change it everywhere. That sucks. I'd like to think I'm wrong and missing something here and over-reacting, but the Debian infrastructure guys have scrambled all the passwords in their directory!
Yup. Any password used to log in via ssh to or from a weak-keyed machine. Any traffic sent over an https connection with a certificate that was generated on a weak-key machine (including a commercially signed one). Or ldaps, or sftp, or even imaps and smtps. All that traffic, and any passwords in there, are potentially compromised.
That said, all of the above cases only apply if
1) someone was packet logging the connection (man in the middle) and knew about the weakness already, or kept the log and can now break it. This is pretty unlikely for most people.
2) or the key and/or cert isn't replaced, and someone mitms you now with you still using a weak certificate created on ubuntu 7.04, 7.10 or 8.04 pre-patch (or the affected debian versions)
The biggest vulnerability by far is old authorised key files on openssh servers, even patched or non-debian ones - that's going to be a rich vein of break-ins for years, as it allows direct login access without password - no mitm required.
Thank fuck my certificate server at work is gentoo, and generates all my keys, otherwise I'd have been buying 20 new certs last week and forcing a password change on all systems on 2000 users at once. At least I can let the usual password change system stagger them as usual. I only had to replace the ubuntu ssh server keys and apply the patches.
posted by ArkhanJG at 8:48 AM on May 17, 2008
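For anyone staring down the authorized_keys cleanup ArkhanJG describes, the check itself is mechanical: fingerprint each key and look it up in one of the published weak-key lists. A rough sketch, with the filenames and the one-fingerprint-per-line blacklist format assumed for illustration (the real published blacklists and the distro-supplied checking tools may use a different format):

```python
import base64
import hashlib

def md5_fingerprint(pubkey_line: str) -> str:
    # An OpenSSH public key line looks like "<type> <base64 blob> [comment]".
    # (Assumes no options= prefix on the line.)
    blob = base64.b64decode(pubkey_line.split()[1])
    return hashlib.md5(blob).hexdigest()

# assumed: a local file with one lowercase hex MD5 fingerprint per line
with open("weak_fingerprints.txt") as f:
    weak = {line.strip().lower() for line in f if line.strip()}

with open("authorized_keys") as f:
    for lineno, line in enumerate(f, 1):
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        if md5_fingerprint(line) in weak:
            print(f"line {lineno}: weak key, remove it")
```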
paulsc: did you even read your own warranty text? That's a 90-day limited warranty; after 90 days, it means nothing. At 91 days, if you discover some critical error that makes it unsuitable for your purposes, Microsoft doesn't have to do jack. They might, depending on the problem, but they don't have to. It's entirely up to them.
Further, you handwave about mysterious hidden software settlements. You carefully dance around your absolute lack of evidence by claiming 'they're under NDA!'. Well, I think there are aliens on the dark side of the moon that will refund your Windows license fees if you don't like it. They're doing this as a research project, and of course you don't see anything about this in the mainstream media because of their potent mind-control rays.
In other words: as far as I can tell, you're making that shit up wholesale. Nobody, to my knowledge, has successfully sued Microsoft for any security breach in Windows. Harping about NDAs is irrelevant; without evidence, there's no reason to believe this has happened.
As far as I can tell, you're substituting imagination of what you think SHOULD happen with the actual world as it is. Microsoft has no more liability to you than the OpenSSL guys do, even though you paid Microsoft a lot more money.
posted by Malor at 8:50 AM on May 17, 2008
Oh shit. It's not even free (for me): I had to buy a program to help me partition my hard drive before the ubuntu thingy came along. I don't know crap about any of this and now I'm wondering if I should stop using it, which would suck cause I like it. You younguns who understand all this stuff are lucky.
posted by notreally at 8:52 AM on May 17, 2008
Oh, I missed this:
Finally, how hard would it be to develop a standardized chip for a motherboard, to serve as a hardware RNG,
Intel actually did this; my P3 motherboard has a RNG on it. They dropped it later, and I've always been curious about why. My batshitinsane paranoid side has wondered, a time or two, if maybe the government doesn't like mortals having that much easy randomness available.... :-)
posted by Malor at 8:54 AM on May 17, 2008
blasdelf: Your bias is showing. This whole mess really has very little to do with Debian's software policies.
Debian certainly fucked up here but it's not like there haven't been much much worse fuck ups with RNGs in the past.
posted by public at 8:57 AM on May 17, 2008
Is there one single link that a non-programmer could read about what happened? There are too many links to details here and I can't find an overview. Thanks!
posted by nowonmai at 9:22 AM on May 17, 2008
blasdelf: Your bias is showing. This whole mess really has very little to do with Debian's software policies.
Not their policies, but it is a direct consequence of the cultural willingness on the part of Debian maintainers (and, to be fair, distribution maintainers in general, though Debian is a particularly bad offender) to essentially maintain every package as a fork of the upstream and apply whatever poorly-understood third-party patches tickle their fancy. The window manager I use is no longer available as a Debian package after the author changed the license to prevent Debian shipping obsolete versions with a half-dozen unmaintained modules and patches as part of the main package to users who then came to him for support.
I really don't understand what drives distros to do this — it certainly doesn't serve me as a user when I get something wildly different than what is documented. Is it really so difficult to restrict packaging changes to rearranging the directory structure to fit the distro's standards and calling it a day?
posted by enn at 9:25 AM on May 17, 2008 [1 favorite]
The window manager I use is no longer available as a Debian package after the author changed the license to prevent Debian shipping obsolete versions with a half-dozen unmaintained modules and patches as part of the main package to users who then came to him for support.
This is probably due to the maintainer of that specific package rather than an overall distribution issue. Debian is run by volunteers and lots of less popular packages are not maintained as well. If someone was actively maintaining that package then the upstream author probably wouldn't have been so pissed off. As usual, this is FOSS, so you can always roll your own package or improve existing ones.
Distros tend to maintain things this way because they want to provide a consistent and comprehensive platform, and often this means integrating patches into software that frankly are just not sensible to integrate upstream, or patching downstream so they don't have to wait for updated releases. Sometimes this just means maintainers do crazy stuff that's essentially a fork published under the same name. The solution to this is probably to publish your software under a trademarked name (e.g. Firefox vs IceCat) to force distros either to not publish modified software under the same name or to not modify your software.
It's not perfect, but it's usually an issue of simply not having enough eyes or enough communication between users and maintainers. This OpenSSL thing is a perfect example of poor communication leading to bugs. If the OpenSSL team were more transparent and easy to contact this would never have happened, but at the same time Debian should have realised this and not modified code or published patches that were not fully understood.
posted by public at 9:38 AM on May 17, 2008
enn: There are lots of reasons why distributors patch packages. Bugs have to be fixed & not every upstream developer is responsive about fixing them in a timely fashion.
Indeed, the answer some of them give is "sorry, we don't support version 1.x any more, upgrade to the new shiny 2.x instead". Some distributions (and Debian is one of these) take the attitude that if the users want to stay with version 1.x then they should be able to do so, and so backport bugfixes to older releases.
That's before you get to thorny issues like 64-bit cleanliness, valgrind cleanliness and so on. Many developers can't be bothered with this stuff. Does that mean that distributions should ignore their 64-bit users, or developers using their systems who'd like to be able to debug their software under valgrind or purify without wading through endless spurious errors?
posted by pharm at 9:39 AM on May 17, 2008
Does that mean that distributions should ignore their 64-bit users, or developers using their systems who'd like to be able to debug their software under valgrind or purify without wading through endless spurious errors?
Well, when the alternative is "make valgrind shut up by breaking SSL for two years," then, obviously, yes.
Some distributions (and Debian is one of these) take the attitude that if the users want to stay with version 1.x then they should be able to do so, and so backport bugfixes to older releases.
The threshold for re-arranging release cycles in this way (which is what you're doing; if you're putting 2.x fixes in 1.x, it's not really 1.x anymore, it's some probably-lightly-tested agglomeration of the 1.x and 2.x codebases that may or may not do what is expected) needs to be much higher than it is. Is adding support for pretty anti-aliased fonts to some package really a good reason for this multiplication of entities? Debian thinks so; I don't.
(This is, of course, without getting into cases where maintainers add entirely new systems, like the Common Lisp packages in Debian which are essentially unusable in that they attempt to hook Lisp package management and Debian package management together, it doesn't really work, the Lisp packages they package are years out of date, and they leave nasty init code skulking about in /etc even after they're removed that can easily be inadvertently loaded into one's own Lisp system and break things. But I digress.)
posted by enn at 9:59 AM on May 17, 2008
Addendum: obviously it's somewhat silly to complain about this sort of thing since no one is forcing Debian and its derivatives on me and some people seem to like this approach to distro packaging. I'm mainly venting because I don't know of any alternatives; if someone is aware of a Linux distribution with a more conservative and cautious packaging culture I would love to hear about it.
posted by enn at 10:10 AM on May 17, 2008
Well, when the alternative is "make valgrind shut up by breaking SSL for two years," then, obviously, yes.
Ah, a low blow.
The fact remains that some patching is inevitable. The only question is how much. Different people are going to have different preferences of course, but this is hardly news.
if someone is aware of a Linux distribution with a more conservative and cautious packaging culture I would love to hear about it.
Slackware?
posted by pharm at 10:21 AM on May 17, 2008
nowonmai: without going into too much detail, a debian (a core linux distro that has several spinoffs, including ubuntu) programmer fixed a fairly harmless problem in openssl two years ago. By mistake, he also commented out another very similar line that was responsible for much of the entropy - randomness - that openssl uses to generate keys. These keys are the basis behind much of the encryption people use on the internet to secure communications between computers.
The classic example is the bank website; they use an https page, with an https certificate. That certificate (the private key part, anyway) is basically just a very large secret number, created by openssl. When you send traffic backwards and forwards to the bank, it's encrypted; since the bank is the only one who knows the secret number, they're the only one that can decrypt the traffic you send them. Now, normally that number is virtually impossible to guess by brute force because of the large random source it was created from. Because of the mistake, the dodgy version of openssl only generated keys from a small range, so if the bank created their certificate using the broken openssl, the key (secret number) is going to be one of a much smaller number of possibilities. Someone with a list of many of those keys, who is able to listen in on the connection between your computer and the bank's website, can use it to quietly break the encryption on the bank page and listen in to everything in that supposedly secure connection - like your bank details and login passwords.
That particular example is pretty unlikely - your bank is almost certainly still a safe place to visit, not least because they'll have replaced the broken keys and the vulnerable certificate by now. However, it's possible using the same exploit to break into ssh servers (remote terminal) and vpn servers running debian/ubuntu that haven't been fixed. Anyone running ubuntu at home with openssh installed, for example, should replace their keys and patch the software. Anyone who has run their own secure mail server or webserver based on ubuntu these last couple of years will probably have to recreate their certificates. Given the huge number of places using debian and ubuntu, and the amount of secure services that have to be redone, it's a big problem. Sysadmins were working like stink on this during the week; many are probably having a long weekend still working on replacing and updating everything. Any system that's poorly maintained will still be vulnerable, and given the number of places the bad certificates could be, it's going to take a long time for the impact of this to be over.
posted by ArkhanJG at 10:37 AM on May 17, 2008 [1 favorite]
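To make the scale concrete, here is a rough Python sketch of why a roughly 2^15 keyspace is game over. It assumes, as the bug effectively ensured, that the only thing distinguishing one generated key from another is the process ID; derive_key is a hypothetical stand-in for the real, slow key-generation step, not anything from OpenSSL itself.

import hashlib

# Illustrative sketch only; not the real OpenSSL code path or the published
# attack tools. Assumption: the only input that distinguishes one generated
# key from another is the process ID, at most 32768 values on Linux then.
PID_MAX = 32768

def derive_key(pid: int) -> bytes:
    """Hypothetical stand-in for 'generate a keypair seeded only by the PID'."""
    return hashlib.sha1(f"fake-keygen:{pid}".encode()).digest()

# An attacker enumerates the whole keyspace once, up front (roughly 2^15 keys)...
candidates = {derive_key(pid): pid for pid in range(1, PID_MAX)}

# ...after which matching any observed victim key is a dictionary lookup.
victim_key = derive_key(31337)   # pretend this was scraped from a public server
print("victim key came from pid", candidates[victim_key])

The point of the sketch is that the expensive part (generating every possible key) is done once and shared; after that, recognising any weak key costs nothing.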
Wow, I had no idea this was such a big deal. I run Ubuntu on my laptop and on a server I use for backups/VPN at work, and I just installed the security updates when they became available on Update Manager, and was presented with a notice that I should test my keys with the included utilities (openssl-vulnkey and openvpn-vulnkey). Thankfully, none of my keys were compromised, and I always use public key auth with SSH, but I replaced them all anyway. The fix took about 2 minutes (although having to do this on 50 servers with 200 users would be a bitch) and the implementation of it was well-explained at the time the updates were installed.
32,768 possible keys? Holy shit, that's really bad. At least the update was released quickly after it was discovered, and on Ubuntu at least, it's made really clear that YOU NEED TO CHECK YOUR KEYS when it's installed.
posted by DecemberBoy at 10:38 AM on May 17, 2008
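For the curious, the idea behind vulnkey-style checkers is simple enough to sketch in a few lines of Python: hash each public key and look it up in a precomputed blacklist of known-weak keys. The fingerprint scheme and blacklist format below are invented for illustration; the real openssl-vulnkey and ssh-vulnkey tools use their own encodings and data files.

import base64
import hashlib
import sys

def fingerprint(pubkey_line: str) -> str:
    """MD5 over the base64 key blob of an OpenSSH public key line."""
    blob = pubkey_line.split()[1]          # format: "ssh-rsa AAAA... comment"
    return hashlib.md5(base64.b64decode(blob)).hexdigest()

def load_blacklist(path: str) -> set:
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

if __name__ == "__main__":
    weak = load_blacklist(sys.argv[1])     # hypothetical blacklist of weak fingerprints
    for line in open(sys.argv[2]):         # e.g. an authorized_keys-style file
        line = line.strip()
        if line and not line.startswith("#"):
            status = "WEAK" if fingerprint(line) in weak else "ok"
            print(status, line.split()[-1])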
DecemberBoy - public key auth with SSH is the most vulnerable - if there's a weak key in the authorized_key list (and when it's an average 2^15 keyspace, that's really frakking weak) then a blackhat has free access to the ssh (or vpn) server using that key until it's removed, even if the server itself is patched.
You did the right thing by revoking all keys and re-issuing with a patched version of ssl - the blacklist doesn't have all known keys on it yet, only the most common as there's still some variation between systems. Don't forget to re-do any server certificates for any other secure services that might have been created on those boxes.
posted by ArkhanJG at 10:56 AM on May 17, 2008
Skorgu: Saved by having generated all my SSH keys on Ubuntu installs older than this bug.
But you used them on newer versions, right? DSA keys need a good PRNG to be used safely, so unless you've only ever used RSA keys, you might like to consider now a good time to update them.
posted by Freaky at 11:23 AM on May 17, 2008
Thanks ArkhanJG; I should have phrased my question better though. I'm interested in the narrative of how somebody in a downstream role can be throwing out chunks of somebody else's code that he doesn't understand, in a package that's known to be of critical importance, and nobody notices; nobody with the appropriate expertise is consulted.
Going far beyond that, is this kind of thing normal? Is there institutionalized lack of communication in the Debian pipeline? How about other distros? Or Microsoft/Apple/other closed venues? What about the open-source stuff that Apple uses?
posted by nowonmai at 11:25 AM on May 17, 2008
Ah, sorry. These two blog posts cover the progress of the problem - you should be able to just skip over the programming detail.
As regards your last set of questions: yes. There is a lot of institutionalized lack of communication between all sorts of projects, not least distros and upstream, microsoft internally, apple, webkit and khtml. The fact that any projects manage to communicate effectively without flamewars, pissing matches or bunfights over authority is a source of amazement to me, given the number of their colleagues that fail to do so. It's no different than any other business or hobby communications really - how many big business departments do you know that get on really well with other departments? It's no different in software. Big business software development gets it doubly.
posted by ArkhanJG at 11:45 AM on May 17, 2008
nowonmai, Debian is well known for being a bit ideological in their approach to open source software. They are fanatical with backports, which I think fundamentally misunderstands the concept of version control, and they spend a lot of energy on meaningless things for reasons of philosophy. (Changing "Firefox" to "iceweasel" because it isn't "truly open source," for example.)
It's the Stallman of distros, without Stallman's understanding of and skill for designing technical systems.
As for Microsoft and other closed vendors? There are probably fifty bugs this bad lurking around in major systems that we will never know about. Look at the metafile exploit. God knows how long that was known to black hats. Hell, you could call Windows' terrible user/administrator "security" a catastrophic bug, and that hasn't been fixed at all. Ever. Nor will it be.
posted by sonic meat machine at 11:47 AM on May 17, 2008
Freaky: Yes, that's a part of this that I think isn't getting good enough exposure. Unlike RSA, a DSS/DSA implementation can leak key material during normal operation if it has a bad RNG. (I think this warrants all of half a paragraph in Schneier). That's a good enough reason for me to stop using DSA. :/
posted by hattifattener at 11:51 AM on May 17, 2008
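A toy demonstration of the DSA problem hattifattener mentions, using textbook DSA with tiny made-up parameters rather than real key sizes: if an attacker can guess the per-signature nonce k - easy when the PRNG has only about 2^15 states - a single signature gives up the private key. This is only a sketch of the math, not of OpenSSL's implementation.

# Textbook DSA with toy parameters (assumption: nothing here is a real key size).
p, q, g = 607, 101, 64       # q divides p - 1; g has order q mod p
x = 57                       # private key
y = pow(g, x, p)             # public key (not needed for the attack below)

def sign(h, k):
    r = pow(g, k, p) % q
    s = (pow(k, -1, q) * (h + x * r)) % q   # modular inverse needs Python 3.8+
    return r, s

h = 42                       # "hash" of the signed message
k = 77                       # per-signature nonce: supposed to be secret and random
r, s = sign(h, k)

# An attacker who learns or brute-forces k recovers x from one public signature:
x_recovered = ((s * k - h) * pow(r, -1, q)) % q
assert x_recovered == x
print("recovered private key:", x_recovered)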
* Don't add uninitialised data.
It strikes me as something akin to Buttle/Tuttle from Brazil.
posted by mrzarquon at 12:24 PM on May 17, 2008
(Changing "Firefox" to "iceweasel" because it isn't "truly open source," for example.)
It was an even dumber reason than that, if I remember right - Mozilla Corp issued guidelines for use of the NAME "Firefox", which Debian objected to, so they forked it just to change the name. And it's even a really moronic name that they changed it to - come on, "Iceweasel"? Not that "Firefox" is that great a name to begin with, but at least Mozilla has an excuse in that it was their third choice after they were legally forced to stop using both "Phoenix" and "Firebird".
Don't get me wrong, Debian is a great distribution that has provided the basis for other great distributions (Ubuntu and Knoppmyth being two that I personally run), and I used Debian stable on any Linux server I administered for quite a few years, but for chrissake, keep your retarded political ideology out of my software. I've never liked open source zealots. Use it because it's good and because the model works, not because it fits into your ignorant college-sophomore-Maoist worldview.
posted by DecemberBoy at 12:53 PM on May 17, 2008
"... As far as I can tell, you're substituting imagination of what you think SHOULD happen with the actual world as it is. Microsoft has no more liability to you than the OpenSSL guys do, even though you paid Microsoft a lot more money."
posted by Malor at 11:50 AM on May 17
As far as I can tell, you're substituting what you think Microsoft EULAs and terms mean, for what Microsoft actually does, which is to sell a lot of different products, in a lot of different jurisdictions, under a whole bunch of different terms, including warranty terms. Moreover, by going ballistic about Microsoft EULAs, you're overlooking actual practice for the rest of the software industry, which is germane to my original comment. Here, for example, is yet another Texas educational institution, with yet a different warranty term on Office 2007 Academic Edition media-less installs:
"LIMITED WARRANTY
A. LIMITED WARRANTY. If you follow the instructions, the software
will perform substantially as described in the Microsoft materials that
you receive in or with the software.
B. TERM OF WARRANTY; WARRANTY RECIPIENT; LENGTH OF ANY IMPLIED
WARRANTIES. THE LIMITED WARRANTY COVERS THE SOFTWARE FOR ONE YEAR AFTER
ACQUIRED BY THE FIRST USER. IF YOU RECEIVE SUPPLEMENTS, UPDATES, OR
REPLACEMENT SOFTWARE DURING THAT YEAR, THEY WILL BE COVERED FOR THE
REMAINDER OF THE WARRANTY OR 30 DAYS, WHICHEVER IS LONGER. If the first
user transfers the software, the remainder of the warranty will apply to
the recipient.
TO THE EXTENT PERMITTED BY LAW, ANY IMPLIED WARRANTIES, GUARANTEES OR
CONDITIONS LAST ONLY DURING THE TERM OF THE LIMITED WARRANTY. Some
states do not allow limitations on how long an implied warranty lasts, so
these limitations may not apply to you. They also might not apply to you
because some countries may not allow limitations on how long an implied
warranty, guarantee or condition lasts. ..."
So, that's a one year term on this product from Microsoft's own hand, which could be readily extended by Texas courts on implied warranty terms, if it ever came to that, as Texas is one of those states where implied warranties are pretty squishy. I keep finding Texas institutions for you, because they are easy to find, and because Texas has a history of implied warranty decisions for consumer products that have tended to favor consumers, although not always. There would be a lot to argue in any such proceeding, and the downsides for any software maker are such that they'd far prefer to avoid getting such a case in front of a judge.
In 2000, the Federal Trade Commission held a forum about Warranty Protection for High Tech Products and Services. Nothing came of it directly, but it sent a signal to the high tech industries that the FTC might consider joining consumer suits under Magnuson-Moss, and that, along with the opposition of a number of state Attorneys General, has been enough to chill UCITA.
And in any case, whatever warranty recovery users might get from Microsoft or any other commercial vendor, is more than any you can get from the OpenSSL stalwarts.
My point about NDAs isn't "hand wavy" as you put it, and it's not specific to Microsoft or any other vendor. Few bona fide security patch issues ever go to trial, because a commercial software maker that let things get that far out of hand wouldn't be in business long after discovery motions got going. In the money economy, it's in everybody's interest that patches are made for security flaws, released as needed, and applied, in an economic manner. Corporate customers generally get this work done under software maintenance agreements, but where the value of the work needed exceeds the value of such individual contracts to the software company, the fact that such work is done anyway, as it is, frequently, by many vendors, is indicative of their desire to make good on their obligations, beyond immediate profit horizons. That's because, in the money economy, intangibles like reputation and customer loyalty have real value.
As a former corporate IT manager for nearly 15 years, I've seen product upgrades provided at no charge, training classes provided at no charge, consulting hours thrown in at no charge, additional seats of software given at no charge, deals cut on upgrade hardware, and even money paid by software companies to clients, when things went badly wrong. The last is not an everyday result, but it has happened when the sums involved were less than the probable costs of litigation, and the results could be held back from public disclosure. You can take that at face value, or call me a liar, but my experience is hardly unique. These are common trade practices for most software companies, when faced with quality issues, including security problems.
But even price concessions and services don't always sway commercial customers, when issues of software quality are at stake, as this article covering the Justice Department 2004 hearings of the Oracle takeover of Peoplesoft illustrated:
"... In a similar situation with Lawson Software, PeopleSoft did not try to match Lawson's prices but won the account anyway, Wilmington said. That's because Lawson's products are viewed as less adequate by many potential customers, he added. Oracle has argued the Lawson is an up-and-coming competitor. ..."
So your "point" about nobody suing Microsoft successfully is kind of a brain dead red herring. Why would Microsoft go through a trial to lose on the facts, when it could provide remedies for its products at less cost than its legal expenses, and with much less damage to its market reputation than a policy of waiting to be sued for damages would elicit? Furthermore, as Microsoft moves into embedded products and markets, its EULA practices may not insulate it, and those EULA practices may be successfully challenged elsewhere. But, if somebody wants remuneration for software flaws, there is somebody to sue in court, or to pursue by other means, at the bottom of many commercial responsibility chains, with economic interests of their own. Not so with much OSS software; in many cases, from the authors of critical modules of functionality, there is nothing but the personal property enclosed in a few one bedroom apartments to be had.
But this is all kind of far from the discussion of the particular crypto problem described in the FPP, other than to point out that the OSS software model breaks on just such issues of culpability. OpenSSL says this one isn't really their bug; Debian maintainers say it is; Ubuntu maintainers roll a patch and push it out to the masses. But there is nobody to take to court, really, and the costs for any remediation are being borne by the people using the products, because there is literally no other value pool from which consideration might be extracted.
posted by paulsc at 1:26 PM on May 17, 2008 [1 favorite]
Here, for example, is yet another Texas educational institution, with yet a different warranty term on Office 2007 Academic Edition media-less installs
Out of curiosity, is there a reason both your examples are from education?
But there is nobody to take to court, really, and the costs for any remediation are being borne by the people using the products, because, there is literally no other value pool from which consideration might be extracted.
Why was consideration owed in the first place? They didn't pay anything for the software, which (I'm guessing) was explicitly provided on an as-is basis.
posted by ghost of a past number at 3:45 PM on May 17, 2008
I love the little digs of "in the money economy", as if open source wasn't making plenty of the stuff. Sheesh, it's pretty obvious you've made up your mind already.
All software warranties I'm aware of, Microsoft's included, expressly limit their potential damages to the price you paid for the software. They might think your business was worth enough to give you more than that, but they don't have to, and if you spent the ungodly sums to sue them, that's all you'd get -- a refund. That's it. There's nothing more there to get, because it was in the contract you accepted to use the software that their liability is limited.
You get exactly the same thing from an open source vendor; a full refund of the price you paid. If that price was zero, well, that's pretty much irrelevant. Whether it's two guys in garages or the richest software company in the world, you, the IT manager of XYZ corporation, get exactly the same thing: a refund.
In regards to the iceweasel thing:
Debian's not as brain-dead as it's being painted. Not at all. There was a good reason they did it; Mozilla says that if you call it Firefox, you can only release code they approve for it, period. Debian's security team wasn't willing to accept that limitation, because they need the ability to roll out security patches pronto. (Debian has an excellent security-issue response team.) By the terms of the Mozilla license, if they call it Firefox, they have to wait for approval before releasing patches, and they just didn't feel they could do that.
So, they repackage it as 'iceweasel', and put an alias to firefox in the packaging system. That way, they can patch any way they like, and you still install "firefox" -- which, as a virtual package, grabs iceweasel. Net impact to usability: zero. You still get, install, and upgrade firefox normally. Net impact to distribution: in exchange for having one tiny virtual package, they can avoid the trademark issues entirely, and support their users the way they wish.
Despite the bashing here, it's an elegant solution to a trademark issue. All sides involved get what they need: users get firefox, the security team can push patches, and Mozilla retains control over the Firefox web browser name and codebase.
posted by Malor at 3:53 PM on May 17, 2008 [3 favorites]
enn: I share your frustrations, and my only solution is to use ports/portage to avoid binary packages (and the politics) entirely.
I use Gentoo's portage system on the majority of the machines I administrate (~30, including one that 100 machines netboot from) — because everything is compiled on your machine, from the original upstream source tarball/repo, any patching happens on your machine too. The culture generally discourages it for everything but filesystem layout and ugo ownership stuff. Full documentation of the changes is on your machine too, as a side effect of the distribution model. It's gotten pretty conservative starting a few years ago, the ricer crowd is mostly subjugated, and bleeding-edge packages now go into overlays.
I generally avoid binary distributions, but I happily use OpenWRT a bunch, and have Debian on one machine too slow for compiling everything (or mounting the filesystem elsewhere) to make sense every time I want to install something. I butt heads with both apt and the package maintainers constantly — I ended up having to maintain all my haskell stuff manually, as the stuff in even the unstable repos was both old and broken.
Binary distributions are even worse with fewer users — the now-dead 'Fink' apt-for-OSX system was terrible — the maintainers would put their personal config files in as the defaults in many packages.
posted by blasdelf at 4:20 PM on May 17, 2008
Oh, one more thing:
Hell, you could call Windows' terrible user/administrator "security" a catastrophic bug, and that hasn't been fixed at all. Ever. Nor will it be.
Windows has an excellent security model, which is mostly ignored by users, because the overall culture, as people upgraded from Win98, was of having total control over the computer at all times. Both users and programs make stupid assumptions about what they can and can't do. To make things easier, most people just run as an administrator, bypassing the entire security layer. But that's not Microsoft's fault.
It also doesn't mean the security model is bad, just that it's not being used. In Vista, it's finally being enforced to some degree with UAC, which is causing a lot of squawking. But it's a needed step. The fact that it's even causing people pain is a damning indictment, not of Microsoft, but of the software development community as a whole, which has almost entirely ignored security.
As a long time administrator of both Linux and Windows, including some quite large installations, I think the Windows security model is actually stronger. It's much finer-grained, and very, very thorough. It's just bypassed so often that it looks weaker.
Unix/Linux security, in comparison, is extremely 'chunky': you have root with all powers at all times, and then user/group/other, read/write/execute, and various system-level permissions (i.e., which memory you can see, what system information you can access, what resources you can consume... like that). This isn't terribly granular, and describing complex access permissions in that environment can be very painful. Things that are easy to 'say' in Windows security can be quite difficult to express in standard Unix notation. There are ACL systems in Unix, but none are as standardized as good old NTFS.
Overall, I much prefer using Unix-style systems whenever possible, but it's important to see and understand the merits of the competition. I don't think it's terribly fair to bash Windows security; it's hardly ever used, so it's not like it can protect people.
posted by Malor at 4:20 PM on May 17, 2008
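For a concrete sense of how 'chunky' the classic model is, this small Python sketch decodes everything traditional Unix mode bits can say about a file; anything finer-grained needs POSIX ACLs or other extensions. The file path is just an example on a Unix machine.

import os
import stat

st = os.stat("/etc/passwd")                    # any file will do
mode = st.st_mode

print("owner uid/gid:", st.st_uid, st.st_gid)
print("symbolic mode:", stat.filemode(mode))   # e.g. "-rw-r--r--"

# The entire classic vocabulary: three subjects times three verbs
# (plus a few special bits not shown here).
for who, shift in (("owner", 6), ("group", 3), ("other", 0)):
    bits = (mode >> shift) & 0o7
    perms = "".join(f for f, b in (("r", 4), ("w", 2), ("x", 1)) if bits & b)
    print(who + ":", perms or "-")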
Holy shit. Unbelievably massive dicking-around with absolutely critical code. This kind of carelessness really raises questions of what other holes will have crept in.
I myself am lazy with security advisories, and I probably wouldn't have heard of this for a long time if it wasn't for Mefi, so thank you, finite, for posting this, because my keys were weak. Fortunately I've only ever used or permitted one-time passwords* for inbound ssh connections, so my data should be safe (not that I have much to protect).
There are some details in this thread that I'd like verified.
ArkhanJG: Yup. Any password used to log in via ssh to or from a weak key'd machine. Any traffic sent over an https connection, with a certificate that was generated from a weak key machine (including a commercially signed one). Or ldaps, or sftp, or even imaps and smtps. All that traffic, and any passwords in there are potentially compromised.
How accurate is this? I don't know what kinds of key exchange protocol ssh hosts generally use by default, and I'm not sure if I'm reading it right, but RFC 4432, describing RSA SSH key exchange, says that it's specifically the server who generates the public key that's used for setting up the session. Assuming that this is the method used, doesn't that mean that only connections to a weak ssh host are vulnerable?
Skorgu: Saved by having generated all my SSH keys on Ubuntu installs older than this bug.
Freaky: But you used them on newer versions, right? DSA keys need a good PRNG to be used safely, so unless you've only ever used RSA keys, you might like to consider now a good time to update them.
Isn't RSA similarly vulnerable? The abovementioned RFC 4432 speaks of the shared secret being encrypted by a 'transient RSA public key', which I assume is generated per-session, which would leave your sessions up for grabs even if the now-screwed server's hostkey was generated before May 2006.
* Thusly my keylogger paranoia turns out to have brought unexpected benefits!
posted by Anything at 5:45 PM on May 17, 2008
I read in one of the (many) security alerts last week that ssh'ing from a weak client machine (i.e. one with a crap PRNG) may also have produced a spoofable session - I haven't found it again yet.
That said, reading the material below - if a strong pre-existing RSA public key is used on a weak server *and* a weak client is used, it may be vulnerable to a session replay attack, thus compromising a public key that wouldn't otherwise be vulnerable.
I don't know what SSH'ing from a weak client using a strong key on a secure host would imply - whether the low-entropy challenge material created by the client is enough to compromise the session or the key - as I'm afraid my understanding isn't that good! Perhaps I should be hired to work on debian patches.
as far as RSA goes...
I have put my securely generated public SSH user key onto a Debian system. Should I replace it?
This depends. If your key is an RSA key, it is not compromised simply by putting the public key onto a server and authenticating against it. In the SSH 2.0 protocol, as described in RFCs 4252 and 4253, part of the token being signed as a challenge by the user is the “session identifier”, which is a hash from the key exchange. This effectively prevents replay attacks of authentication processes done using a non-vulnerable SSH key, because the random material used as a challenge is not only controlled by the vulnerable SSH host, but also by the non-vulnerable client. Thus, the data your SSH key has to sign as a challenge is not vulnerable to the weak PRNG of the SSH server, and thus cannot compromise your key.
This is however not true for DSA keys. DSA has a weakness when used in the Diffie-Hellman key exchange process, rendering it basically ineffective. If the attacker gets hold of the random number used by the Debian SSH server in the key exchange process, this can be used to calculate the private DSA key from the public key with a complexity of 2^16, being 65,536.
posted by ArkhanJG at 12:06 AM on May 18, 2008
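A schematic sketch of the point in that FAQ quote - not the actual SSH wire format - showing why a strong RSA user key wasn't exposed merely by authenticating against a weak server: the signed challenge is bound to a session identifier that both sides' randomness feeds into, so the client's good PRNG keeps every challenge unpredictable. The function and variable names here are invented for illustration.

import hashlib
import os

def session_identifier(client_random: bytes, server_random: bytes,
                       shared_secret: bytes) -> bytes:
    # In real SSH this is the exchange hash from the key exchange; hashing
    # the ingredients together is enough to show what it depends on.
    return hashlib.sha256(client_random + server_random + shared_secret).digest()

client_random = os.urandom(32)           # healthy client PRNG
server_random = b"\x00" * 32             # pretend the server's PRNG is useless
shared_secret = b"negotiated-secret"     # placeholder for the key-exchange result

sid = session_identifier(client_random, server_random, shared_secret)
# The client's RSA key signs data that includes sid, so even a completely
# predictable server contribution can't make the challenge replayable.
# (DSA's nonce problem, described in the quote above, is a separate issue.)
print(sid.hex())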
Malor:
By the terms of the Mozilla license, if they call it Firefox, they have to wait for approval before releasing patches, and they just didn't feel they could do that.
You're omitting the issue of non-Free artwork, which was a key factor in Debian dropping the 'Firefox' name. That's, IMO, what people are referring to when they criticise Debian over the naming issue, not the downstream security patches you refer to.
I believe that Debian's own artwork is itself non-Free, which adds fuel to the fire, though personally I have a sneaking admiration for idealists like Debian, RMS etc. despite being a pragmatist myself.
posted by Busy Old Fool at 1:43 AM on May 18, 2008 [1 favorite]
LOLCATS / I haz no liability for consequential, indirect, or incidental damages.
posted by ryanrs at 5:21 AM on May 18, 2008
Busy Old Fool: the non-free artwork was a side issue that could easily have been worked around one way or another. The real sticking point was the Mozilla Corp's insistence on right of prior inspection of any patches before release. The original disagreement was indeed over the artwork as I recall things, but once the Mozilla management got involved they started asserting their rights over all the other stuff as well & things went downhill from there.
The ironic part is that at the same time, Ubuntu was releasing firefox packages which were more heavily patched than the Debian ones without any complaint from the Mozilla people.
posted by pharm at 6:09 AM on May 18, 2008
I see this issue is much bigger than I originally thought. I'm not very security-minded, but when I read the question and saw the bit about un-initialized memory, I realized that would be a useful source of entropy. How did the questioner realize that he was dealing with code that would use unusual techniques to get entropy, and not realize that un-initialized memory could be very important?
I would like to thank finite for the post and for the delightful tags "math owie".
posted by Monochrome at 6:15 AM on May 18, 2008
pharm:
My impression from reading the official bug report discussion I linked to above is that patching was a major issue, but at least theoretically solvable. For example, other distributions were submitting patches for approval and one branch had been kept alive this way after Mozilla dropped support for it. Part of the problem was Debian's culture of 'patch, don't upgrade' for older releases coming up against Mozilla's culture which favours the opposite.
The artwork issue on the other hand was clearly intractable given the policies of the two bodies and was seen as the real sticking point by Debian folks (1,2). Admittedly there's a bunch of history which precedes the discussion I'm linking to, but it is afaik the point at which matters came to a head.
Getting back to the important issues, I rabidly agree with your point above that this is a systemic failure and am concerned that the majority of the comment I've seen around the web focuses on blame assigning. Have you seen any discussions of realistic ways to stop this reoccurring? If a security flaw of this magnitude isn't a wake-up call to change procedures, then nothing is.
(The irony of FireFox on Ubuntu crashing repeatedly on me while I was writing this comment is not lost on me.)
posted by Busy Old Fool at 10:13 AM on May 18, 2008
bof: Yes, the lack of discussion of the systemic issues concerns me too. I'm hoping that there's more going on behind the scenes within the Debian community at least.
I think one of the reasons that the firefox branding issue wasn't resolved in a more positive fashion was that things came to a head shortly before the Etch release & there just wasn't time to thrash out a better solution.
posted by pharm at 12:09 PM on May 18, 2008
From what I can see, there were two critical errors here. The first was when the enthusiastic Debian bug-fixer asked the public development mailing list for OpenSSL whether it was safe to remove the two lines in question, and the people on the list said it should be okay. The second was committing the patch before warning everyone about the problem; this should have been much better coordinated.
The second problem's easy to fix, but the first is harder. As a distro code maintainer, how can you be sure you've gotten a correct, authoritative answer about code changes in a package you maintain? This guy really, really shouldn't have been messing around with crypto code without guru-level understanding, but he was too ignorant to know that. He thought he was asking the maintainers whether it was safe to make these changes. As it turns out, he wasn't.... but, really, what could he have done differently? He thought he noticed a bug, he approached the official mailing list about it, he was told his proposed change should be okay, and went ahead and implemented it. How can open source projects prevent spurious answers of this sort?
I suspect the answer may be in signing emails.... if you don't get back a signed email from one of the official creators of the package, as defined by what public keys they distribute with the package, then you keep digging until you do. It strikes me that this would be particularly critical with crypto packages, which have some of the most painful and difficult-to-understand code in all of computing.
But can all the security- and crypto-related package creators be talked into this kind of setup? That's rather questionable.
posted by Malor at 1:40 PM on May 18, 2008
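A minimal sketch of the "insist on a signed answer" idea, assuming GnuPG is installed and that only keys you actually trust (e.g. the ones shipped with the upstream package) are in the local keyring. The file names are hypothetical; gpg exits nonzero when a detached signature fails to verify.

import subprocess

def signed_by_trusted_key(message_path: str, signature_path: str) -> bool:
    """Return True only if gpg verifies the detached signature."""
    result = subprocess.run(["gpg", "--verify", signature_path, message_path],
                            capture_output=True)
    return result.returncode == 0

# Hypothetical file names:
if signed_by_trusted_key("maintainer-reply.txt", "maintainer-reply.txt.asc"):
    print("Signed by a key in the local keyring; worth trusting.")
else:
    print("No valid signature; keep digging before touching crypto code.")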
Oh, and one final thought: if you're not a mathematician, don't fuck with crypto code.
posted by Malor at 3:02 PM on May 18, 2008
But you used them on newer versions, right? DSA keys need a good PRNG to be used safely, so unless you've only ever used RSA keys, you might like to consider now a good time to update them.
DSA keys and all SSH version 1 keys are prohibited by policy. There are some other, odd weaknesses in SSH's DSA implementation that made me ban 'em a few years ago.
As an aside, Debian's holier-than-thou culture is biting them in the ass and I'm not shedding a tear. Management by flamewar turns out not to be such a great idea. We're right in the sweet spot where Ubuntu is just the right mix of new packages and sane defaults to be perfect for us. If I had lots more machines to worry about I'd go with Gentoo.
Malor: s/if you're not a mathematician, //g
posted by Skorgu at 6:03 PM on May 18, 2008
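For anyone wondering why DSA in particular is so sensitive to a bad PRNG: if the per-signature nonce k is ever known or guessable, a single signature gives away the private key. A toy worked example, with deliberately tiny made-up parameters (nothing like real DSA sizes):

    # Toy illustration of DSA nonce recovery -- tiny made-up numbers, not a
    # real parameter set.  With a known nonce k, the private key x falls out
    # of one signature:  x = (s*k - H(m)) * r^{-1} mod q.
    p, q, g = 23, 11, 4        # q divides p - 1; g has order q mod p
    x = 7                      # private key (what the attacker wants)
    y = pow(g, x, p)           # public key

    def sign(h, k):
        """Textbook DSA signature on hash value h using nonce k."""
        r = pow(g, k, p) % q
        s = (pow(k, -1, q) * (h + x * r)) % q   # pow(k, -1, q): Python 3.8+
        return r, s

    def recover_private_key(h, r, s, k):
        """Given one signature and its nonce, solve for the private key."""
        return ((s * k - h) * pow(r, -1, q)) % q

    h, k = 5, 3                # message hash (mod q) and the "random" nonce
    r, s = sign(h, k)
    assert recover_private_key(h, r, s, k) == x
    print("recovered private key:", recover_private_key(h, r, s, k))

If the nonce comes from a PRNG with almost no entropy, as it did on affected systems, an attacker can simply try the few possible streams, which is why the advice was to regenerate any DSA key that had ever been used on an affected machine, not just keys generated there.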
Monochrome: The uninitialized memory is not actually an important source of entropy (though it can't hurt). It triggered a false alarm in valgrind, so the Debian guy asked on the openssl-dev list, and an OpenSSL maintainer said it was OK to remove. Uninitialized memory might contain useful entropy, but you can't count on it having any at all, so that part is very much opportunistic and optional; removing it is not a problem. The problem is that the line being removed is the one through which all entropy is added to the pool, not just the uninitialized buffer(s), and neither the Debian guy nor the OpenSSL guy who answered him on the mailing list appeared to notice this.
posted by hattifattener at 6:32 PM on May 18, 2008
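To make hattifattener's distinction concrete, here is a much-simplified model in Python — a sketch of the shape of the problem, not the actual md_rand.c code. Mixing in an uninitialized buffer is the optional, "can't hurt" part; the general mixing call is the only path by which any entropy reaches the pool:

    import hashlib
    import os

    class ToyPool:
        """Very loose stand-in for OpenSSL's entropy pool (not the real thing)."""
        def __init__(self):
            self.state = b"\x00" * 20

        def mix(self, data):
            # The essential path: every source of entropy goes through here.
            # This is (roughly) the role of the line the patch removed.
            self.state = hashlib.sha1(self.state + data).digest()

        def rand_bytes(self, n):
            out = b""
            counter = 0
            while len(out) < n:
                out += hashlib.sha1(self.state + counter.to_bytes(4, "big")).digest()
                counter += 1
            return out[:n]

    pool = ToyPool()
    pool.mix(b"\x00" * 16)               # stand-in for the uninitialized buffer:
                                         # opportunistic, fine to drop (the valgrind complaint)
    pool.mix(os.urandom(32))             # real entropy -- but only if mix() still runs;
                                         # the patch removed the call that did this
    pool.mix(str(os.getpid()).encode())  # the PID feed survived the patch, which is why
                                         # roughly 32k streams remained instead of one
    print(pool.rand_bytes(8).hex())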
paulsc: But, if somebody wants remuneration for software flaws, there is somebody to sue in court, or to pursue by other means, at the bottom of many commercial responsibility chains, with economic interests of their own. Not so with much OSS software; in many cases, from the authors of critical modules of functionality, there is nothing but the personal property enclosed in a few one bedroom apartments to be had.
This is the argument that gets made time and again, and there's some truth in it. But the truth isn't that Microsoft's (or anyone else's) EULA really means anything, or that there are sealed legal cases out there (give me someone who will speak off the record about one and I'll believe you); it's that, as a traditional company with a vendor/client relationship, Microsoft and others will do what they need to do at the sales end in order to keep a client.
That's really all there is to it. Microsoft will offer you products and services that cost them money at a steep discount, maybe even at a loss, in order to keep your business. That's not software integrity or a guarantee that their software won't trash your data the next week, it's customer service and marketing. It doesn't erase the downtime you had or the data you lost. In the end, only reliable products that are robust and maintainable are going to give you that integrity, and whether it's Microsoft, IBM, Oracle, or the maintainers of a Linux distribution, it doesn't matter from the software side. Any of those could be stable and maintainable, and there's no guarantee that an open source developer, who I'd argue is more likely at a company that sells for-profit services than in his bedroom, will address your needs and issues any slower than Microsoft. You're talking about corporate accountability and client relations, not software.
posted by mikeh at 2:07 PM on May 19, 2008
skorgu: nothing new to add, just wanted to let you know that your vi line made me laugh. :)
posted by Malor at 9:46 PM on May 19, 2008
“The problem is that the line being removed is the one that [adds all entropy to the pool]”
So perhaps a standard "Do Not Touch" comment is needed. Maybe
@dangerous
?
posted by Monochrome at 10:01 PM on May 20, 2008
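Whether or not such a marker would help in practice (see ryanrs's point just below), a mechanical version of Monochrome's suggestion is easy to imagine: tag the critical lines and refuse any patch that deletes a tagged line without an explicit upstream ack. The "@dangerous" tag and the workflow here are hypothetical; this is just a sketch:

    import sys

    MARKER = "@dangerous"   # hypothetical "do not touch" tag in a source comment

    def deleted_marked_lines(diff_text):
        """Yield source lines that a unified diff removes even though they
        carry the marker."""
        for line in diff_text.splitlines():
            # Removals start with a single '-' (but not the '---' file header).
            if line.startswith("-") and not line.startswith("---") and MARKER in line:
                yield line[1:].strip()

    if __name__ == "__main__":
        flagged = list(deleted_marked_lines(sys.stdin.read()))
        if flagged:
            print("Patch deletes lines tagged %s; get an upstream ack first:" % MARKER)
            for line in flagged:
                print("  " + line)
            sys.exit(1)

Piping a proposed patch through something like this would make the marker enforceable rather than advisory, though deciding which lines deserve the tag is the real problem.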
No. That line isn't significantly more important than lots of other stuff in OpenSSL. That whole project is super important for security (and everyone knows it).
posted by ryanrs at 2:55 AM on May 23, 2008