Another basic vulnerability found in Linux
February 27, 2016 3:34 AM
One of the Internet's core building blocks has a vulnerability that leaves hundreds or thousands of apps and hardware devices vulnerable to attacks that can take complete control over them. There is a patch available for Linux-based devices that do domain-name lookups, but it will take time to patch them all.
posted by sourcequench at 4:16 AM on February 27, 2016 [17 favorites]
jesus, we just got finished last week patching this very bug. I thought there'd been another one found already!
(pours large whisky to steady the nerves)
posted by nonspecialist at 4:18 AM on February 27, 2016 [23 favorites]
Patch this bug with extreme prejudice. You’ll have to reboot everything, even if it doesn’t get worse...
This CVE is easily the most difficult to scope bug I’ve ever worked on, despite it being in a domain I am intimately familiar with. The trivial defenses against cache traversal are easily bypassable; the obvious attacks that would generate cache traversal are trivially defeated. What we are left with is a morass of maybe’s, with the consequences being remarkably dire.
-- Daniel Kaminsky
So it's remotely exploitable, and the severity is raised by how fundamental it is. The unsettling part is that the experts haven't been able to prove just how general the exploit is, e.g. whether it works in the standard configuration where all DNS traffic passes through an ISP cache and the attacker isn't sitting in the middle.
However that last qualifier is less reassuring than you might hope. Most local networks aren't protected against interception, apart from public WiFi configured with "wireless isolation". So any computer on the same network could exploit it.
As the Ars promoted comment says, the other nice qualifier is that it only applies to Linux and glibc specifically. Your standard home router is built with Linux, but uses a different libc.
posted by sourcejedi at 4:22 AM on February 27, 2016 [6 favorites]
Yeah, if you haven't already patched this bug, you should turn on automatic patching because you are demonstrably not paying enough attention to do it manually.
posted by ryanrs at 4:23 AM on February 27, 2016 [23 favorites]
At first I was sanguine, and then I read Kaminsky's piece and got nervous. :7(
posted by wenestvedt at 5:16 AM on February 27, 2016
Honestly, at this point I'm just thankful Android uses Bionic instead of glibc.
posted by Talez at 5:21 AM on February 27, 2016 [1 favorite]
Dan Kaminsky's writeup is great, as sourcejedi already linked. Everyone go read that.
posted by iffthen at 5:31 AM on February 27, 2016 [1 favorite]
Was wondering if Metafilter was going to notice this. :)
Dan Kaminsky here. Feel free to ask anything about this bug. It's a nasty one.
(Linux kernel vs. Linux ecosystem -- they're both "Linux" for the purpose of exploit scoping. And Android had libstagefright on the MMS surface, so...)
posted by effugas at 5:34 AM on February 27, 2016 [102 favorites]
So, if I just let Synaptic make sure all my packages are up-to-date on my Linux Mint desktop, am I basically fine?
posted by biogeo at 5:53 AM on February 27, 2016
Holy cow, it really is you. Cool!!
So what's your feeling about how this will affect stuff derived from Linux, and always-ignored devices running some embedded OS?
posted by wenestvedt at 6:11 AM on February 27, 2016 [2 favorites]
Mighty relieved that this is just the glibc thing I've been patching all week, not a new one. Had to reboot >100 physical storage nodes because there's no way to be certain that simply upgrading the package will propagate the fix fully.
posted by Urtylug at 6:26 AM on February 27, 2016 [1 favorite]
It's astonishing to me that this bug is so old and in such a fundamental piece of code. Naïvely we'd hope that getaddrinfo() of all things would have had some attention over the years, some code review and some fuzz testing. But either it didn't, or our tools for getting code correct aren't good enough.
It really is lucky that embedded Linux systems tend not to use glibc. I'm sure uClibc couldn't possibly have any bugs like this. Right?
posted by Nelson at 6:28 AM on February 27, 2016 [1 favorite]
So you can patch your gnu/linux desktop via update manager, but still be vulnerable because something somewhere between you and the server you are connecting to is using an unpatched library? Or is it once you patch your local system, you are safe? I don't get it.
posted by jabah at 6:32 AM on February 27, 2016 [1 favorite]
Finally, the year of Windows on desktop is here.
posted by Ghostride The Whip at 6:40 AM on February 27, 2016 [15 favorites]
This has been your periodic reminder that the phrase "given enough eyes, all bugs are shallow" is a religious dogma, not a fact, and has been experimentally demonstrated to be false.
posted by mhoye at 6:57 AM on February 27, 2016 [22 favorites]
jesus, we just got finished last week patching this very bug. I thought there'd been another one found already!
Don't breathe easy just yet!
posted by fragmede at 7:03 AM on February 27, 2016
jabah: if you patch your local system, it's safe. Both the hole and the patch only apply to clients.
Dan went on to discuss possible ways to protect un-patched clients. It would be nice if there's a simple way for ISP DNS caches to prevent old embedded systems etc. from getting 0wned, simply from querying an attacker-controlled domain. So far we don't have anything beyond "we don't actually know how to exploit it through a DNS cache".
posted by sourcejedi at 7:05 AM on February 27, 2016 [2 favorites]
Damn just seeing that Synology NAS DSM uses glibc. I wonder if it gets patched in the regular updates.
posted by Mei's lost sandal at 7:56 AM on February 27, 2016 [1 favorite]
> there's no way to be certain that simply upgrading the package will propagate the fix fully.
There's a way to tell the state of things though. Programs started after the new version is installed will get the new version. For running programs, the last column of /proc/$pid/maps will tell you if a process is using a given library, and if that file has been replaced by yum/apt-get then the entry changes to say (deleted) (based on inode). It will keep saying (deleted) even though a newer version of the file has been put in place.
If you manage to restart all processes (including pid 1 systemd with "systemctl daemon-reexec") then you don't need to reboot.
It's a bit tricky to get right though, and rebooting is an easy enough way to be completely sure.
If not having to reboot is worth your while, you can pay for a service that lets you apply this and other security updates without rebooting.
Disclaimer: I work on said product that lets you upgrade without rebooting. The existence of the product feels relevant but I'll avoid the sales pitch by not naming it unsolicited.
posted by fragmede at 8:11 AM on February 27, 2016 [8 favorites]
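For anyone who wants to check their own machines, here is a rough sketch of the test fragmede describes. It assumes a Linux /proc layout and simply scans each process's maps file for a libc mapping marked "(deleted)"; the program and its matching logic are illustrative rather than any official tooling, and it needs to run as root to see other users' processes.

    /* Sketch: list processes still mapping a deleted libc, i.e. processes
     * started before the upgraded library landed on disk. They need a
     * restart (or the box needs a reboot) to actually pick up the fix. */
    #include <ctype.h>
    #include <dirent.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        DIR *proc = opendir("/proc");
        if (!proc) {
            perror("/proc");
            return 1;
        }

        struct dirent *de;
        while ((de = readdir(proc)) != NULL) {
            if (!isdigit((unsigned char)de->d_name[0]))
                continue;                    /* only numeric PID directories */

            char path[300], line[1024];
            snprintf(path, sizeof(path), "/proc/%s/maps", de->d_name);
            FILE *maps = fopen(path, "r");
            if (!maps)
                continue;                    /* process exited, or not ours */

            while (fgets(line, sizeof(line), maps)) {
                /* The pathname is the last column; "(deleted)" means the
                 * on-disk file was replaced after the mapping was made. */
                if (strstr(line, "libc") && strstr(line, "(deleted)")) {
                    printf("PID %s still maps a deleted libc\n", de->d_name);
                    break;
                }
            }
            fclose(maps);
        }
        closedir(proc);
        return 0;
    }

Roughly speaking: if this prints nothing after the glibc update, every running process has already been restarted onto the new library; anything it does list still needs a restart or a reboot.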
So when can I expect a patch to be available for my router, car, thermostat, fridge, and voting machine?
posted by antonymous at 8:16 AM on February 27, 2016 [16 favorites]
I have a related question... Back when Linux was mainly a server and desktop OS, the common wisdom I perceived was that it was more secure in part because of how few people ran it versus Windows as a personal system, making Windows systems a more appealing target by scale and Linux systems a safe option. For whatever value that was ever true, is it changing as more and more juicy personal and business details move to Linux-based systems like phones, smart appliances, and web-ready home entertainment?
posted by codacorolla at 8:43 AM on February 27, 2016
In part, yes, but the other part is how much faith you put in the 'many eyes make all bugs shallow' idea. I personally agree, but YMMV. Also, said axiom doesn't work for closed binary blobs, more and more of which are finding themselves working side-by-side with open code; thus, the candle of Linux being a 'safe system' is being burned at both ends.
posted by eclectist at 9:13 AM on February 27, 2016 [3 favorites]
Don't worry! Your voting machine is probably running an unpatched version of Windows XP .
posted by Mitrovarr at 9:21 AM on February 27, 2016 [20 favorites]
This has been your periodic reminder that the phrase "given enough eyes, all bugs are shallow" is a religious dogma, not a fact, and has been experimentally demonstrated to be false.
The main problem with this statement is that the availability of source code does not imply the availability of qualified people with the time to read or audit the source code.
Also, go C!
posted by Slothrup at 9:24 AM on February 27, 2016 [5 favorites]
Yeah, if you haven't already patched this bug, you should turn on automatic patching because you are demonstrably not paying enough attention to do it manually.
Absolutely. There's very little justification these days for not accepting automatic patches from whatever official channel exists for your operating system distribution. The danger of accepting a patch that breaks something is infinitesimal compared with the danger of leaving critical vulnerabilities like this unpatched.
posted by tobascodagama at 9:29 AM on February 27, 2016 [1 favorite]
There's very little justification these days for not accepting automatic patches from whatever official channel exists
The FBI is currently working hard to change that.
posted by antonymous at 10:06 AM on February 27, 2016 [8 favorites]
the phrase "given enough eyes, all bugs are shallow" is a religious dogma, not a fact
This week's reminder that Viega's The Myth of Open Source Security is almost 16 years old. Not much has changed since he wrote that.
posted by effbot at 10:10 AM on February 27, 2016 [3 favorites]
Every time a bug like this is found, there's a chorus of voices saying that we need to rewrite our OSs and core libs in Rust or another language with stronger safety guarantees than C. I'm really interested to see if those voices become strong enough to actually make it happen.
posted by scose at 10:20 AM on February 27, 2016 [1 favorite]
The Ubuntu server I use for idling on IRC got pwned the day after this vulnerability was announced. I had updated right after I heard of it, but didn't bother to reboot. I never quite figured out how they got into the system. Was an exploit in the wild by then? The payload was a pretty run of the mill DDoS bot (which was commonly spread with shellshock exploits when that was a thing).
posted by zsazsa at 10:20 AM on February 27, 2016
Was an exploit in the wild by then?
I don't think there's an exploit in the wild now, unless the attacker can sit between your machine and your DNS server.
posted by ryanrs at 10:31 AM on February 27, 2016 [2 favorites]
Every time a bug like this is found, there's a chorus of voices saying that we need to rewrite our OSs and core libs in Rust or another language with stronger safety guarantees than C.
Rust is nice, but languages are mostly irrelevant. We need to stop building huge monolithic piles of code in which every single line of code is treated with the same level of trust and respect.
posted by effbot at 10:49 AM on February 27, 2016 [5 favorites]
Users of Mint who are using the default Update Manager settings got the patch on the 17th. <------- this is me
posted by Too-Ticky at 11:40 AM on February 27, 2016 [1 favorite]
This week's reminder that Viega's The Myth of Open Source Security is almost 16 years old. Not much has changed since he wrote that.
Yep. If a developer says "With many eyes, all bugs are shallow" then AUDIT THE FUCK out of their code. If they believe nonsense like that then they believe their code is secure and never actually check it.
posted by eriko at 11:52 AM on February 27, 2016 [4 favorites]
Rust is nice, but languages are mostly irrelevant. We need to stop building huge monolithic piles of code in which every single line of code is treated with the same level of trust and respect.
Rust modules without "unsafe" blocks can automatically be trusted not to contain memory safety bugs like this one. (Only to trigger them in buggy modules with unsafe blocks :-). You need a language that can express that before you can implement it. And you need an efficient language before you can implement it at a competitive cost. Memory safety was one of the key goals (out of about four?) Rust was built with, specifically in order to create a web browser. (But not the JS engine, heh. Nor the drawing library, I think.)
Granted "Servo" will be the first browser written from scratch with an eye to sandboxing modules. But Chrome has been continuously developed (in C++) with a focus on security and sandboxing for half a decade. It's important and good work, but it doesn't seem to be a panacea.
Free software core libs and OS kernels can't get a whole lot smaller without affecting performance. (See: competitive cost, this time for hardware.) I agree the free software world can't keep writing large systems in low-level C and claiming superhuman abilities at avoiding security bugs. Microsoft Research had a project with a whole-system approach to, well, system software, and language was a key part of it. They scrapped the project, and kept working on language improvements. Their draft "C++ Core Guidelines" almost look like a subset of Rust.
Unless the hardware gets better too :-). There's the currently vaporware Mill CPU, which reduces overheads on safe communication. The safe "portal" calls do most of what kernel calls can do, but between arbitrary pieces of software. Isolating device drivers efficiently (the microkernel approach) is a specific goal, though I haven't seen details about IOMMU for devices using DMA.
posted by sourcejedi at 12:57 PM on February 27, 2016 [5 favorites]
Probably too late in the thread for anyone to respond, but [how] does this affect linux people who run their own DNS servers (using DJBDNS, which doesn't support IPv6)?
I tried the test code (which just calls getaddrinfo("foo.bar.google.com")) and it doesn't crash on any of my machines, patched or unpatched. But they all get DNS from a DJBDNS box (which runs OpenWRT).
posted by spacewrench at 1:15 PM on February 27, 2016
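For reference, a minimal sketch of the kind of lookup the published test code exercises (this is only the client half; the real proof of concept also runs a malicious DNS server that feeds back the oversized responses needed to trigger CVE-2015-7547, which is why the call returns harmlessly against an ordinary resolver like the DJBDNS box above):

    /* Sketch of an AF_UNSPEC lookup: with glibc this goes through
     * getaddrinfo()'s parallel A/AAAA query path, the code affected by
     * CVE-2015-7547. The hostname is just the one used in the public test. */
    #include <netdb.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    int main(int argc, char **argv)
    {
        const char *name = (argc > 1) ? argv[1] : "foo.bar.google.com";
        struct addrinfo hints, *res = NULL, *p;

        memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_UNSPEC;     /* ask for both IPv4 and IPv6 */
        hints.ai_socktype = SOCK_STREAM;

        int err = getaddrinfo(name, NULL, &hints, &res);
        if (err != 0) {
            fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(err));
            return EXIT_FAILURE;
        }
        for (p = res; p != NULL; p = p->ai_next)
            printf("resolved %s (address family %d)\n", name, p->ai_family);
        freeaddrinfo(res);
        return EXIT_SUCCESS;
    }

Not crashing against your own resolver says nothing either way about whether the library is patched; the overflow only happens when the answering server misbehaves in a very particular way.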
Yeah, if you're going to propose starting over again from scratch, it shouldn't be "rewriting", it should be building a system that is not based on the antiquated, problematic Unix model. Incorporating all of the advances in operating systems and systems assurance research since 1970-whatever when Unix first hit the scene. Or even take a look at things that are by now ancient history but still have useful lessons, like Burroughs systems or Oberon.
posted by indubitable at 1:17 PM on February 27, 2016 [4 favorites]
It could be worse. While I was installing the update, my internet connection dropped. The updater did some interesting things when the connection came back, but seemed to terminate normally. Alas, when I rebooted the next day, the OS had been mildly (but unrecoverably) trashed.
Moral of story: before applying OS updates, make sure your 'good stuff' is backed up. Else the process can be riskier than the odds of malice. Moral 2: Windows 10 doesn't offer that choice.
posted by Twang at 1:52 PM on February 27, 2016
Free software core libs and OS kernels can't get a whole lot smaller without affecting performance.
The way software is going? Sure it can. What it will affect most is development performance. The more you limit core libraries and kernels, the harder it becomes to write code, and I'd be willing to bet every agile shop in the world would state "You can't use that library, it's not been vetted as secure" as a roadblock and get some management type to yell at you until you give in.
But, on the other hand, that may need to happen. The many vectors through which this attack may work aren't the fault of most of the developers who wrote the code. Their only mistake was trusting glibc to not be insecure in resolving DNS.
On the gripping hand, DJB refused to give that trust, looked at what routines he needed in libc and stdio, and then wrote them securely in his own library. The end result? While still under his care, there were either 0 or 1 remote exploits between qmail and djbdns.*
So, maybe it is all these developers' fault. Maybe they should have, at the very least, said "I'm using these functions from glibc, maybe I should take a good look at them before I publish code using them?" On the fourth hand, maybe they did -- but they looked at libc on some other platform?
Thus, in terms of writing secure software, I think DJB may well be right. You can't trust the libraries and the kernel. You have to either write your code to be resistant to flaws in those, or write your own version that you can verify is solid. But then you're going to spend a few dozen sprints just making sure it's safe to write code.
Did I mention this security shit is hard?
* Someone claims that he found a hole and DJB refuses to pay him the reward offered for finding a hole. DJB states that the compromise was a result of linking against a different library than his, and thus, it wasn't a qmail hole, it was a hole in that library. DJB has consistently paid rewards otherwise, and so, not having actually done the work to confirm or deny, I tend to believe DJB on this one.
Besides. Given that in the same era, Sendmail and BIND were looking at remote compromises on a weekly basis? 1 hole between two packages is a triumph.
posted by eriko at 2:33 PM on February 27, 2016 [4 favorites]
The whole point of the doomday bug is lost... if you keep it a secret! Why didn't you tell the world?!
posted by indubitable at 3:23 PM on February 27, 2016 [2 favorites]
Going over the thread, here's a few comments:
wenestvedt-- I think the Internet of Things is the first time we're seeing a default presumption of insecurity and risk in otherwise innovative products. It helps that they are indeed as insecure as expected. A bright point is there's just a lot of platform work going on with IoT, based on the fact that Linux _can_ scale down so low (rather than wild embedded OS's), with the platforms definitely including centralized aggregation of data and increasingly centralized update and management. It's not work being done for security purposes but the pieces are there.
Nelson-- This is a really weird bug, genuinely smeared across a decent amount of code and requiring an unusual codepath to execute. It's very much the kind of thing that flummoxes automated and human code auditors.
fragmede-- No, this is the exact time it's OK to mention a product. Directly responsive and helpful, please do so.
antonymous-- You can expect increasing pressure for patching platforms to exist.
codacorolla-- Linux has basically been the default OS for the Internet as we know it.
tobascodagama-- I'm a little worried about how deploying code to the field is _becoming_ a test framework.
scose-- There's a Rust OS in development. My feeling on the language right now is they got the concept right (safe language with fast enough performance for bare metal) and the syntax possibly fatally wrong. Programming is an exercise in cognitive science, not math, and people seem to forget that. Still, there are *very* few languages you can write OS's in so it's good to have another.
zsazsa-- That Ubuntu server running anything else but ircd?
effbot-- Indeed, thus my comments on microsandboxing.
spacewrench-- The proof of concept code isn't even trying to traverse caches like dnscache. That does not mean such traversal is impossible.
Twang-- See my response to tobascodagama.
eriko-- "Developer performance". EXACTLY. How secure is code nobody can write?
Burn_IT-- For every bug there are thirty variants. It's not like that guy had a bad day. You've got to hunt them all down.
posted by effugas at 3:59 PM on February 27, 2016 [15 favorites]
So, if I just let Synaptic make sure all my packages are up-to-date on my Linux Mint desktop, am I basically fine?
As long as you didn't download the malware-ridden ISOs of Linux Mint from when their site was hacked on Feb 20th.
posted by sebastienbailard at 4:40 PM on February 27, 2016 [3 favorites]
Yep. If a developer says "With many eyes, all bugs are shallow" then AUDIT THE FUCK out of their code.
Oh absolutely. When I say I agree with the axiom, I mean in sentiment. Nothing is particularly sacred about a bunch of people looking at open code - but it's easier for a bunch of people to look at code if it's open. I was contrasting auditing code between open source and closed source, hence the following clause about binary blobs. We could get into the question of the quality of code audit between people getting paid to do it and those who don't, but that's getting into the weeds a bit for this discussion.
posted by eclectist at 5:06 PM on February 27, 2016 [4 favorites]
effugas: It was running inspircd, postfix for SMTP, dovecot for IMAP and POP, and an nginx web server with several not-recently-patched PHP apps like Drupal. So a pretty big surface area. I'm guessing the PHP apps were the juiciest target.
posted by zsazsa at 5:42 PM on February 27, 2016
zsazsa,
Yeah. Isolate or bust. Drupal is indeed the most likely target -- the web surface isn't just large, it's also much easier to develop exploits against. (Also, to develop anything against, thus why it's large.)
posted by effugas at 5:44 PM on February 27, 2016 [3 favorites]
I think the DJB era of minimalist mostly-tight-as-fuck services maintained by abrasive security-godkings is gone, and we just have to deal with it. (exim and postfix and unbound/nsd aren't as watertight as qmail and djbdns based on recent history, but they've got a better track record than fucking sendmail and BIND.)
We step back and look at where the secteam announcements are coming from. Google has enough fucking money that it is systematically auditing every single bit of the core GNU/Linux build, because it can't afford not to. And as my old mucker Danny O'Brien recently wrote, it's about time that some small perky state underwrote some kind of core code infrastructure work, because seriously, throwing high-ranking civil servant money at established developers is not going to drain their budgets.
posted by holgate at 8:27 PM on February 27, 2016 [6 favorites]
Holgate,
Man, it's always the same names, ain't it? Danny O' covering me in NTK was literally some of the first press I ever received, and among the most encouraging. He's actually right about the underwriting. I support that.
posted by effugas at 10:24 PM on February 27, 2016 [2 favorites]
Linux is a kernel. The bug is in glibc. (But yes, patch your stuff.)
RMS would like to remind us that another basic vulnerability has been found in GNU/Linux...
posted by atoxyl at 11:18 PM on February 27, 2016 [2 favorites]
Dan - great work on the writeup, one of the best vuln analyses I've read.
A couple questions if you're still around:
In galaxy #2 in your post, with glibc at the centre, what bits of software do the green/blue black holes or connect points represent?
Related to that, I think there's been a fair bit of awareness over the last decade or so that (besides the kernel itself) glibc is a big single point of failure for modern Linux systems. But I personally can't recall a vulnerability where a) so many moving parts have to align, not within one address space, but rather in multiple parts of a network b) the vuln affects a SPOF like that. Can you think of any similar bug in the last 20 years?
posted by iffthen at 11:19 PM on February 27, 2016
iffthen, much appreciated.
The source for Galaxy #2 is here. Not sure what they represent, only that the graph is sourced via package dependencies.
The on-path alignment is minimal, it really is just force a retry and dump out too many stack bytes for the stack god. The *off-path* alignment is by contrast as you describe. There's not much that's quite like this. Some of the TLS bugs? But then those are only TLS endpoints. I'll think about it.
Most of glibc isn't in a position where there's potential for security boundary escalation. This is one of the few places where it is.
posted by effugas at 12:52 AM on February 28, 2016
biogeo: So, if I just let Synaptic make sure all my packages are up-to-date on my Linux Mint desktop, am I basically fine?
Yes, but just letting the Update Manager do its stuff at the default settings has you covered, too.
posted by Too-Ticky at 1:40 AM on February 28, 2016 [1 favorite]
seriously, throwing high-ranking civil servant money at established developers is not going to drain their budgets
Where are you going to find established developers willing to take such a significant pay cut?
posted by Mars Saxman at 12:45 PM on February 28, 2016 [1 favorite]
> Can you think of any similar bug in the last 20 years?
I'll assume you're not counting GHOST/CVE-2015-0235 since Dan also has a writeup about why CVE-2015-7547 is worse than GHOST/CVE-2015-0235.
XSS attacks as a whole could be considered similar since an attacker provides data and a neutral server hosts the data that gets sent to the target. They're even more similar when you take into account Javascript sandbox escapes. Using an image hosting site to host a LibTIFF exploit for iOS 1.1 might also be considered similar. CVE-2010-4344 against Exim might have been able to get through a relay to exploit an internal mail server if an attacker knew about the target IT department's configuration. Outside of DNS though, there aren't many open protocols left that publicly involve multiple servers, and other than SMTP, none that come to my mind are very popular.
> fragmede-- No, this is the exact time it's OK to mention a product. Directly responsive and helpful, please do so.
I work on Ksplice for Oracle, and we added live-patching for userland late last year. Users who installed Ksplice for Userspace were protected against CVE-2015-7547 (and will be protected against tomorrow's OpenSSL update). No rebooting, no downtime, no hassle.</sales>
posted by fragmede at 1:29 PM on February 28, 2016 [2 favorites]
> Linux kernel vs. Linux ecosystem -- they're both "Linux" for the purpose of
> exploit scoping.
Yes, and this is neither a Linux kernel bug nor a Linux ecosystem bug; it's a glibc bug. If you say the scope is "Linuxy stuff" you'll have both false positives and false negatives.
False positives, because many Linux systems don't use glibc. (Embedded systems like routers and set-top boxes and toasters mostly use uClibc or dietlibc or musl or something. Android uses bionic.) I won't say "most" because I haven't seen reliable statistics, but we're talking about a lot of the "Linux ecosystem" (in terms of plain count of installed systems in the wild) with no glibc in sight.
False negatives, because glibc can be built (and applications linked with it) on all sorts of things that aren't running a Linux kernel.
You could call it a "Linux desktop and server" problem and be a lot closer to right, but why not be precise?
posted by sourcequench at 3:20 PM on February 28, 2016 [1 favorite]
sourcequench--
Because everything Linux-y needs to be _evaluated_. The caveats don't show up until after you've started the patch war room.
posted by effugas at 6:49 PM on February 28, 2016
fragmede--
"@dakami As in they rexec the software with same state? or just restarting daemons, etc? Wasn't the latter always possible?"
posted by effugas at 7:22 PM on February 28, 2016
"@dakami As in they rexec the software with same state? or just restarting daemons, etc? Wasn't the latter always possible?"
posted by effugas at 7:22 PM on February 28, 2016
sourcequench,
I should clarify this issue actually came up when I was writing the blog post. How much should I say this was a Linux problem? It's why I specifically call out Android as safe and specifically say there are other libc's that might not be. Also why I say we have trouble detecting on the network _this_ glibc. Actually having clarity on what's affected is a huge deal that I really do want us getting better at. Right now, it's more difficult than it should be to be precise.
posted by effugas at 10:23 PM on February 28, 2016 [2 favorites]
It's not rexec, nor do programs need to be recompiled to be supported by Ksplice for Userspace. It's also not restarting daemons because, they're right, that's nothing new. While a daemon is restarting, your service is down until it comes up, nor can you be sure a daemon will actually come back up (e.g. a config file has changed). Not everything is a restartable daemon either, and we patch all running processes in order to be able to claim a system is no longer vulnerable to CVE-2015-7547.
If you're familiar with how Ksplice works to live-patch the kernel without rebooting, it's like that, just on userspace.
posted by fragmede at 2:42 AM on February 29, 2016 [1 favorite]
Where are you going to find established developers willing to take such a significant pay cut?
That's a fairly US west-coast-centric line, and I think a slightly outdated one. Without doubt, tech behemoths have lots of money to throw at their security hires, but one of the takeaways from Heartbleed was how the core OpenSSL project was working mostly on a part-time or spare-time basis with fuck-all money. It's not surprising that individual Silicon Valley security engineers may get paid more for auditing $OPEN_SOURCE_PROJECT than the projects themselves receive in funding, but it is a bit perverse.
More broadly: lots of developers took a pay cut to work for the UK GDS -- in London, where rent ain't cheap -- because they wanted to do something that made a difference as part of a pretty remarkable team. That's not an exact parallel to working on code infrastructure with state financial support, but given how a lot of that core infrastructure is already developed and maintained as a de facto labour of love, I can't see how an offer to underwrite it would fall on stony ground.
posted by holgate at 8:53 AM on February 29, 2016 [3 favorites]
The Core Infrastructure Initiative started up after Heartbleed, as I understand it -- Dan is one of its security experts -- and it's "providing funding for fundamental projects like OpenSSL, OpenSSH, NTPd and others" to make internet infrastructure more robust ("projects being able to add team members, improve coding best practices, set up predictable release schedules and roadmaps and perform audits to help future proof code"). They have additional programs around education, tooling, a census, & more. They are getting funded by some big companies. Dan, I'd be curious to hear your thoughts on how/whether CII plays a part in reducing vulnerabilities like this in the future -- I presume your involvement means you think it's a good idea :) but more specifically I'm wondering whether CII will/would help prevent vulns like the glibc one we're discussing here.
posted by brainwane at 10:58 AM on March 1, 2016 [2 favorites]
Today I'm hearing about:
"CacheBleed: A Timing Attack on OpenSSL Constant Time RSA" (as of today an OpenSSL update is available; "Cloud servers that commonly run mutually untrusting workloads concurrently are a more realistic attack scenario")
The followup announcement to fragmede's comment earlier: DROWN ("Decrypting RSA with Obsolete and Weakened eNcryption") ("merely supporting SSLv2 is a threat to modern servers and clients. It allows an attacker to decrypt modern TLS connections between up-to-date clients and servers by sending probes to a server that supports SSLv2 and uses the same private key." "Our measurements indicate 33% of all HTTPS servers are vulnerable to the attack." Site includes instructions for securing your server.)
posted by brainwane at 11:12 AM on March 1, 2016 [1 favorite]
"CacheBleed: A Timing Attack on OpenSSL Constant Time RSA" (as of today an OpenSSL update is available; "Cloud servers that commonly run mutually untrusting workloads concurrently are a more realistic attack scenario")
The followup announcement to fragmede's comment earlier: DROWN ("Decrypting RSA with Obsolete and Weakened eNcryption") ("merely supporting SSLv2 is a threat to modern servers and clients. It allows an attacker to decrypt modern TLS connections between up-to-date clients and servers by sending probes to a server that supports SSLv2 and uses the same private key." "Our measurements indicate 33% of all HTTPS servers are vulnerable to the attack." Site includes instructions for securing your server.)
posted by brainwane at 11:12 AM on March 1, 2016 [1 favorite]
brainwane--
CII is where it begins but we're several generations away from where we should be. We'll get there, and I'm pushing along with a number of others. Journey of a thousand miles and all that.
CacheBleed -- multitenancy is a lie. Heresy, I know, but a repeatedly proven thing is repeatedly proven.
I'm cautiously approving of DROWN as a practical threat. It's combining three things that can absolutely happen (1000 viewed sessions from the client with no corruption, 40000 connections to a server with SSLv2, and a 2^50 workload *implemented* on EC2) to crack 2048 bit. Not everything is so careful so I'm happy to see this be.
posted by effugas at 9:24 PM on March 1, 2016
And coming in from a less abstract perspective: DROWN is viable because updating SSLProtocol and SSLCipherSuite on Apache is mostly a piece of piss, but nobody wants to touch a fucking production mail server. So you'll have systems where the working cipher suite is just whatever openssl ciphers has to offer, and that's not good enough if you've been using the default binary OpenSSL packages on your server, or even the default options when building OpenSSL from source.
posted by holgate at 9:50 PM on March 1, 2016
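holgate's point above is about config files (Apache's SSLProtocol and SSLCipherSuite directives, and their equivalents on mail servers), but services that link OpenSSL directly can apply the same DROWN mitigation in code. A rough, hypothetical sketch (the helper name, file arguments and error handling are illustrative; the key point is refusing SSLv2, and SSLv3 while you're at it, at the SSL_CTX level):

    /* Sketch for 2016-era OpenSSL (1.0.x): build a server context that will
     * negotiate TLS but never the legacy SSLv2/SSLv3 protocols. */
    #include <openssl/err.h>
    #include <openssl/ssl.h>
    #include <stdio.h>

    SSL_CTX *make_server_ctx(const char *cert_file, const char *key_file)
    {
        SSL_library_init();
        SSL_load_error_strings();

        /* SSLv23_server_method() negotiates the best mutually supported
         * protocol; the options below then forbid the legacy ones. */
        SSL_CTX *ctx = SSL_CTX_new(SSLv23_server_method());
        if (!ctx)
            return NULL;

        SSL_CTX_set_options(ctx, SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3);

        if (SSL_CTX_use_certificate_chain_file(ctx, cert_file) != 1 ||
            SSL_CTX_use_PrivateKey_file(ctx, key_file, SSL_FILETYPE_PEM) != 1) {
            ERR_print_errors_fp(stderr);
            SSL_CTX_free(ctx);
            return NULL;
        }
        return ctx;
    }

Since DROWN only needs one reachable service that still speaks SSLv2 with the same private key, every daemon sharing the certificate has to be cleaned up, not just the web-facing one.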
Mars Saxman: I'm a civil servant who gets paid to write code, much of it open-source, and I partially agree with you. I think you're right to mention this as a major problem since it's certainly true that e.g. the federal pay scale tops out well below the private sector, and not only when compared with Silicon Valley.
That said, the GSA's 18F and the USDS project have been successful in attracting many extremely talented people based on patriotism and a sense of mission – do you want to work on another likely-to-fail iPhone app, or make it easier for veterans to get their benefits? That brings me to the real problem: there aren't many positions available for programmers because the model has been non-technical managers overseeing contractors, with the usual poor results and cost overruns which that entails.
I don't think we're anywhere near saturation on the number of skilled people who would take a safe job at what would be in most other fields a decent wage, especially when the hook is that your full-time job is to make the Internet safer and every line of code you write is by statute open-source so there's no arguing with your boss about releasing it, either. Since the federal government is widely distributed, you could even take the sting out of wages somewhat by locating teams in cities which don't have absurdly competitive real estate markets - nerds grow up in many places & not everyone likes SF - or expanding the availability of full-time telework.
posted by adamsc at 6:36 AM on March 2, 2016 [3 favorites]
… since I was too lazy to enter this on my phone for the previous post, if anyone is curious about joining the United States Digital Service:
https://www.whitehouse.gov/digital/united-states-digital-service
posted by adamsc at 6:43 AM on March 2, 2016 [2 favorites]
It's more than just pay, though. Last I heard, the US Digital Service was drug testing applicants; software development is in high enough demand that the indignity of peeing into a cup for a job that pays far less is not a good start to a recruitment drive.
posted by fragmede at 2:10 AM on March 3, 2016 [2 favorites]
This thread has been archived and is closed to new comments