How to get your University banned in 1 easy step
April 22, 2021 1:20 PM   Subscribe

Linux kernel maintainer Greg Kroah-Hartman reacted to the discovery that researchers at the University of Minnesota had been submitting bogus patches by banning all code submissions from UMN. The University's CS&E has since put a halt to the research.
Qiushi Wu, a doctoral student in computer science and engineering at the American college, and Kangjie Lu, assistant professor at the school, penned a paper titled, "On the Feasibility of Stealthily Introducing Vulnerabilities in Open-Source Software via Hypocrite Commits."
The researchers and their IRB apparently came to the conclusion that intentionally submitting bad code as patches to the Linux kernel did not constitute human experimentation. UMN's statement seems to suggest that the University has reconsidered that position.
posted by axiom (102 comments total) 34 users marked this as a favorite
 
Personal opinion: IRBs and their equivalents often lack representation/experience from computer scientists. This is not necessarily their fault -- the compliance burdens they're meant to meet stem from research ethics issues in the bio/med and social sciences, at least as the U.S. construes it. However, I firmly believe that scope needs to grow. Previous institutions I've been in have encountered similar issues that didn't make the headlines; think things like web scraping or uses of Mechanical Turk. And those were at institutions that are considered tech savvy!
posted by redct at 1:29 PM on April 22, 2021 [16 favorites]


Needs the plonk tag.
posted by Space Coyote at 1:30 PM on April 22, 2021 [10 favorites]


Wow, that's terrible. Even though the researchers notified the kernel maintainers before the patches were merged, it still could've put intentionally insecure code on the maintainers' machines. Admittedly I imagine these days that's all done with VMs, but still.

At the very least it was a waste of the maintainers' time in service of proving the obvious point that subtly insecure patches from well-credentialed developers can be accepted by an open source project. Since that happens unintentionally all the time, there was zero need for this "experiment".
posted by jedicus at 1:30 PM on April 22, 2021 [12 favorites]


I think they've done the right thing here; it's not the kernel people's job to have to know who the bad actors at UMN are. Banning the lot to get their attention, and then watching to see if they police themselves before letting them back in, seems like a sensible way to move forward.
posted by mbo at 1:35 PM on April 22, 2021 [10 favorites]


Mind-boggling that the IRB thought this was OK (or at least irrelevant to them).

Some are opposed to the ban:
In a Twitter post, Filippo Valsorda, a cryptography and software engineer at Google, pointed to Kroah-Hartman's remarks about rejecting future contributions from University of Minnesota email addresses and argued that making trust decisions on the basis of email domains rather than confirmed code correctness is a more noteworthy problem.

"Possibly unpopular opinion, but I feel like 'only merge things after verifying they are valid' should maybe be the default policy of the most used piece of software in the world," he wrote.
Strong disagree with the conclusion here. If someone has been proven to be untrustworthy, you do not accept their contributions. Of course you should *also* verify validity when possible, but I've never been exposed to a quality system in engineering where the honesty or competence of engineers was rendered irrelevant because of good testing. "You can't test quality into the system" was an oft-repeated mantra. The first step towards a good product is having good process to create the product, including well trained, competent contributors.
posted by mark k at 1:38 PM on April 22, 2021 [29 favorites]


Banning is totally appropriate.

The whole "validate correct code and don't blanket ban addresses" complete ignores the social engineering aspect of all this. That attitude itself is a security risk, I'd argue, a category error. Security should be in depth not just a single point of failure.
posted by bonehead at 1:43 PM on April 22, 2021 [18 favorites]


At the very least it was a waste of the maintainers' time in service of proving the obvious point that subtly insecure patches from well-credentialed developers can be accepted by an open source project. Since that happens unintentionally all the time, there was zero need for this "experiment".
This doesn't follow. This experiment shows that an approval system which claims to generate more secure systems using many eyes remains vulnerable at many other points in the supply chain, especially when the unintentional becomes intentional and some of those eyes are vulnerable to subtle social engineering attacks.

Quantifying how hard this is and what level of threat you can sneak through undetected makes it possible to estimate risk and maybe apply resources better to fulfill that security promise.

By this logic, penetration testing is unnecessary because everybody knows systems are vulnerable and vulnerabilities get exploited.
posted by abulafa at 1:43 PM on April 22, 2021 [4 favorites]


"Only merge things after verifying they are valid" is a pretty concerning opinion from a software engineer who works in the security space, IMHO. They, of all people, should know that it is impossible for human review to infallibly verify that code is valid.
posted by primethyme at 1:44 PM on April 22, 2021 [15 favorites]


The other great bit of this is how the researcher cynically brought up the problem the Linux kernel developer community has had with its reputation for being unfriendly, as if he was just a newbie trying to learn the ropes. Score one for social justice.
posted by Space Coyote at 1:47 PM on April 22, 2021 [5 favorites]


By this logic, penetration testing is unnecessary because everybody knows systems are vulnerable and vulnerabilities get exploited.

The goal of penetration testing is to establish whether a specific system can be compromised, so that vulnerabilities in that specific system can be addressed. The goal is concrete and meaningful.

The goal of this project was to test whether insecure code from a well-credentialed developer could make it into the kernel. But since the insecure code was made so on purpose and since the researchers offer no real plan for avoiding this problem*, it's a pointless exercise. It offers no particular lesson learned, either in fixing the specific problem (since the vulnerability was intentional) or in the bigger picture of how the kernel or other open source projects are maintained.

* The "proposed mitigations" section of the paper offers such gems as
We believe that an effective and immediate action would be to update the code of conduct of OSS, such as adding a term like “by submitting the patch, I agree to not intend to introduce bugs.”
The claim that this would be effective is, frankly, insulting to the reader.
posted by jedicus at 1:57 PM on April 22, 2021 [46 favorites]


Greg Kroah-Hartman is kind of an asshole, but he's right about this.
posted by Joakim Ziegler at 2:05 PM on April 22, 2021 [2 favorites]


To be clear: the rest of their mitigations are either a) directed at intentional vulnerabilities but pointless or b) equally useful for finding unintentional vulnerabilities and so just amount to "do better" and would not require this experiment.

For another example of (a), they suggest verifying the identities of committers but acknowledge that "previous works [14, 42] show that checking the identity over online social networks is a challenging problem" and offer no further ideas on how to resolve those problems.

"Have you considered developing and implementing a widely available and acceptable online identity verification system that is not susceptible to compromise by the very sort of state actors who might try to sneak a vulnerability into the Linux kernel?" is not a helpful suggestion.
posted by jedicus at 2:06 PM on April 22, 2021 [9 favorites]


It's like they thought they were doing a software version of the Sokal Affair, but are so bad at their own field of study that they failed to grasp the difference between malicious code in the Linux kernel and a prank article published in Social Text.
posted by snuffleupagus at 2:09 PM on April 22, 2021 [6 favorites]


a giant educational institution that relies on good faith effort from students tries to abuse that same kind of good faith effort on a different, larger institution. nelson muntz haha gif occurs.
posted by gorestainedrunes at 2:13 PM on April 22, 2021 [3 favorites]


Hi, everybody! We're researchers from your local university, and we've cut the brake lines on a few of the cars in your parking lot. It turns out that this was surprisingly easy! Our IRB agrees that this is fine, as long as we remembered to tell you before you all left work for the day. As a mitigation, we recommend that you take twenty to thirty minutes each, in shifts, to stand guard over your cars. Frankly, you should already have been doing this anyway. Have a good day, and don't forget to read our publication in the next issue of Automotive Disaster Journal.
posted by phooky at 2:16 PM on April 22, 2021 [107 favorites]


"Have you considered developing and implementing a widely available and acceptable online identity verification system that is not susceptible to compromise by the very sort of state actors who might try to sneak a vulnerability into the Linux kernel?" is not a helpful suggestion.
When the alternative is "do no verification whatsoever, so that far less than a motivated state actor can achieve either a supply chain compromise or a TLD-level denial of service by false flagging - or both," I'm not sure that's true?

Not every penetration test has actionable recommendations either, because many boil down to "introduce measures in depth that may slightly inconvenience your execs, who will then ignore them and create higher-value targets." Ask me how I know.
posted by abulafa at 2:18 PM on April 22, 2021 [4 favorites]


I think as an experiment in "how far do we have to intentionally degrade the trustworthiness of our institution's contributions before we get banned," this has been a roaring success.
posted by darkstar at 2:21 PM on April 22, 2021 [22 favorites]


OMG phooky, I'm dyin'!
posted by darkstar at 2:24 PM on April 22, 2021


As an infrequent user of open source software, I have often wondered how exactly I can trust that the software is reliable and not bearing malicious code. Given some of the comments here, my faith in it being harmless has decreased.

In general, I assume that a malign programmer targets software with a large and probably general distribution, so as to spread their infection effectively. The open source stuff I use is much more specific and niche. My other assumption is that somebody else is looking at any submitted code to make sure it's safe. But comments here suggest that that is unlikely. Having been a software engineer in a previous lifetime, I am aware of the complexity of code and the inability to test it 100%. But still...

So here we have a group trying to test a key element of a large body of code to see if they can sneak malign code through whatever system is in place to prevent it. A noble cause, but a truly flawed approach. It is like human experimentation without the safeguards. So, to get to my question: why should I ever trust open source software?
posted by njohnson23 at 2:32 PM on April 22, 2021 [4 favorites]


Why do you think it's notably harder to introduce vulnerabilities to closed-source software?
posted by sagc at 2:35 PM on April 22, 2021 [20 favorites]


I don't know, yesterday I immediately agreed with their ban, but reading the quotes, the maintainer comes across as kind of power tripping and angry, and failing to regulate his own emotions about a complex issue.

The issue is that if an unknown exploit is worth proving, could the researchers in this case have done it non-invasively? If the IRB here was incompetent then they were incompetent, but assuming they weren't, and the only way to convince people was to actually exercise the exploit in some capacity, then it's not implausible to imagine a totally different outcome: the Linux people responding by saying, wow, thank you for showing us this, this gives us something to seriously think about, etc. So the split falls on whether people would've been persuaded of the exploit by being told about it (like, just ask them to read the paper, right?), versus being shown it (with appropriate restrictions). Consent by an organization or community being "experimented on" is not actually the final ethical rule, nor are arguments like "wasting our time," which is what the maintainer focused on. There is an ethical calculus.
posted by polymodus at 2:40 PM on April 22, 2021 [2 favorites]


How is the need for identity verification relevant to this incident? The researchers weren’t pretending to be someone else. If there had been an identity verification system in place, their code would still have been accepted.
posted by bq at 2:43 PM on April 22, 2021 [4 favorites]



"We take this situation extremely seriously"


Ugh
posted by bq at 2:43 PM on April 22, 2021 [4 favorites]


One problem here is, I think the UMN IRB is probably right: this just doesn't fall under their purview, and I think it would be a bad idea to expect an IRB to be capable of reviewing this type of project. That's just not their area of expertise, which is more focused on evaluating individual harm. If this is a valid area of academic research, it needs a more specialized regulatory body overseeing it. I would think that this falls more under the bailiwick of engineering ethics and should be handled there.
posted by biogeo at 2:44 PM on April 22, 2021 [6 favorites]


Somebody should submit faulty complaints to U of Minnesota's IRB as "a test" of their IRB's complaint handling process.
posted by srboisvert at 2:46 PM on April 22, 2021 [24 favorites]


Both sides can be full of dicks, being dicks to each other! It would be a little laugh-and-point, except for the whole "this is actually important software that billions of people rely on" thing. They could have chosen a less critical piece of software, yes? Something widely used but not on any kind of critical path. They would still have been fuckers for doing it, but at least less "cut your brake lines" and more "remove a taillight" kind of thing. But yeah, Linux kernel maintainers, not much pity for them either. Just dicks all around!
posted by seanmpuckett at 2:48 PM on April 22, 2021 [10 favorites]


We're researchers from your local university, and we've cut the brake lines on a few of the cars in your parking lot.

Thinking about how this wouldn't be okay even if there had been a rash of brake vandalism all over the country and they had been trying for months to get people to implement more parking lot security.
posted by straight at 2:50 PM on April 22, 2021 [5 favorites]


To be more clear, the UMN IRB didn't say "Yes, this is fine, go ahead." It said "This is not human subjects research, we have nothing to say on the matter." If I go to an IRB and say "I want to graft extra butts onto baboons," they are going to tell me that doesn't involve them. They'll probably tell me to go talk to the IACUC, whose job is to tell me that no I'm not allowed to do that, but really it's my job to know who to talk to, or go to my university's Office of Research Integrity (or whatever my institution calls it) to find out whose job it is to tell me no if I don't know. In this case maybe there is no body whose job it is to regulate this type of work, and that's the problem. Or at least it becomes a problem when the individual researchers themselves lack the ethics and common sense to realize this is a bad idea.
posted by biogeo at 3:00 PM on April 22, 2021 [17 favorites]


I don't know, yesterday I immediately agreed with their ban, but reading the quotes, the maintainer comes across as kind of power tripping and angry, and failing to regulate his own emotions about a complex issue.

The Linux kernel community isn't exactly reputed to be the most friendly place on the internet, but their visibility and accessibility also make them regularly subjected to experiments and researchers. If every year or so a hapless grad student wastes your time (or worse), you probably have cause to be angry with researchers.
posted by pwnguin at 3:03 PM on April 22, 2021 [7 favorites]


apt-get install babboonass
posted by Huffy Puffy at 3:05 PM on April 22, 2021 [14 favorites]


babboonass++
posted by Huffy Puffy at 3:07 PM on April 22, 2021 [7 favorites]


This experiment did not teach us anything new. Malicious code has been introduced into open source software on many occasions in the past. It is also well established that human code review is not 100% effective. The idea that "many eyes" eliminates all bugs is also false: remember the OpenSSL debacle!

An experiment to prove that malicious actors can introduce false information into human activities is as useless as an experiment to prove that ice is cold or that fire is hot. We've known this since the invention of language.
posted by monotreme at 3:13 PM on April 22, 2021 [6 favorites]


Permission denied.
posted by biogeo at 3:14 PM on April 22, 2021 [2 favorites]


To be more clear, the UMN IRB didn't say "Yes, this is fine, go ahead." It said "This is not human subjects research, we have nothing to say on the matter."

For what it's worth, I've seen it claimed by at least one person involved in this situation that the researchers only sought IRB exemption after conducting their experiment and submitting the first draft of their paper.

I've also seen some speculation that they may have misled the IRB about what they were doing, because it doesn't seem to fit into the categories that are normally considered to be exempt from human-subjects review.

This whole thing reminds me a little bit of last year's Hacktoberfest debacle, in which a large company semi-accidentally DDoSed a bunch of open-source projects by giving out T-shirts in exchange for pull requests (most of which turned out to be predictably low quality). At least in this case, the contributions were limited to the Linux kernel, so the blast radius was a lot smaller. Still, one would hope academic research would have a higher ethical standard than for-profit tech companies, and it's disappointing that the oversight processes seem to have failed here.
posted by teraflop at 3:18 PM on April 22, 2021 [9 favorites]


Hi, everybody! We're researchers from your local university, and we've cut the brake lines on a few of the cars in your parking lot.

Hi, everybody again! It turns out that brake lines often look a lot like gas lines and well, who knew? Anyway, the fire department is just about done in the parking lot and you can all go back to what's left of your cars.

At least you won't have to fix your brakes now.
posted by pyramid termite at 3:21 PM on April 22, 2021 [10 favorites]


I thought lying to people in the name of science was, absolutely and unequivocally, experimenting on them. Don't most IRBs cover things like fake profiles on dating sites or those fake resume studies? Or am I wrong about that too?
posted by mark k at 3:28 PM on April 22, 2021 [8 favorites]


Oh yeah, and the research was funded in part by grants from the NSF, which means the principal investigator would have been required to certify at the time of application that their proposal had been either approved or declared exempt by an IRB. So it seems to me that there's a chance this might graduate from academic misconduct to actual fraud.
posted by teraflop at 3:29 PM on April 22, 2021 [9 favorites]


"The idea that "many eyes" eliminates all bugs is also false: remember the Open SSL debacle!"

I haven't heard anyone but you claiming that '"many eyes" eliminates all bugs' is a thing. The so-called "Linus' Law" is, "Given enough eyeballs, all bugs are shallow."
Like most of the bullshit esr says, it sounds "smart" without really meaning anything definitive. How many eyeballs is "enough", and what does it mean for a bug to be "shallow"?

"Open Source" isn't perfectly secure. Nothing is. It can be less insecure, other things being equal.
posted by Rev. Irreverent Revenant at 3:42 PM on April 22, 2021 [4 favorites]


I don't know, maybe there's a good argument that this is in fact human subjects research on the basis of deceptive behavior, but it's not clear that they actually lied, rather than just submitting deliberately bad code. It seems to me more like penetration testing using mild social engineering on a critical infrastructure system, which should be considered to be unethical for academic researchers for a whole variety of reasons completely independently of whether they lied to anyone or caused individual harm to any members of the Linux kernel team. My feeling is that even if there was some way of conducting this "study" with informed consent from the participants, it would be a violation of professional engineering ethics.
posted by biogeo at 3:45 PM on April 22, 2021


To me there is no doubt this is human subjects research, and claims that it is not represent some sort of researcher / IRB / institutional failure:

From the Code (45 CFR 46.102)

(e)(1) Human subject means a living individual about whom an investigator (whether professional or student) conducting research:

(i) Obtains information or biospecimens through intervention or interaction with the individual, and uses, studies, or analyzes the information or biospecimens; or
(ii) Obtains, uses, studies, analyzes, or generates identifiable private information or identifiable biospecimens.
(2) Intervention includes both physical procedures by which information or biospecimens are gathered (e.g., venipuncture) and manipulations of the subject or the subject’s environment that are performed for research purposes.

(3) Interaction includes communication or interpersonal contact between investigator and subject.


(emphasis mine)
posted by soylent00FF00 at 3:47 PM on April 22, 2021 [9 favorites]


Having read the whole thread, I’m not sure I’d call it power-tripping. Dude’s pissed, but these researchers seem to have given him more than enough reason to be pretty fscking angry.

It seems to have gone something like:

G: Uh oh. Somebody just published an article about how they’ve been sending us bad code.
A: Yep that was us!
G: Please revert your changes
A: No
G: This is wildly unethical
A: Actually, I’m testing a static analysis tool which will help others to also submit bad patches.
G: That’s worse. You see how that’s worse, right?
A: That accusation is SLANDER!!!
G: That’s it. We’re done here.
posted by schmod at 3:48 PM on April 22, 2021 [36 favorites]


In my experience with IRBs and human subjects research in U.S. academia, proposed research that may involve people can be submitted for potential oversight with the expectation that it will be exempt from review or determined to not involve people. If research clearly involves people, there are specific criteria that are usually applied to determine if the research is exempt from review ("exempt"), requires review by just one member of the IRB ("expedited review") or requires review by the full IRB ("full review"). The specific criteria used by this IRB to determine if a proposal is exempt are in this publicly available worksheet; they appear to be pretty common criteria used by many institutions. At the institutions at which I have worked, the determination of whether a project is exempt, requires review (by one member or the full board), or isn't human subjects research at all is delegated to a staff member who is not a member of the board. So the determination that this project was exempt from review was likely made by one person.

Critically, this entire process relies on the researcher(s) being honest and providing all appropriate materials. So I would really like to know how these researchers described their project to their IRB. More specifically, it seems important to know if they were clear about the role that the Linux code maintainers - people - play in their experiment or if they focused only or primarily on technical, programmatic aspects of it.

The assertion that the board could not review this proposal because they lack sufficient expertise is not an excuse that anyone should buy. They are required to seek additional expertise to review proposals for which they do not themselves have sufficient or appropriate expertise.
posted by ElKevbo at 3:58 PM on April 22, 2021 [22 favorites]


>In general, I assume that a malign programmer targets software with a large and probably general distribution thus to spread their infection effectively. ... My other assumption is that somebody else is looking at any submitted code to make sure it’s safe. But comments here suggest that that is unlikely.
Your first assumption is wrong because incompetence is a bigger threat than malice when it comes to software, and malicious actors can't just sneak failing code into volunteer-run open source projects; they have to build a reputation to become part of the community committing code to the tree. On your second assumption: the Linux kernel tags each set of code changes with Reviewed-by and Acked-by lines from the people responsible for the sub-system it changes, and other projects collect change-sets on contributors' code trees matched to "pull requests" that go through review before the code is pulled into a project's central store. Rest assured that state-level actors have tried to taint significant projects, but none to the extent of SolarWinds, Active Directory and Exchange (to pick three from the last 6 months).
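
To make that concrete, here's the shape of the review trail on a typical kernel patch (the subject line and the names are invented for illustration; the trailer format is real):

  foo: fix use-after-free in foo_remove()

  Release the timer before freeing the device state, not after.

  Signed-off-by: Jane Developer <jane@example.com>
  Reviewed-by: Sam Reviewer <sam@example.com>
  Acked-by: Alex Maintainer <alex@example.com>

Each trailer is a named person putting their reputation behind the change, which is exactly the trust these researchers were gaming.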

It is a maxim widely believed in the software industry that it's easier to hide incompetence in proprietary closed-source products.
posted by k3ninho at 4:00 PM on April 22, 2021 [8 favorites]


Even if you think the research had value, and even if you think it was ethically OK, the Linux kernel developers are still perfectly justified to say "we aren't interested in participating in your experiments anymore". There's no obligation on their part to keep giving the University of Minnesota opportunities to trick them into experiments.
posted by Pyry at 4:11 PM on April 22, 2021 [21 favorites]


Luckily, release managers and code reviewers aren't considered 'people,' so it was fine with the IRB.
posted by kaibutsu at 4:13 PM on April 22, 2021 [2 favorites]


I haven't heard anyone but you claiming that '"many eyes" eliminates all bugs' is a thing. The so-called "Linus' Law" is, "Given enough eyeballs, all bugs are shallow."

The optimal number of eyeballs is seven. We know this from empirical research; once code has been reviewed by three people with domain expertise, odds that the fourth will find a new error are extremely low. Odds that a fifth reviewer will find anything are negligible.
posted by mhoye at 4:20 PM on April 22, 2021 [7 favorites]


Also, ESR is a turnip.
posted by mhoye at 4:20 PM on April 22, 2021 [12 favorites]


mhoye, wait, you're saying there have been experiments on code reviewers' efficacy at finding bugs? That did not involve risks to millions of people by tainting critical software? Tell me^WUMN more... ;)
posted by joeyh at 4:37 PM on April 22, 2021 [1 favorite]


So I would really like to know how these researchers described their project to their IRB. More specifically, it seems important to know if they were clear about the role that the Linux code maintainers - people - play in their experiment or if they focused only or primarily on technical, programmatic aspects of it.

I'd venture that their description looked to the IRB reviewers like "rhubarb rhubarb Linux kernel rhubarb rhubarb source repository rhubarb rhubarb identity verification" and they didn't see references to actual human beings there.

There's a good argument that the review standards are outdated. You don't have to interact with any specific people to have a real adverse effect on human welfare anymore.
posted by jackbishop at 4:55 PM on April 22, 2021 [11 favorites]


if fuck_around:
  find_out(UMN)
posted by haileris23 at 5:00 PM on April 22, 2021 [27 favorites]


Two thoughts:

a) Ethics should be considered in terms of "public impact" rather than just right or wrong.

Using open-source software such as the Linux kernel for unethical hacking, even if it's for academia, can almost be compared to using a section of the US population for secret experiments without telling them, like the Tuskegee Syphilis Study, even if the vulnerabilities were only "academic".

b) Trust has been lost, and likely will NEVER return. Confession and reform are not enough. The Open-Source movement is dependent on trust, and the idea that someone can just unilaterally decide to use it for experimentation completely upends the idea of openness. Furthermore, trust in the ENTIRE open-source repo may be jeopardized due to one team's ambitions.

Whoever approved their project needs to suffer more dire consequences than mere academic discipline.
posted by kschang at 6:01 PM on April 22, 2021 [4 favorites]


What year is this, 1998?

This is exactly the kind of bad faith research verging on espionage I would have expected some Microsoft- or SCO-backed industry association to have funded in the late 1990s, early 2000s. Look! Open source projects are vulnerable to bad actors who abuse community trust and deliberately contribute bad code! How can businesses ever trust anything written by a group of hippies collaborating over the Internet? That this kind of shenanigans could be pulled in 2021 by someone in academia is just reprehensible.

What were they trying to prove, and whose idea was it to pursue this activity?
posted by RonButNotStupid at 6:31 PM on April 22, 2021 [6 favorites]


If you thought the SolarWinds hack or MeDoc was bad, introducing exploits into Linux through the update mechanism would be catastrophic. Open Source projects likely need to get additional code review, just as SolarWinds is teaching the commercial community.

It's likely (or I hope) there will be some kind of legislation or policy to address this. How can or should policy address Open Source as effectively as commercial software?
posted by geekP1ng at 7:12 PM on April 22, 2021


the Tuskegee Syphilis Study

I know you didn't mean anything by it, but I think that could be a pretty offensive comparison to some people. This is nothing compared to that. And I do think this is something.
posted by biogeo at 7:31 PM on April 22, 2021 [19 favorites]


On this morning's SANS Internet Stormcast podcast, I could swear that Johannes said that not only was the "bad" code reverted, but all previous UMN code, too -- which must be, like, a lot of code.

And if they are now blocking future contributions from @umn.edu addresses, that's going to sweep up a TON of people. They invented Gopher, and have been working pretty steadily since then.

This would be an amazingly bad pissing-in-the-pool by these researchers to interfere in the work of untold other programmers. Christ, what a bunch of assholes.
posted by wenestvedt at 7:46 PM on April 22, 2021 [7 favorites]


Time for primary sources. Here is the email thread. I can understand some loss of patience given this instance of them screaming at people for acting in bad faith _while lying to them_:

On Wed, Apr 21, 2021 at 02:56:27AM -0500, Aditya Pakki wrote:

Greg,

I respectfully ask you to cease and desist from making wild accusations that are bordering on slander.

These patches were sent as part of a new static analyzer that I wrote and it's sensitivity is obviously not great. I sent patches on the hopes to get feedback. We are not experts in the linux kernel and repeatedly making these statements is disgusting to hear.

Obviously, it is a wrong step but your preconceived biases are so strong that you make allegations without merit nor give us any benefit of doubt.

I will not be sending any more patches due to the attitude that is not only unwelcome but also intimidating to newbies and non experts.

posted by dum spiro spero at 8:16 PM on April 22, 2021 [5 favorites]


I can understand some loss of patience given this instance of them screaming at people for acting in bad faith _while lying to them_

It's not yet proven Aditya is lying. But the previous "hypocrite" submissions have folks on edge. Hopefully in a few days a disinterested party will have a timeline of events, and maybe even some meaningful documents released by the UMN administration. In the meantime, Ted Ts'o has a summary over on HN.
posted by pwnguin at 8:57 PM on April 22, 2021 [4 favorites]


Seems junior tech bros are still tech bros.
posted by evilDoug at 9:19 PM on April 22, 2021 [2 favorites]


Our community welcomes developers who wish to help and enhance Linux. That is NOT what you are attempting to do here, so please do not try to frame it that way.

Our community does not appreciate being experimented on, and being "tested" by submitting known patches that are either do nothing on purpose, or introduce bugs on purpose. If you wish to do work like this, I suggest you find a different community to run your experiments on, you are not welcome here.


Anyone who has contributed to any OSS project knows that it is a mostly, pretty much entirely thankless job. I cannot see any sensible reason to tolerate contributions that are not submitted in good faith, let alone those sent surreptitiously and which make the larger project worse. Good riddance.
posted by They sucked his brains out! at 9:33 PM on April 22, 2021 [23 favorites]


It's particularly unethical that the grad student was playing victim about feeling unwelcome as a newbie. This does a disservice to the maintainers as well as actual newbies, and erodes trust in the community for selfish reasons.

Here's the other part showing that they were onto Aditya.

On Wed, Apr 21, 2021 at 07:43:03AM +0200, Greg KH wrote:
> On Wed, Apr 21, 2021 at 08:10:25AM +0300, Leon Romanovsky wrote:
> > On Tue, Apr 20, 2021 at 01:10:08PM -0400, J. Bruce Fields wrote:
> > > On Tue, Apr 20, 2021 at 09:15:23AM +0200, Greg KH wrote:
> > > > If you look at the code, this is impossible to have happen.
> > > >
> > > > Please stop submitting known-invalid patches. Your professor is playing
> > > > around with the review process in order to achieve a paper in some
> > > > strange and bizarre way.
> > > >
> > > > This is not ok, it is wasting our time, and we will have to report this,
> > > > AGAIN, to your university...
> > >
> > > What's the story here?
> >
> > Those commits are part of the following research:
> > https://github.com/QiushiWu/QiushiWu.github.io/blob/main/papers/OpenSourceInsecurity.pdf
> >
> > They introduce kernel bugs on purpose. Yesterday, I took a look on 4
> > accepted patches from Aditya and 3 of them added various severity security
> > "holes".
>
> All contributions by this group of people need to be reverted, if they
> have not been done so already, as what they are doing is intentional
> malicious behavior and is not acceptable and totally unethical. I'll
> look at it after lunch unless someone else wants to do it...
posted by dum spiro spero at 9:38 PM on April 22, 2021 [7 favorites]


yesterday I immediately agreed with their ban, but reading the quotes, the maintainer comes across as kind of power tripping and angry, and failing to regulate his own emotions about a complex issue.

Tone-policing comments made by experts on issues of fundamental importance to their areas of expertise is almost always unhelpful.
posted by flabdablet at 10:54 PM on April 22, 2021 [16 favorites]


.

For the grad student's career, maybe.
posted by darkstar at 11:07 PM on April 22, 2021


Lol, unhelpful how? An angry expert is one of the worst combinations ever to be making decisions. It's not tone policing to point out that a Linux kernel maintainer or leader occupies a position of power, and people in positions of power acting through anger rather than owning it and using empathy to resolve a conflict by actually communicating with the other person (which is not a big part of white-dominated tech culture, or American culture in general), and so forth, is indeed power-tripping. The emails literally show the person taking it upon himself to decide what counts as contributing to Linux and what doesn't.

Reading the actual paper, section VI.A actually sounds kind of reasonable, and not aggressive at all. They say that nothing gets committed, and human interaction/effort is minimal. And elsewhere in the paper they say they've identified thousands of bugs, so in context three use-after-free (UAF) bugs that don't even make it into the actual system really aren't as terrifying as the other side made it seem in the emails. It's an upside-down version of the Linux people's framing of the conflict, in the emails being quoted everywhere.
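
For anyone who doesn't live in C: a use-after-free is when code keeps touching memory it has already handed back to the allocator. A minimal illustrative sketch (mine, not from the paper):

  #include <stdlib.h>
  #include <string.h>

  struct session { char name[32]; };

  int main(void) {
      struct session *s = malloc(sizeof *s);
      if (!s)
          return 1;
      strcpy(s->name, "demo");
      free(s);          /* memory handed back to the allocator... */
      s->name[0] = 'X'; /* ...but still written through: the use-after-free */
      return 0;
  }

In the kernel the same pattern turns up around kfree() in teardown paths, where the freed slot can be reallocated for something attacker-controlled; whether three such bugs that never reached a release are terrifying is exactly what the two sides disagree about.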
posted by polymodus at 11:09 PM on April 22, 2021 [1 favorite]


That Ted Ts'o post really put the reaction into perspective. The same professor was caught doing the same shit last year, also without IRB approval, and now is doing it again?

Yeah, ban the university from contributing until we see a real CS IRB policy, at least.
posted by ryanrs at 11:11 PM on April 22, 2021 [9 favorites]


The emails literally show the person taking it upon himself to decide what counts as contributing and what doesn't.

I mean, that's literally his job, right?
posted by ryanrs at 11:13 PM on April 22, 2021 [31 favorites]


And to be super clear, I'm not saying there aren't questions of consent, or that this experiment isn't obviously causing distress. But the ethical debate, and the comparative precedents for suspending consent in research ethics, are not as clear, and require actual investigation, accountability, etc.
posted by polymodus at 11:15 PM on April 22, 2021


(For context, most of Australia's country fire services are made up of volunteers.)

I've been researching whether the fire services have well-maintained equipment, and good mechanisms for evaluating equipment upgrades.

My method is to volunteer as a truck mechanic, randomly cut water hoses and brake lines, and wait to see whether the other volunteers approve my work or fix it.

I'm very ethical. The IRB has told me, on reading the write-up of the research already done in the past year, that this doesn't count as research on human subjects, because it's the fire services-as-systems themselves that are being investigated, and not the humans that those services are made of. Therefore, it's all ethical.

I'm also totally on top of the work, ready to repair any fault I introduce that goes uninspected or wrongly approved. I'd never be a saboteur. They're small cuts and nicks anyway, and in very obvious sections of the hoses and brake lines. The other volunteer mechanics should be able to catch them before the trucks go out.

When people from my department did this in the past, the fireys did not like it. But research must go on, so I'm researching more.

The Chief Fire Equipment Inspector's decision to ban any and all volunteer work from my school and to re-review all past work is discrimination, pure and simple. I also did not like his tone. He should know better, but we all know firefighters are a close-knit community, often disdainful of newcomers.

Who does the Chief Fire Equipment Inspector think he is to judge me and my work anyway? Why should I or anyone else recognise his authority?

Also, how dare he accuse me of introducing these faults on purpose, just because I stated in my published papers that that's what I was doing? The latest fault, the one that triggered the ban we're now discussing, is not a deliberate one, but a weirdness in an experimental firehose nozzle of my own design that I installed in good faith, and man, I'm new and inexperienced, anyone can make mistakes!
posted by kandinski at 12:38 AM on April 23, 2021 [24 favorites]


Who does the Chief Fire Equipment Inspector think he is to judge me and my work anyway? Why should I or anyone else recognise his authority?

Because in this case the leader explicitly said in the email exchange that the Linux community welcomes everyone who wishes to contribute to the project. There's a consistency/hypocrisy problem there, because every participant indeed deserves that claim. In the paper the PIs have basically argued that demonstrating a basic structural flaw is a broader contribution than any incremental improvement to a body of code. A leader failing to acknowledge this in their replies should invite criticism as well. Both sides here can be making mistakes.

There's a bit of a "Sokal for thee, not for me" reaction. A different leader might've accurately recognized the researchers' attempts as coming not from malice but from a deep naivete (because of the potential reckless consequences of invasive experimental code without methodological accountability making its way into real world software) and rather than use authoritarian, hierarchical methods of punishing transgressors using old school, zero tolerance tactics, especially given Linux's political origins, they could've acted like actual liberals and leftists when dealing with a conflict.
posted by polymodus at 1:28 AM on April 23, 2021 [1 favorite]


A different leader might've accurately recognized the researchers' attempts as coming not from malice but from a deep naivete

You don't think they used up their naivete excuse the first time they did this? It's a kind of one-time-use thing.
posted by bashing rocks together at 1:41 AM on April 23, 2021 [18 favorites]


mhoye: The optimal number of eyeballs is seven . . .

20 years ago, just after Bill Clinton and Tony Blair put the finishing touches to the Human Genome and released the data to the public, we discovered a cluster of small, immunologically important genes in the sequence of chromosome 8. As evolutionary biologists we wondered whether these genes were uniquely human or found in other primates, and mused "if only we had 10cc of chimpanzee blood we'd have a more interesting [read Nature] paper". I knew the director of the local zoo and asked if, the next time the vet needed a blood sample from one of the chimpanzees, we could have the rest of the vial. The director was amazed at our "what harm could that be?" naivety and said that IF we filled in a bunch of forms, AND IF we ran the idea past our own Ethics Board, THEN the zoo _might_ give us some chimpanzee poo on the end of a stick.

Some of those seven eyeballs need to be outside the echo-chamber. Research can get to be all-consuming in its focus on the goals. In the competitive rush to get there firstest with the mostest, the implications of the research become a distraction.
posted by BobTheScientist at 2:15 AM on April 23, 2021 [8 favorites]


I think this is just an unrelated novice student getting shat on for the sins of their professor and for having a lame patch.

Indeed. And they should take it up with that professor, and the institution that backed him.
posted by Dysk at 2:22 AM on April 23, 2021 [1 favorite]


floam: > > I think this is just an unrelated novice student getting shat on for the sins of their professor and for having a lame patch.

If a fourth year PhD student can forget to state "From our new static analyzer, for review only, DO NOT MERGE" on an automatically generated patch, that's a rookie mistake, but...

Dysk: > Indeed. And they should take it up with that professor, and the institution that backed him.

If their supervisor forgot to warn them that the department had a bit of a history with the kernel and to be on their best behaviour, that's a considerable error of judgment.

Paraphrasing a lawyer friend of mine: if the supervisor hated the student and wanted to set him up for a bollocking by a kernel maintainer, there's nothing he'd have to do differently. Of course that's not the case. It's obliviousness all around.

Polymodus: > There's a bit of Sokal for thee, not for me reaction. A different leader might've accurately recognized the researchers' attempts as coming not from malice but from a deep naivete (because of the potential reckless consequences of invasive experimental code without methodological accountability making its way into real world software) and rather than use authoritarian, hierarchical methods of punishing transgressors using old school, zero tolerance tactics, especially given Linux's political origins, they could've acted like actual liberals and leftists when dealing with a conflict.

This is not punishment, this is self-protection. The kernel devs didn't launch a DoS attack on UMN, nor send goons to beat up the researcher. It's also not authoritarian or hierarchical: see how other volunteers jumped on the task of verifying the suspect patches; nobody asked for permission to re-submit or un-revert, and I assure you that they would challenge the decision if they thought it was wrong, and would not be punished by the "hierarchy" for it. Everyone on that list is a volunteer, choosing the tasks they want to work on. Admittedly, sometimes on behalf of their employer, but GKH is not their manager nor their boss. Volunteers work on the kernel, not for the kernel.

Job one of the kernel maintainers is protecting the stable tree. Job two is being welcoming to developers, but not at the expense of job one. And if there was ever a case for zero tolerance and an overabundance of caution (which is what reverting all changes from UMN is, pending further review), this is it.

For all that some Linux kernel devs have a (in some cases, deservedly) bad reputation for not being friendly to newbies, I think it's fine if they get a reputation for being fierce with people treating the Linux kernel as a trial-and-error playground. Essential infrastructure, and the volunteer project that runs it, is not where you go fucking about and finding out.

Using your aggression metaphors, it's true that the ban is also a shot across UMN's bow, and apparently a much needed one. The department heads didn't know about the "hypocrite commits" shitshow, and now they do.

If it turns out that GKH has to apologise for assuming stupid malice when he should only have assumed naive incompetence, I don't think that's a bad outcome in this case.
posted by kandinski at 3:16 AM on April 23, 2021 [22 favorites]


But the ethical debate, and the comparative precedents for suspending consent in research ethics, are not as clear, and require actual investigation, accountability, etc.

If you're experimenting on someone without their informed consent and when they find out they say "I wish you hadn't done that," that should spark internal horror and revulsion and cause a sharp change to IRB rules and behavior to ensure that that type of unwelcome research never occurs again. It won't, but it should.
posted by GCU Sweet and Full of Grace at 4:00 AM on April 23, 2021 [17 favorites]


dum spiro spero: "> All contributions by this group of people need to be reverted, if they
> have not been done so already, as what they are doing is intentional
> malicious behavior and is not acceptable and totally unethical. I'll
> look at it after lunch unless someone else wants to do it...
"

I love this.
posted by chavenet at 4:22 AM on April 23, 2021 [6 favorites]


A different leader might've accurately recognized the researchers' attempts as coming not from malice but from a deep naivete (because of the potential reckless consequences of invasive experimental code without methodological accountability making its way into real world software) and rather than use authoritarian, hierarchical methods of punishing transgressors using old school, zero tolerance tactics, especially given Linux's political origins, they could've acted like actual liberals and leftists when dealing with a conflict.

The Morris Worm was supposedly created as a proof of concept and defended as a naive attempt at highlighting security vulnerabilities of the early Internet. It also just happened to be released in a manner that hid its origin, and because of a coding "mistake", it replicated uncontrollably and disabled multiple systems, causing hundreds of thousands of dollars worth of damage.

Morris was tried, convicted, and sentenced to 400 hours of community service and fined just over ten thousand dollars for his reckless experiment to test the potential reckless consequences of lax network security.
posted by RonButNotStupid at 4:46 AM on April 23, 2021 [7 favorites]


A different leader might've accurately recognized the researchers' attempts as coming not from malice but from a deep naivete (because of the potential reckless consequences of invasive experimental code without methodological accountability making its way into real world software) and rather than use authoritarian, hierarchical methods of punishing transgressors using old school, zero tolerance tactics, especially given Linux's political origins, they could've acted like actual liberals and leftists when dealing with a conflict.

While the Linux kernel developers would do well to improve how welcoming and accessible their community is, this paragraph is nonsense. Partly because it relies on a very particular reading of a cherrypicked sentence to make this quasi-political word-salad of a non-point, but mostly because it's made in ignorance of how software development in general, and open community development in particular, actually works.

Once you're above the toy-project threshold, software is the product of an organization, not its individual developers, the sum of the organization's goals, priorities and allocated resources as they manifest themselves across the software development lifecycle and in the shape of that development lifecycle. So the consequences of a bad act like this aren't, and cannot be, "we don't trust these students or this professor". The only reasonable decision in that case is "we can no longer trust this institution."

As has been pointed out upthread, you can't keep going back for another drink at the naivete well, and "welcoming contributions" in no way means that anyone is obligated to take anyone who's fallen face first onto a keyboard and pushed the result into version control seriously. People's time has value. Sure, somebody should be responsible for the growth and development of these potential future kernel contributors but good news: they're in a school. A grad school! Where they can learn things, I'm sure including how to participate effectively and honestly in these processes, because learning things is their whole job! But that's not what's happening here, because that's not what the institution has sent them to do.

Hence, out of necessity: *plonk*
posted by mhoye at 5:19 AM on April 23, 2021 [27 favorites]


I'd argue that the kernel isn't an organization, it's a community of practice. This is important for understanding why the ban had to be put in place. Big projects can be collaborations; they don't have to come from monolithic entities. To my eyes the kernel has been, since Linus' first decision to accept patches, a collaborative project of which he is the coordinator, but not the owner. There are multiple corporate, private, academic and even personal contributors, all with their own resources and autonomy. Responsibility means a different thing for the maintainers than "do your job or be fired/lose your contract".

The authors of this episode depended on that being the case, and abused the community for their own purposes. A community response in return, in addition to any technical one, is therefore not only warranted but necessary. Yes, it's sweeping and broad. Sometimes that's what's needed to get the attention of TPTB in a big organization like a university. A professor would be able to manage the reputational damage of a small, targeted measure; they are unable to do so for a larger, more comprehensive response like this.

The department heads didn't know about the "hypocrite commits" shitshow, and now they do.

This is the whole point of the ban. The university management is the audience for a public ban of the university. What the department does next will decide if this ban stays forever or is lifted after sufficient actions are taken. It's no longer up to the professor.

This is about the most drastic thing a community can realistically do, ban participation. It's a social engineering response to the bad actor's institution. It's therefore entirely appropriate.
posted by bonehead at 5:45 AM on April 23, 2021 [18 favorites]


As someone who has done actual, IRB-approved research on online communities: there's literature going back 25 years about the ethics of human subjects research on the Internet. Some of which *explicitly* mentions open source communities. This was negligence, provincial cluelessness, and I suspect no small amount of academic territorialism on the part of the IRB. They failed at their jobs, and there should be consequences for both them and the institutional flaws that led to this error.

If they did in fact get IRB approval before starting their project, I'm a bit less outraged at the researchers than some other folks. This is rather tame compared to some of the egregiously amoral bullshit I've seen people try to get away with. Foolish and arrogant academics under publication pressure are going to do their thing which is why we have institutional guardrails.

Whether he realized it or not, by targeting the institution GKH is definitely applying the pressure and the punishment where it is due. Humiliation is literally the only way to get an academic administration to do anything to correct their toxic incompetence.
posted by xthlc at 6:02 AM on April 23, 2021 [22 favorites]


Pen testing a social engineering vector is regular practice. I think the difference here is that the maintainers were the unaware target (usually the asset owner is aware of the test). I think an experiment like this can be useful, but needs to have some guidelines for responsible disclosure. Isn’t it fair to say the researchers exposed a real vulnerability?
posted by simra at 6:06 AM on April 23, 2021 [1 favorite]


they could've acted like actual liberals and leftists when dealing with a conflict.


Actual liberals and leftists famously known for knowing how to swiftly identify and deal with bad faith infiltrators.
posted by Space Coyote at 6:10 AM on April 23, 2021 [11 favorites]


Pen testing a social engineering vector is regular practice.

The difference between pen testing and breaking in or sabotage is permission.
posted by Dysk at 6:41 AM on April 23, 2021 [11 favorites]



That Ted Ts'o post really put the reaction into perspective. The same professor was caught doing the same shit last year, also without IRB approval, and now is doing it again?

Yeah, ban the university from contributing until we see a real CS IRB policy, at least.


Not a professor, a grad student. That's a huge difference.

There is real IRB policy. The issue is that nothing was submitted.

I 100% agree this was ridiculous behavior on the part of this grad student, but there's an amazing amount of spurious discussion surrounding this story, including in this thread.

Banning the entire University of Minnesota from submitting patches certainly got everyone's attention here (I am a developer at the U of MN), and is certainly on brand for the Linux kernel community. I don't work in the CS department, nor even in their college; I don't even know where they are located on campus. But by all means, ban us all from contributing. I can blissfully check that box off as "something I'll never do".
posted by mcstayinskool at 6:51 AM on April 23, 2021 [2 favorites]


Pen testing a social engineering vector is regular practice. I think the difference here is that the maintainers were the unaware target (usually the asset owner is aware of the test).

Notably, having permission from the asset owner ahead of time is what makes it not a crime. (Edit: Whoops, Dysk got there first. Jinx)
posted by mhoye at 7:10 AM on April 23, 2021 [3 favorites]


That's not how I explain it to companies that pay me bounties. They seem to agree.

Anyone with a bug bounty program is implicitly granting permission to find bugs, not manufacture them.
posted by Dysk at 7:18 AM on April 23, 2021 [16 favorites]


You're not employed by them and submitting known broken code, and then having them pay you for finding those bugs?

I'm not sure what relevance a bug bounty has to the behaviour here.
posted by sagc at 8:10 AM on April 23, 2021 [3 favorites]


Floam, are you a university professor doing research and therefore subject to CFR and IRB? If not, then that is a big and important difference in context...
posted by soylent00FF00 at 8:38 AM on April 23, 2021 [5 favorites]


There's also a difference between finding vulnerabilities, and exploiting them. If you're doing the latter without permission, that is definitely not pen testing. The difference between breaking in (not merely trying door handles) and pen testing is permission.
posted by Dysk at 8:50 AM on April 23, 2021 [3 favorites]


I have wondered about this for comp sci and 'tech' in general. I mean, Microsoft is famous for putting out products that they know are buggy. I cannot imagine something like that happening with regular stuff (like cars, etc.) without real-life legal repercussions. Ralph Nader basically made his name on this.

This is another reason why comp sci really needs to think about ethics seriously. I come at this as an 'old-time' engineer. If you are a PE in one of these disciplines (ME, civil, electrical, or chemical), getting PE certification has a real ethics component to it.

I cannot imagine one of these disciplines even thinking of doing something like this to try to fuck up something in general use.
posted by indianbadger1 at 8:56 AM on April 23, 2021 [1 favorite]


Ethics in software engineering. I remember reading an essay back in the late '70s that argued that software engineers should be licensed just like the engineers above. The argument was based on risk: if a civil engineer screws up, a bridge can fall down. If a mechanical engineer screws up, a rear axle can fall off. Software engineers, in general, seem to be unaware of the potential risks associated with their code. Oh, so the word processor screwed up the print format? Big deal. The author of the article listed a number of software bugs that either killed people or caused severe economic damage. Note: late '70s. We are in the early 2020s and still have no licensing. Do they teach ethics in CS departments?
posted by njohnson23 at 9:08 AM on April 23, 2021 [8 favorites]


Do they teach ethics in CS departments?

They do, it's part of ABET requirements. The question today seems to be whether they are qualified to teach it.
posted by pwnguin at 9:24 AM on April 23, 2021 [8 favorites]


It occurs to me that now that they have gotten an IRB exemption and UMN is banned anyway, they can write another paper showing "vulnerabilities" in the ethics guidelines and review process in academia. Narcissistic, self-justifying researchers can never fail; they can only be failed.


The Morris Worm came up. It's ancient history now, but I think that was the one that drew the analogy in The Cuckoo's Egg of someone going around a neighborhood and letting the air out of everyone's tires. You could argue there's no "harm"--no permanent damage was done, no one actually drove on the street with a flat, you didn't steal any property. But you'd be full of shit. (I'm paraphrasing.)
posted by mark k at 11:04 AM on April 23, 2021 [4 favorites]


This basically illustrates the differences between whitehat and blackhat hackers.

Whitehats are there with permission.

Blackhats are not.

Even if they are roughly doing the same thing.
posted by kschang at 12:10 PM on April 23, 2021


We are in the early 2020’s and still no licensing.

Or product liability, although there are cracks in that dam.
posted by snuffleupagus at 6:32 AM on April 25, 2021


Well so here's the thing about all those fancy certified civil engineers, ChemEs, MEs, and PEs: they all use shitty, buggy computer software to do their engineering. And a lot of professional CAD software is buggy, crashy garbage (less so now, but back in the day some of it was pretty bad).

But these professional users have enough training and experience to know when the software is fucking up, and still use it to build safe, efficient designs, most of which would be impossible to design without computers.

Which brings us to the obvious conclusion which I have always known in my heart but feared to say: we need to certify and license the users!
posted by ryanrs at 7:53 PM on April 25, 2021 [4 favorites]


Sure. People who design cars need to be trained, licensed mechanical engineers. People who drive cars need to be trained, licensed drivers. If there's a tool that has the potential to cause significant harm to others if it malfunctions or is operated incorrectly, it's common sense that both the designers and operators of the tool should be trained and licensed appropriately. Why not the same thing for software engineering?
posted by biogeo at 8:48 PM on April 25, 2021


Why not the same thing for software engineering?

Let's imagine for a moment that you got your wish: all software procured by government must be designed / inspected and signed off on by a Professional Engineer.

1. Is there anyone competent to do so? Beyond the exams of dubious relevance, the apprentice model requires many years' service under an existing engineer. I know of a few ambitious PEs who are also faculty in Texas, but we need some method of bootstrapping.
2. Among those who are perhaps qualified, who would actually sign on for this? Is it possible to confirm that a million-line-of-code product is error-free? Is 1 MLoC even in the ballpark for this particular product? Would you be personally liable for false convictions made on the basis of an error-free assumption? How do you even price an Errors & Omissions insurance policy for something where false convictions are a possible outcome?
3. Where is the line between software and hardware drawn -- if the system produces bad results due to faulty memory chips, is that your fault? What about the JVM, the OS, the microcode, network switches, puppet manifests, BIOS settings, browser settings, and other configurations between your software and your user? Is there a 'known safe' version of Windows I don't know about? The seL4 microkernel has been proven to meet its specification, but I know of nobody who can say whether the specifications were themselves correct or sufficient.
4. Can the government afford it? seL4 cost about $400 per line of code to verify, so projecting that out, the cost of verification alone for a 1 MLoC system is $400 million (see the arithmetic after this list). If that's the process we need in order to sign off on software, society may need to consider making do with fewer government services, or, like, restoring the draft in order to train people on TLA+ against their will.
5. Is the alternative safer? It seems likely that cars are more dangerous than walking, or even cycling. But is the software actually safer than the process it replaces -- safer than no software at all?
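
Making the back-of-envelope in point 4 explicit (taking the quoted seL4 figure at face value, and assuming verification cost scales roughly linearly with code size, which is probably optimistic):

    $400/LoC x 1,000,000 LoC = $400,000,000

And that's the verification step alone, before anyone even tries to price the E&O policy on top of it.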
posted by pwnguin at 2:43 PM on April 26, 2021 [2 favorites]


What seems to have happened is that some patches from UMN were "utter crap" and this raised suspicions that some people at UMN were still up to shenanigans. This patch would be an example:
https://lore.kernel.org/lkml/20210407000913.2207831-1-pakki001@umn.edu/
If you know C, the patch doesn't look right even without looking at the code it is patching. Al Viro commented: "Plainly put, the patch demonstrates either complete lack of understanding or somebody not acting in good faith." I've looked at the functions being patched, and I suspect the author doesn't understand how parameter passing works in C.
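For anyone who doesn't write C, here is a minimal sketch of the kind of parameter-passing confusion I mean (a hypothetical illustration of pass-by-value semantics, not the actual patch):

    #include <stdlib.h>

    /* C passes pointers by value: reassigning the parameter inside
     * the callee changes only the callee's local copy. */
    void broken_free(char *buf)
    {
        free(buf);
        buf = NULL;  /* no effect on the caller; its pointer now dangles */
    }

    /* To clear the caller's pointer, the callee needs a pointer to
     * the pointer. */
    void clearing_free(char **buf)
    {
        free(*buf);
        *buf = NULL;  /* the caller's pointer really is NULL afterwards */
    }

    int main(void)
    {
        char *p = malloc(16);
        broken_free(p);     /* p is now dangling, not NULL */
        p = malloc(16);
        clearing_free(&p);  /* p == NULL here */
        return 0;
    }

A patch written under the first mental model can look like a harmless cleanup while actually leaving (or introducing) a use-after-free.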
Al Viro has now concluded the patches were not in bad faith. From a recent LWN comment: "results of review so far - nothing dramatic whatsoever. Nothing plausibly malicious, no wrongbots, overall more or less usual quality."
posted by mscibing at 4:35 PM on April 26, 2021 [2 favorites]


It isn't quite the arm's-length analysis one might hope for, given that the author is on the Linux Foundation Technical Advisory Board, but LWN has an update. A few gems:
Five patches were submitted overall from two sock-puppet accounts, but one of those was an ordinary bug fix that was sent from the wrong account by mistake. Of the remaining four, one of them was an attempt to insert a bug that was, itself, buggy, so the patch was actually valid; the other three (1, 2, 3) contained real bugs.
...
Perhaps that is a more interesting outcome than the one that the original "hypocrite commit" researchers were looking for. They failed in their effort to deliberately insert bugs, but were able to inadvertently add dozens of them.
...
One final lesson that one might be tempted to take is that the kernel is running a terrible risk of malicious patches inserted by actors with rather more skill and resources than the UMN researchers have shown. That could be, but the simple truth of the matter is that regular kernel developers continue to insert bugs at such a rate that there should be little need for malicious actors to add more.
posted by pwnguin at 9:22 PM on April 29, 2021 [3 favorites]


That’s priceless
posted by bq at 8:56 AM on April 30, 2021 [1 favorite]


Don’t regulate software like buildings. Regulate software like tools.

Ban waivers of the warranties of merchantability and fitness for purpose in EULAs and TOS.

Impose liability for warning defects.

See what happens to code quality and documentation. And security.
posted by snuffleupagus at 9:14 PM on May 9, 2021


Ban waivers of the warranties of merchantability and fitness for purpose in EULAs and TOS.

Somehow, we have wandered from 'ethical research of online software development communities' to 'ethical practices of software developers' to a proposal to 'Ban waivers of the warranties of merchantability,' like the clause that enables online software development communities.

I'm really struggling to see how giving something away without promising that it works is immoral or unethical. Especially when you also provide a means of inspecting the innermost workings. And like, would it be ethical in your framework to provide a patch to the kernel that fixes a bug but disclaims responsibility for the other million bugs hiding in the kernel?
posted by pwnguin at 3:59 PM on May 10, 2021 [2 favorites]


Oh, I thought we had moved beyond talking about just TFA, Linux and FOSS. These warranties are typically implied in products that are sold. I’d be fine with excepting FOSS for the reasons stated. And because you can look at open source code. Not true of “enterprise” software. Yet we permit enterprise vendors to disclaim all liability for defective products that can result in huge economic and social consequences when released into the stream of commerce.

I’ll stop before I start talking about the railroad era and the development of tort law.
posted by snuffleupagus at 8:06 PM on May 10, 2021

