You know what we need? More standards.
January 23, 2019 1:06 PM
How do you govern the development of artificial intelligence? MIT hosted a three-day gathering with the Organization for Economic Co-operation and Development (OECD) to help figure that out ahead of the group's release of AI governance guidelines. But academics aren't the only ones getting in on the act.
Microsoft's approach to AI
Designing AI to be trustworthy requires creating solutions that reflect ethical principles that are deeply rooted in important and timeless values.
Google: Engaging policy stakeholders on issues in AI governance
As with other technologies, there are new policy questions that arise with the use of AI, and governments and civil society groups worldwide have a key role to play in the AI governance discussion. In a white paper we’re publishing today, we outline five areas where government can work with civil society and AI practitioners to provide important guidance on responsible AI development and use: explainability standards, fairness appraisal, safety considerations, human-AI collaboration and liability frameworks.
European Union: Draft Ethics Guidelines for Trustworthy AI
This working document constitutes a draft of the AI Ethics Guidelines produced by the European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG) [...] After the consultation process is completed, the AI HLEG will present a final version of the Guidelines to the Commission in March 2019. The ambition is then to bring Europe's ethical approach to the global stage.
Preparing for the Future of Artificial Intelligence (Obama administration)
As a contribution toward preparing the United States for a future in which AI plays a growing role, this report surveys the current state of AI, its existing and potential applications, and the questions that are raised for society and public policy by progress in AI. The report also makes recommendations for specific further actions by Federal agencies and other actors. A companion document lays out a strategic plan for Federally-funded research and development in AI. Additionally, in the coming months, the Administration will release a follow-on report exploring in greater depth the effect of AI-driven automation on jobs and the economy.
The Public Voice: AI Universal Guidelines
We propose these Universal Guidelines to inform and improve the design and use of AI. The Guidelines are intended to maximize the benefits of AI, to minimize the risk, and to ensure the protection of human rights. These Guidelines should be incorporated into ethical standards, adopted in national law and international agreements, and built into the design of systems. We state clearly that the primary responsibility for AI systems must reside with those institutions that fund, develop, and deploy these systems.
the only exceptions to "actually it's really boring" are adversarial examples and deepfakes, which are/could be very scary and serious problems, but are also fascinating and can be accurately understood by people without technical background.
posted by vogon_poet at 3:38 PM on January 23, 2019
It's super-depressing that the dream was Asimovian positronic brains making decisions about whether they should supersede humanity and the reality is automated resume filtering tools that auto-reject anyone with the word "women’s" in their resume.
posted by GuyZero at 3:52 PM on January 23, 2019 [12 favorites]
GuyZero: I just sat through a seminar on that issue. There's a wicked simple solution that was presented for detecting bias in your training data - but, like all 'seminar' cases, it requires simplifying the problem in a way that made all the participants smile and say 'WOW, COMPANY X HAS SOLVED THE PROBLEM THAT MADE AMAZON SCRAP ITS TOOL'. So I treat it as a bullshit, super-specific case that should never actually be implemented - but hey, it was their sales pitch, and there are some basic ideas in it that have some merit.
What did they do? They built crappy classifiers to generalize 'WOMEN'S SPORTY MCSPORT SPORT' extracurricular activities into very general categories like TEAM_SPORT and INDIVIDUAL_SPORT (hint: even with Title IX, 'Football' generally means a man played, while 'Netball' or 'Volleyball' is indicative of a women's team). Then they modeled on counts of those categories instead of the actual text.
How did they know this was the problem? They regressed the names against sex, and once they BROKE that predictive relationship by switching to the classifiers and counts, they were able to bring those features back into the model (a rough sketch of the idea follows this comment). The variables no longer contributed as much to the likelihood of hire - but the model also weighted education and degree type *higher* than it did before...
Amazingly though, they didn't have a comment when the AI still went ahead and tried to chat up the candidates from Smith...
posted by Nanukthedog at 4:12 PM on January 23, 2019 [8 favorites]
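A minimal Python sketch of the pipeline described above. The category map, feature names, and toy data are hypothetical stand-ins, not the seminar's actual system:

from collections import Counter

# Hypothetical, hand-built 'crappy classifier': activity text -> coarse category.
ACTIVITY_CATEGORY = {
    "women's volleyball": "TEAM_SPORT",
    "netball": "TEAM_SPORT",
    "football": "TEAM_SPORT",
    "swimming": "INDIVIDUAL_SPORT",
    "track and field": "INDIVIDUAL_SPORT",
}

def featurize(activities):
    # Replace raw (often gendered) activity text with counts per coarse category.
    categories = [ACTIVITY_CATEGORY.get(a.lower(), "OTHER") for a in activities]
    return Counter(categories)

# The diagnostic in spirit: if sex is still predictable from the transformed
# features, the proxy hasn't been broken and they should stay out of the model.
print(featurize(["Women's Volleyball", "Track and Field"]))
# Counter({'TEAM_SPORT': 1, 'INDIVIDUAL_SPORT': 1})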
Did the people giving the seminar cite anything? I would be very interested in reading it.
The paper Fairness Through Awareness is one of the most influential in the area. The main paper is extremely technical (its core condition is sketched after this comment), but it's worth looking at the appendix, titled "A Catalog of Evils". That part is accessible to someone with no background and explains some of the ways algorithmic bias can come about.
posted by vogon_poet at 4:56 PM on January 23, 2019 [2 favorites]
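The heart of Fairness Through Awareness is a Lipschitz condition: a randomized classifier M is individually fair, relative to a task-specific similarity metric d, if the outcome distributions for any two individuals differ by no more than their distance, D(M(x), M(y)) <= d(x, y). A minimal sketch, with the metric and classifier left as hypothetical callables (the paper treats d as given, and acknowledges that defining it is the hard part):

import itertools

def total_variation(p, q):
    # Statistical distance between two outcome distributions
    # (dicts mapping outcome -> probability).
    outcomes = set(p) | set(q)
    return 0.5 * sum(abs(p.get(o, 0.0) - q.get(o, 0.0)) for o in outcomes)

def is_individually_fair(individuals, d, M):
    # Check the Lipschitz condition D(M(x), M(y)) <= d(x, y) for every pair,
    # where d is the task-specific metric and M maps an individual to a
    # distribution over outcomes.
    return all(
        total_variation(M(x), M(y)) <= d(x, y)
        for x, y in itertools.combinations(individuals, 2)
    )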
If we forgo the glitzy flash of "trustworthy AI" for boring factual descriptions of privacy liability and multiple-testing corrections, then we lose the hearts and minds to Yudkowsky, and I won't have it!
In all seriousness, though, I'm only just wading into a few different machine learning algorithms, primarily for unsupervised clustering, and I barely know what I'm doing, but I definitely know that these algorithms aren't about to eject the oxygen from my spaceship without some human doing something really stupid that they easily could have known better than to do.
posted by Made of Star Stuff at 8:02 PM on January 23, 2019
Statistical fairness is all well and good, but in advance of that I would advise another less glamorous task: interrogate the system's premise.
posted by waninggibbon at 8:03 PM on January 23, 2019 [2 favorites]
If you can't govern the situation producing the data that you're collecting, how in the absolute fuck could you hope to govern an oblique stats trick you apply against the data?
posted by thsmchnekllsfascists at 8:36 PM on January 23, 2019
A New Approach to Understanding How Machines Think - "Neural networks are famously incomprehensible — a computer can come up with a good answer, but not be able to explain what led to the conclusion. Been Kim is developing a 'translator for humans' so that we can understand when artificial intelligence breaks down."
Evolutionary algorithm outperforms deep-learning machines at video games - "It starts with code generated entirely at random. And not just one version of it, but lots of versions, sometimes hundreds of thousands of randomly assembled pieces of code... The evolved code has another advantage. Because it is small, it is easy to see how it works. By contrast, a well-known problem with deep-learning techniques is that it is sometimes impossible to know why they have made particular decisions, and this can have practical and legal ramifications." (A generic version of the evolutionary loop is sketched after this comment.)
also btw...
How artificial intelligence can help us make judges less biased - "Predicting which judges are likely to be biased could give them the opportunity to consider more carefully."
posted by kliuless at 9:14 PM on January 23, 2019 [2 favorites]
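The evolutionary approach in the second link boils down to an old, simple loop: generate random programs, score them, keep the fittest, mutate, repeat. A generic sketch, with the program representation, mutation operator, and fitness function left as hypothetical callables (the system the article describes is more elaborate than this):

import random

def evolve(random_program, mutate, fitness, pop_size=1000, generations=100):
    # Start from programs generated entirely at random.
    population = [random_program() for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by fitness (e.g. game score) and keep the top 10% as survivors.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 10]
        # Refill the population with mutated copies of random survivors.
        population = survivors + [
            mutate(random.choice(survivors))
            for _ in range(pop_size - len(survivors))
        ]
    return max(population, key=fitness)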
Designing AI to be trustworthy requires creating solutions that reflect ethical principles that are deeply rooted in important and timeless values.
That sounds hard to us. So what we do instead is, we just make one designed to suck up more and more PII from our users and cram more and more advertising down their throats on every release.
posted by flabdablet at 6:03 AM on January 24, 2019 [2 favorites]
Agreed. Whether you call it an expert system or a neural network or some other buzzwordy name, at this point we're still talking about algorithmic tools, and by all means there should be safeguards in place. I don't want my plane going down because of buggy software. Once we (if we) reach the stage where there is a thinking, feeling, self-aware non-human entity, all bets are off. So now that being exists and we're going to say, "Ok, now you run this city" (or pass the butter), and that's the end of it? To further crib from Rick and Morty: that sounds like slavery with extra steps. Why do you think the majority of sci-fi has the AIs "rebelling" against humanity? Maybe the truly ethical choice is to stop pursuing true AI.
posted by SonInLawOfSam at 8:43 AM on January 24, 2019
> I'm increasingly of the opinion that debate on this topic would be vastly improved by making it as boring as possible.
As an addendum from someone who works in the ML (read: applied statistics) field: all of the really scary eventualities that are presented as consequences of AI are 100% possible and happening presently with technologies that were boring a decade ago, but are just now permeating and reshaping businesses/governments/social interactions. All of the privacy and agency concerns don't require AI, they just require systems that are capable of insulating themselves from accountability, and humanity excels at producing those (and has for thousands of years).
> I would advise another less glamorous task: interrogate the system's premise.
This is one of the better one-liners I've heard re: automated decisions and fairness. This is the crux: statistical validity can only tell you that your biased data matches your biased system.
posted by cirgue at 10:55 AM on January 24, 2019 [3 favorites]
DeepMind's new Starcraft II agent (I refuse to call it an AI even though it actually does build off of years of traditional AI research) just absolutely slaughtered the 20th best player in the world. So, prepare for a lot of hysterical and stupid reporting accompanied by the Terminator movie poster.
posted by vogon_poet at 11:50 AM on January 24, 2019 [1 favorite]
> cirgue:
"all of the really scary eventualities that are presented as consequences of AI are 100% possible and happening presently with technologies that were boring a decade ago"
Yes, and even more boring than that. The "Acosta assault" video was created by speeding up a video sequence. False information spread virally on Twitter is simply text. Propaganda and fakery don't depend on sophisticated tools, just the intent to deceive and access to an audience that is willing to believe.
We must really ask ourselves who benefits when we assert:
1. The problem is unprecedented.
2. The problem is primarily technological in nature.
3. That technology is both necessary and inevitable.
4. It makes decisions unforeseeable to its creators.
5. Its regulation depends largely on more technology (e.g. deepfake detection algorithms) and voluntary guidelines for technology development.
posted by waninggibbon at 3:20 PM on January 24, 2019 [2 favorites]
"all of the really scary eventualities that are presented as consequences of AI are 100% possible and happening presently with technologies that were boring a decade ago"
Yes, and even more boring than that. The "Acosta assault" video was created by speeding up a video sequence. False information spread virally on Twitter is simply text. Propaganda and fakery don't depend on sophisticated tools, just the intent to deceive and access to an audience that is willing to believe.
We must really ask ourselves who benefits when we assert:
1. The problem is unprecedented.
2. The problem is primarily technological in nature.
3. That technology is both necessary and inevitable.
4. It makes decisions unforeseeable to its creators.
5. Its regulation depends largely on more technology (e.g. deepfake detection algorithms) and voluntary guidelines for technology development.
posted by waninggibbon at 3:20 PM on January 24, 2019 [2 favorites]
Deep learning algorithms have as much autonomy as linear regression, a skyscraper, a bridge, or a sandwich. Inability to make assurances about their performance ("fairness", out-of-sample accuracy, etc.) should be seen as a weakness and a potential liability, not an opening for "responsibility laundering".
posted by waninggibbon at 2:48 PM on January 23, 2019 [13 favorites]